Dataset columns and string length ranges:

| column | type | min length | max length |
| --- | --- | --- | --- |
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2301.12509
Constraining Barrow entropy-based Cosmology with power-law inflation
We study the inflationary era of the Universe in a modified cosmological scenario based on the gravity-thermodynamics conjecture with Barrow entropy instead of the usual Bekenstein-Hawking one. The former arises from the effort to account for quantum gravitational effects on the horizon surface of black holes and, in a broader sense, of the Universe. First, we extract modified Friedmann equations from the first law of thermodynamics applied to the apparent horizon of a Friedmann-Robertson-Walker Universe in (n + 1)-dimensions. Assuming a power-law behavior for the scalar inflaton field, we then investigate how the inflationary dynamics is affected in the Barrow cosmological setup. We find that the inflationary era may phenomenologically consist of the slow-roll phase, while Barrow entropy is incompatible with kinetic inflation. By demanding observational consistency of the scalar spectral index and tensor-to-scalar ratio with recent Planck data, we finally constrain the Barrow exponent to $\Delta\lesssim10^{-4}$.
Giuseppe Gaetano Luciano
2023-01-29T18:31:04Z
http://arxiv.org/abs/2301.12509v1
# Constraining Barrow entropy-based Cosmology with power-law inflation ###### Abstract We study the inflationary era of the Universe in a modified cosmological scenario based on the gravity-thermodynamics conjecture with Barrow entropy instead of the usual Bekenstein-Hawking one. The former arises from the effort to account for quantum gravitational effects on the horizon surface of black holes and, in a broader sense, of the Universe. First, we extract modified Friedmann equations from the first law of thermodynamics applied to the apparent horizon of a Friedmann-Robertson-Walker Universe in \((n+1)\)-dimensions. Assuming a power-law behavior for the scalar inflaton field, we then investigate how the inflationary dynamics is affected in the Barrow cosmological setup. We find that the inflationary era may phenomenologically consist of the slow-roll phase, while Barrow entropy is incompatible with kinetic inflation. By demanding observational consistency of the scalar spectral index and tensor-to-scalar ratio with recent Planck data, we finally constrain the Barrow exponent to \(\Delta\lesssim 10^{-4}\). ## I Introduction The effort to understand the statistical mechanics of black holes [1] has opened up new scenarios in modern theoretical physics, including the study of the AdS/CFT correspondence [2; 3] and the investigation of the connection between gravity and thermodynamics. Beyond their intrinsic interest, both lines of research might potentially have a deep impact upon the development of quantum gravity, mainly because they are the most successful realizations of the holographic principle [4; 5]. While the AdS/CFT correspondence is based on the description of the background geometry in terms of anti-de Sitter vacuum solutions, the interplay between gravity and thermodynamics finds its conceptualization in the so-called _gravity-thermodynamics_ conjecture [6; 7; 8], which states that Einstein field equations are nothing but the gravitational counterpart of the laws of thermodynamics applied to spacetime [9]. Moreover, in the cosmological context such a conjecture allows one to extract the Friedmann equations by implementing the first law of thermodynamics on the apparent horizon of the Universe [10; 11; 12; 13]. In the original formulation the gravity-thermodynamics conjecture applies the Bekenstein-Hawking (BH) area law \(S_{BH}=A/A_{0}\) to the apparent horizon of the Universe, of surface area \(A=4\pi r_{hor}^{2}\) and radius \(r_{hor}\).1 Nevertheless, generalized forms of BH entropy have been discussed in recent literature, motivated by either nonextensive [14; 15] or quantum gravity [16] arguments. To the latter class belongs Barrow entropy, which deforms the BH area law to Footnote 1: Here and henceforth we work in natural units \(\hbar=c=G=k_{B}=1\). Accordingly, the Planck area is \(A_{0}=4G=4\). \[S\,=\,\left(\frac{A}{A_{0}}\right)^{1+\Delta/2}\,,\qquad 0\leq\Delta\leq 1\,, \tag{1}\] where the Barrow exponent \(\Delta\) embeds quantum gravitational corrections. In particular, \(\Delta=1\) corresponds to the maximal departure from BH entropy, which is instead recovered for \(\Delta=0\). Though originally proposed for black holes [16], Eq. (1) has also been applied within the cosmological framework, giving rise to modified Friedmann equations that predict a richer phenomenology compared to the standard one [17]. 
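As a quick orientation on the size of the effect, the following minimal Python sketch evaluates Eq. (1) for a few values of \(\Delta\) and compares it with the standard area law; the horizon area used here is an arbitrary illustrative value, not taken from the paper.

```python
# Minimal sketch: deviation of the Barrow entropy (Eq. 1) from the
# Bekenstein-Hawking area law S_BH = A/A0, in the natural units of the paper
# (A0 = 4G = 4).  The horizon area below is an arbitrary illustrative value.

A0 = 4.0            # Planck area in natural units
A = 1.0e4 * A0      # sample horizon area (assumption, for illustration only)

for Delta in (0.0, 1e-4, 1e-2, 1.0):
    S_barrow = (A / A0) ** (1.0 + Delta / 2.0)   # Eq. (1)
    S_bh = A / A0                                # standard Bekenstein-Hawking entropy
    print(f"Delta = {Delta:6g}:  S / S_BH = {S_barrow / S_bh:.4f}")
```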
In addition, one can rephrase the holographic principle in terms of Barrow entropy, obtaining Barrow holographic dark energy (BHDE) (see, for instance, [18; 19; 20; 21; 22; 23; 24; 25] for recent applications). Comparison of the above constructions with observations sets upper limits on the Barrow exponent [26; 27; 28; 29; 30], which deviates only slightly from zero, as expected. In physical cosmology, inflation is supposed to be a crucial era in the evolution of the Universe, consisting of a very short-lived but extremely accelerated expansion phase that occurred right after the Big Bang. Originally proposed in [31; 32; 33; 34], it has been getting increasing attention over the years, becoming one of the two pillars of the present cosmological model along with the late time acceleration [35; 36; 37]. In spite of this, the origin of inflation has not been well understood yet. The most commonly adopted scenario is that it has been driven by a particular form of dark energy represented by a scalar field under slow-roll assumptions [38]. Alternative models have been recently proposed in [39; 40; 41; 42; 43; 44; 45; 46]. The inflationary phase has also been studied in connection with holographic dark energy [47; 48; 49], motivated by the plausible role of the latter as a mechanism responsible for the late time cosmic acceleration. Starting from the above premises, in this work we study the evolution and inflation of the Universe in the context of Barrow entropy-based Cosmology. In this sense, our analysis should be regarded as a preliminary attempt to explore the effects of quantum gravity on the dynamics of the Universe. In particular, we apply the Barrow formula (1) to the entropy associated with the apparent horizon of a \((n+1)\)-dimensional homogeneous and isotropic (Friedmann-Robertson-Walker-like) Universe, assuming that the matter inside the horizon is represented by a scalar field with a potential. In this setting, modified Friedmann equations are derived from the first law of thermodynamics and compared with the result of [50] for the specific case of \(n=3\). Furthermore, we investigate the early inflationary dynamics of Barrow cosmology with a power-law potential function. Contrary to the nonextensive (Tsallis-like) scenario [51], where it has been shown that inflation may consist of both slow-roll and kinetic phases, here we find that only the first stage is viable, the kinetic energy era being incompatible with the allowed values of the Barrow exponent \(\Delta\). After computing the characteristic inflation parameters, we infer an upper bound on \(\Delta\) in compliance with recent observational constraints on the scalar spectral index and the tensor-to-scalar ratio. We finally comment on the consistency of our results with other approaches in the literature aimed at exploring inflation driven by BHDE. The remainder of the work is structured as follows: in the next Section, we derive modified Friedmann equations from Barrow entropy. Sec. III is devoted to the study of the inflationary era in BHDE, while conclusions and outlook are summarized in Sec. IV. ## II Modified Friedmann equations in Barrow cosmology Let us consider a homogeneous and isotropic Friedmann-Robertson-Walker (FRW) Universe of spatial curvature \(k\). We first set notation by following [21] and focusing on \((3+1)\)-dimensions. To be as general as possible, the derivation of the modified Friedmann equations in Barrow Cosmology is then performed for the \((n+1)\)-dimensional case, with \(n\geq 3\). 
For a \((3+1)\)-dimensional FRW Universe, the line element can be written as \[ds^{2}\,=\,h_{bc}dx^{b}dx^{c}+\tilde{r}^{2}\left(d\theta^{2}+\sin^{2}\theta\, d\phi^{2}\right), \tag{2}\] where we have denoted the metric of the \((1+1)\)-dimensional subspace by \(h_{bc}=\text{diag}[-1,a^{2}/(1-kr^{2})]\). Moreover, \(x^{b}=(t,r)\), \(\tilde{r}=a(t)r\), \(a(t)\) is the (time-dependent) scale factor and \(r\) the comoving radius. Following [52], the dynamical apparent horizon is obtained from the geometric condition \[h^{bc}\partial_{b}\tilde{r}\,\partial_{c}\tilde{r}=0\,. \tag{3}\] For the FRW Universe (2), explicit calculations yield \[\tilde{r}_{A}=\frac{1}{\sqrt{H^{2}+k/a^{2}}}\,, \tag{4}\] where \(H=\dot{a}(t)/a(t)\) is the Hubble parameter and the overhead dot indicates the derivative with respect to the cosmic time \(t\). The apparent horizon has an associated temperature \[T=\,\frac{\kappa}{2\pi}=-\frac{1}{2\pi\tilde{r}_{A}}\left(1-\frac{\dot{\tilde {r}}_{A}}{2H\tilde{r}_{A}}\right), \tag{5}\] where \(\kappa\) represents the surface gravity. Clearly, for \(\dot{\tilde{r}}_{A}\leq 2H\tilde{r}_{A}\) we have \(T\leq 0\). To avoid meaningless negative temperatures, one can define \(T=|\kappa|/2\pi\). Furthermore, it is possible to assume that \(\dot{\tilde{r}}_{A}\ll 2H\tilde{r}_{A}\) in an infinitesimal time interval \(dt\), which amounts to keeping the apparent horizon radius fixed. This implies the approximation \(T\simeq 1/2\pi\tilde{r}_{A}\) [11]. We now suppose that the matter content of the Universe is represented by a scalar field \(\phi\) characterized by a perfect fluid form. The corresponding Lagrangian is given by \(\mathcal{L}_{\phi}=X-V(\phi)\), where \(X=-\frac{1}{2}h^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\) and \(V(\phi)\) are the kinetic and (spatially homogeneous) potential terms, respectively. In turn, the stress-energy tensor is \[T_{\mu\nu}=(\rho_{\phi}+p_{\phi})u_{\mu}u_{\nu}+p_{\phi}h_{\mu\nu}\,, \tag{6}\] where \(u_{\mu}\) is the four-velocity of the fluid and \[\rho_{\phi}=\frac{\dot{\phi}^{2}}{2}+V(\phi)\,, \tag{7a}\] \[p_{\phi}=\frac{\dot{\phi}^{2}}{2}-V(\phi)\,, \tag{7b}\] represent its energy density and pressure, respectively [46]. In turn, the conservation equation \(\nabla_{\mu}T^{\mu\nu}=0\) gives the continuity equation \[\dot{\rho}_{\phi}+3H(\rho_{\phi}+p_{\phi})=0\,. \tag{8}\] Combining Eqs. (7) and (8), we obtain the dynamics equation of the canonical scalar field as \[\ddot{\phi}+3H\dot{\phi}+\partial_{\phi}V=0\,, \tag{9}\] where the term containing the Hubble parameter acts as a friction term resulting from the expansion. ### Modified Friedmann equations in \((n+1)\) dimensions The above ingredients provide the basics to derive the modified Friedmann equations in Barrow entropy-based Cosmology. Here, we extract such equations from the first law of thermodynamics \[dE\,=\,TdS+WdV\,, \tag{10}\] applied to the apparent horizon of the FRW Universe in \((n+1)\)-dimensions, where \[W=(\rho_{\phi}-p_{\phi})/2\,, \tag{11}\] is the work density associated with the Universe expansion and \[S\,=\,\gamma\left(\frac{A}{A_{0}^{(n-1)/2}}\right)^{1+\Delta/2}\,, \tag{12}\] is the generalized Barrow entropy. We have denoted the \(n\)-dimensional horizon surface by \(A=n\Omega_{n}\tilde{r}_{A}^{n-1}\), where \(\Omega_{n}\equiv\frac{\pi^{n/2}}{\Gamma(n/2+1)}\) is the angular part of the \(n\)-dimensional sphere and \(\Gamma\) is the Euler Gamma function. The dimensionless constant \(\gamma\) is such that \(\gamma\to 1\) for \(n=3\), so that Eq. (1) is restored in this limit. 
Its explicit expression shall be fixed later. In passing, we mention that an alternative derivation of the modified Friedmann equations can be built upon Padmanabhan's paradigm of emergent gravity [53], which states that the spatial expansion of our Universe can be understood as the consequence of the emergence of space with the progress of cosmic time. Now, by taking into account that the total energy of the Universe inside the \(n\)-dimensional volume \(V=\Omega_{n}\tilde{r}_{A}^{n}\) is \(E=\rho_{\phi}V\), we have \[dE=Vd\rho_{\phi}+\rho_{\phi}dV=\Omega_{n}\tilde{r}_{A}^{n}\dot{\rho}_{\phi}\, dt\,+\,\rho_{\phi}\Omega_{n}n\tilde{r}_{A}^{n-1}d\tilde{r}_{A}\,. \tag{13}\] This relation can be further manipulated by resorting to the generalized continuity equation \[\dot{\rho}_{\phi}+nH(\rho_{\phi}+p_{\phi})=0\,, \tag{14}\] to give \[dE\,=\,-\Omega_{n}\tilde{r}_{A}^{n}nH\left(\rho_{\phi}+p_{\phi}\right)dt\,+\, \rho_{\phi}\,\Omega_{n}n\tilde{r}_{A}^{n-1}d\tilde{r}_{A}\,. \tag{15}\] On the other hand, by differentiating the entropy (12) we get \[dS=\gamma\left(\frac{1}{A_{0}^{(n-1)/2}}\right)^{1+\Delta/2}n\Omega_{n}\left( 1+\frac{\Delta}{2}\right)\left(n-1\right)\left(n\Omega_{n}\tilde{r}_{A}^{n-1 }\right)^{\Delta/2}\tilde{r}_{A}^{n-2}d\tilde{r}_{A}\,. \tag{16}\] By plugging Eqs. (13)-(16) into (10), we arrive at \[H\left(\rho_{\phi}+p_{\phi}\right)dt=\frac{\gamma\left(n-1\right)\left(1+ \frac{\Delta}{2}\right)\left(n\Omega_{n}\tilde{r}_{A}^{n-1}\right)^{\Delta/2}}{2 \pi\tilde{r}_{A}^{3}}\left(\frac{1}{A_{0}^{(n-1)/2}}\right)^{1+\Delta/2}d\tilde{r}_{A}\,. \tag{17}\] With the further use of the continuity equation (14), this becomes \[-\frac{2\pi\left(A_{0}^{(n-1)/2}\right)^{1+\Delta/2}}{\gamma\,n\left(n-1\right) \left(1+\frac{\Delta}{2}\right)\left(n\Omega_{n}\right)^{\Delta/2}}\,d\rho_{\phi} = \tilde{r}_{A}^{(n-1)\Delta/2-3}d\tilde{r}_{A}\,. \tag{18}\] Integrating both sides, we are led to \[\tilde{r}_{A}^{(n-1)(1+\Delta/2)-n-1}=\frac{\pi\left[4-(n-1)\,\Delta\right] \left(A_{0}^{(n-1)/2}\right)^{1+\Delta/2}}{\gamma\,n\left(n-1\right)\left(1+ \frac{\Delta}{2}\right)\left(n\Omega_{n}\right)^{\Delta/2}}\,\rho_{\phi}\,, \tag{19}\] where the integration constant has been fixed by imposing the boundary condition \(8\pi\rho_{\phi}=\Lambda\simeq 0\). Finally, with the help of the definition (4), we obtain \[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-(n-1)\Delta/4} = \frac{8\pi G_{eff}^{(n-1)/2}}{3}\sigma\rho_{\phi}\,, \tag{20}\] where we have defined \[\sigma\equiv\frac{3}{n-2}\,\frac{\left[n+1-(n-1)\left(1+\frac{ \Delta}{2}\right)\right]}{n\left(2-\Delta\right)}\,, \tag{21}\] and we have set \[\gamma = \frac{\pi^{(n-1)\Delta/4}}{2\left(n\Omega_{n}\right)^{\Delta/2} 4^{(1+\Delta/2)(1-n)/2}}\,\frac{(n-2)}{(n-1)}\left(\frac{2-\Delta}{2+\Delta} \right)^{(3-n)/2}\,. \tag{22}\] Furthermore, we have introduced the effective gravitational constant [21] \[G_{eff} = \frac{A_{0}}{4}\left(\frac{2-\Delta}{2+\Delta}\right)\left(\frac {A_{0}}{4\pi}\right)^{\Delta/2}\,. \tag{23}\] Some comments are in order here: first, we notice that for \(n=3\), we have \(\gamma\to 1\), consistently with the discussion below Eq. (12). The same is true for \(\sigma\), so that Eq. (20) for \(n=3\) becomes \[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2} = \frac{8\pi G_{eff}}{3}\rho_{\phi}\,. \tag{24}\] This is nothing but the first modified Friedmann equation derived in [21] when \(\rho_{\phi}\equiv\rho\) (normal matter). 
Furthermore, the limit \(\Delta\to 0\) correctly reproduces the standard Friedmann equation \[H^{2}+\frac{k}{a^{2}} = \frac{8\pi}{3}\frac{A_{0}}{4}\rho_{\phi}\,. \tag{25}\] As a final remark, it must be emphasized that, due to the positive definiteness of the energy density, Eqs. (20) and (21) imply the upper bound \[n+1-(n-1)\left(1+\frac{\Delta}{2}\right)>0 \Longrightarrow \Delta<\frac{4}{n-1}\,, \tag{26}\] which is obviously satisfied for any allowed value of \(n\). Now, from the time derivative of Eq. (20), one can easily obtain the second modified Friedmann equation as follows \[2H\left[1-(n-1)\,\frac{\Delta}{4}\right]\left(H^{2}+\frac{k}{a^{2}}\right)^{-( n-1)\Delta/4}\left(\frac{\ddot{a}}{a}-H^{2}-\frac{k}{a^{2}}\right)=\frac{8\pi G _{eff}^{(n-1)/2}}{3}\sigma\dot{\rho}_{\phi}\,. \tag{27}\] By use of the continuity equation (14), this gives \[\left[1-(n-1)\,\frac{\Delta}{4}\right]\left(H^{2}+\frac{k}{a^{2}}\right)^{-( n-1)\Delta/4}\left(\frac{\ddot{a}}{a}-H^{2}-\frac{k}{a^{2}}\right)=-\frac{4\pi G _{eff}^{(n-1)/2}}{3}\,\sigma n\left(\rho_{\phi}+p_{\phi}\right). \tag{28}\] Replacing \(\rho_{\phi}\) by the first Friedmann equation (20), we find after some simplification \[\left[4-\left(n-1\right)\Delta\right]\frac{\ddot{a}}{a}\left(H^{2}+\frac{k}{a^{2} }\right)^{-\left(n-1\right)\Delta/4}+\left[2n-4+\Delta\left(n-1\right)\right] \left(H^{2}+\frac{k}{a^{2}}\right)^{1-\left(n-1\right)\Delta/4}=-\frac{16\pi G _{eff}^{(n-1)/2}}{3}\,\sigma\,np_{\phi}\,. \tag{29}\] This is the second modified Friedmann equation in Barrow Cosmology. Again, one can check that \(n=3\) gives back the result of [21] \[\left(2-\Delta\right)\frac{\ddot{a}}{a}\left(H^{2}+\frac{k}{a^{2}}\right)^{- \Delta/2}+\left(1+\Delta\right)\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2 }=\,-8\pi G_{eff}\,p_{\phi}\,. \tag{30}\] The further limit \(\Delta\to 0\) reproduces the standard second Friedmann equation, here rewritten as \[\dot{H}+H^{2}=-\frac{4\pi}{3}\left(\rho_{\phi}+3p_{\phi}\right), \tag{31}\] where we have used the relation \[\dot{H}=\frac{\ddot{a}}{a}-H^{2}\,. \tag{32}\] ## III Inflation in Barrow Cosmology Let us now move on to the study of the inflationary era of the Universe. Within the scalar theory framework considered above, the characteristic quantities to compute are the inflation slow-roll parameters, which are defined by \[\epsilon = -\frac{\dot{H}}{H^{2}}\,, \tag{33}\] \[\eta = -\frac{\ddot{H}}{2H\dot{H}}\,. \tag{34}\] Slow-roll conditions assert that both parameters take very small values during inflation, i.e. \(\epsilon,\eta\ll 1\). In the slow-roll theoretical framework, only the requirement \(\epsilon\ll 1\) is actually needed to ensure the existence of an early inflationary era. Then, by imposing \(\dot{\phi}^{2},\ddot{\phi}\ll 1\) on the equation of motion of the theory, the first Friedmann equation (20) under the slow-roll assumptions becomes \[H^{2}\simeq\left[\frac{8\pi G_{eff}}{3}\,V(\phi)\right]^{2/\left(2-\Delta \right)}, \tag{35}\] where we have focused on the case \(n=3\) and we have resorted to Eq. (7a). On the other hand, from the second Friedmann equation (30) we get \[\dot{H}\simeq\frac{3\dot{\phi}^{2}}{2\left(\Delta-2\right)}\left(\frac{8\pi G _{eff}}{3}\right)^{2/\left(2-\Delta\right)}V(\phi)^{\Delta/\left(2-\Delta \right)}\,. \tag{36}\] Combining Eqs. 
(35) and (36), the slow-roll parameters (33) and (34) take the form \[\epsilon \simeq \frac{3\dot{\phi}^{2}}{2\left(2-\Delta\right)}V(\phi)^{-1}\,, \tag{37}\] \[\eta \simeq -\left(\frac{8\pi G_{eff}}{3}V(\phi)\right)^{1/\left(\Delta-2 \right)}\left[\frac{\ddot{\phi}}{\dot{\phi}}+\frac{\dot{\phi}\,\Delta}{4-2 \Delta}\frac{\partial_{\phi}V(\phi)}{V(\phi)}\right]. \tag{38}\] Let us now remark that the above parameters should be computed at horizon crossing, where the fluctuations of the inflaton field freeze [51]. The scalar spectral index of the primordial curvature perturbations and the tensor-to-scalar ratio are defined by \[n_{s} \simeq 1-6\epsilon+2\eta\,, \tag{39}\] \[r \simeq 16\epsilon\,, \tag{40}\] respectively, which also need to be evaluated at the horizon crossing. For later convenience, it is useful to introduce the e-folding time \[N=\int_{t_{i}}^{t_{f}}H(t)dt\,, \tag{41}\] where \(t_{i}\) (\(t_{f}\)) represents the initial (final) time of the inflationary era. Consistently with the above discussion, we consider \(t_{i}=t_{c}\) as the horizon crossing time, so that Eq. (41) can be rewritten as \(N=\int_{\phi_{c}}^{\phi_{f}}H\dot{\phi}^{-1}\,d\phi\), where we have used the notation \(\phi_{c}\equiv\phi(t_{c})\) and \(\phi_{f}\equiv\phi(t_{f})\). ### Slow-roll inflation with power-law potential We now examine inflation from the dynamical point of view. Toward this end, we assume a power-law behavior for the scalar potential \(V(\phi)\) in the form \[V(\phi)\simeq\phi^{m}\,, \tag{42}\] where \(m>0\) is the power-term. The latest observational data prefer models with \(m\sim\mathcal{O}(1)\) or \(m\sim\mathcal{O}(10^{-1})\), while \(m\geq 2\) is disfavored in the minimally coupled scalar field case. Henceforth, we shall focus on such phenomenologically allowed values of \(m\). We also remark that power-law inflation is a very useful model to assess approximation schemes for the computation of scalar power spectra, since its spectrum is exactly solvable2. Footnote 2: More generally, one can assume \(V(\phi)=V_{0}\phi^{m}\), where \(V_{0}\) is a positive constant with dimensions of \([E]^{4-m}\). However, since the observational indices are shown to be independent of this quantity, we can set \(V_{0}=1\) in suitable units without loss of generality. In order to extract analytical solutions for the inflationary observable indices, we express \(\dot{\phi}\) and \(\ddot{\phi}\) in terms of the scalar field by using the slow-roll conditions. In this regard, let us observe that the evolution equation (9) can be rewritten as \[\dot{\phi}\simeq-\frac{1}{3H}\partial_{\phi}V\,. \tag{43}\] By plugging (35) into (43), we get \[\dot{\phi}=-\frac{m}{3}\left(\frac{8\pi G_{eff}}{3}\right)^{1/( \Delta-2)}\,\phi^{[(2-\Delta)(m-1)-m]/(2-\Delta)}\,. \tag{44}\] We can now derive the expression of \(\phi_{f}\) by noticing that inflation is supposed to end when \(\epsilon(\phi_{f})\sim 1\). By inverting Eq. (37), we are led to \[\phi_{f}=\left[\frac{6\left(2-\Delta\right)}{m^{2}}\left(\frac{8 \pi G_{eff}}{3}\right)^{2/(2-\Delta)}\right]^{(2-\Delta)/[\Delta(2-m)-4]}\,. \tag{45}\] Similarly, insertion of Eqs. (35) and (44) into (41) allows us to infer the following expression for the scalar field at horizon crossing \[\phi_{c}=\left\{\frac{m}{3\left(2-\Delta\right)}\left(\frac{8\pi G _{eff}}{3}\right)^{2/(\Delta-2)}\left\{\frac{m}{2}+N\left[4+\Delta\left(m-2 \right)\right]\right\}\right\}^{(2-\Delta)/[4+\Delta(m-2)]}\,. 
\tag{46}\] The scalar spectral index (39) and the tensor-to-scalar ratio (40) can be cast in terms of the power-term \(m\) and the e-folding time \(N\) as \[n_{s} \simeq 1-\frac{2\left[4+\Delta(m-2)+m\right]}{m\left\{1+\frac{2N}{m} \left[4+\Delta(m-2)\right]\right\}}\,, \tag{47}\] \[r \simeq \frac{16}{1+\frac{2N}{m}\left[4+\Delta(m-2)\right]}\,. \tag{48}\] Remarkably, we see that the slow-roll indices only depend on the power-term \(m\) and the Barrow parameter \(\Delta\). A similar result has been exhibited in the context of the Tsallis deformation of the entropy-area law [51]. In order to constrain the Barrow exponent \(\Delta\), let us require consistency of Eqs. (47) and (48) with observations. Specifically, we consider Planck 2018 measurements, which set the following bounds on \(n_{s}\) and \(r\) [54] \[n_{s} = 0.9649\pm 0.0042\ \ (68\%\ \text{CL})\ \ \text{from Planck TT,TE,EE+lowE+lensing}\,, \tag{49}\] \[r < 0.064\ \ (95\%\ \text{CL})\,. \tag{50}\] Now, the end of the kinetic inflation is set by the condition \(\eta(\phi_{f})\simeq 1\) [51], which, together with the definition (41), gives \[\phi_{c}=\left\{\left(\frac{8\pi G_{eff}}{3}\right)^{1/(\Delta-2)}\left(\frac{m+ 2}{2}\right)^{1/(\Delta-2)}\frac{m^{1/2}}{2(\Delta-2)}\left[2m+N\left[4+\Delta \left(m-2\right)\right]\right]\right\}^{2(2-\Delta)/[4+\Delta(m-2)]}. \tag{56}\] From Eqs. (54) and (55), we then get (see also Fig. 2) \[n_{s} = \left\{1+4m\left\{\frac{9}{\left(m+2\right)\left(\Delta-2\right)}\right.\right. \tag{57}\] \[+\left.\left.\frac{\left\{\frac{m^{1/2}}{2(\Delta-2)}\left(\frac{ m+2}{2}\right)^{1/(\Delta-2)}\left(\frac{8\pi G_{eff}}{3}\right)^{1/(\Delta-2)} \left[2m+N\left[4+\Delta\left(m-2\right)\right]\right]\right\}^{n\Delta/[2( \Delta-2)]}}{2m+N\left[4+\Delta\left(m-2\right)\right]}\right\}^{2(2-\Delta)/ [4+\Delta(m-2)]},\] \[r = \frac{96m}{\left(m+2\right)\left(2-\Delta\right)}\,. \tag{58}\] Unlike the previous scenario, we now find that observational consistency is obtained only if \(\Delta\) assumes large negative values. This occurs for both \(m\sim\mathcal{O}(1)\) and \(m\sim\mathcal{O}(10^{-1})\), as can be easily seen from Eq. (58). However, such a condition is at odds with the range in Eq. (1), implying that kinetic inflation cannot be explained within Barrow's framework. This is a remarkable difference with respect to the case of inflation based on Tsallis entropy [51], which allows for a kinetic phase too. Specifically, in that case the kinetic inflation is associated with a regime of decreasing horizon entropy and an ensuing clumping of fluctuations in particular regions of spacetime. Figure 2: Plot of \(r\) versus the power-term \(m\) and the Barrow parameter \(\Delta\). 
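As a rough numerical illustration (not the paper's own scan), the following Python sketch evaluates \(n_{s}\) and \(r\) from Eqs. (47)-(48) on an illustrative grid of \(m\), \(N\) and \(\Delta\) values and checks them against the Planck bounds of Eqs. (49)-(50); the specific grid values used below are assumptions made here for demonstration only.

```python
import numpy as np

def ns_and_r(Delta, m, N):
    """Scalar spectral index and tensor-to-scalar ratio from Eqs. (47)-(48)."""
    x = 1.0 + (2.0 * N / m) * (4.0 + Delta * (m - 2.0))
    n_s = 1.0 - 2.0 * (4.0 + Delta * (m - 2.0) + m) / (m * x)
    r = 16.0 / x
    return n_s, r

# Planck 2018 bounds quoted in Eqs. (49)-(50)
ns_lo, ns_hi, r_max = 0.9649 - 0.0042, 0.9649 + 0.0042, 0.064

m = 1.0                                  # power-term of O(1) (illustrative choice)
for N in (35.0, 50.0, 60.0):             # e-folding numbers (illustrative grid)
    for Delta in (0.0, 1e-4, 1e-2):
        n_s, r = ns_and_r(Delta, m, N)
        ok = (ns_lo <= n_s <= ns_hi) and (r < r_max)
        print(f"N = {N:4.0f}, Delta = {Delta:6.0e}: "
              f"n_s = {n_s:.4f}, r = {r:.4f}, within Planck bounds: {ok}")
```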
## IV Discussion and Conclusions Inspired by the fractal structure of the Covid-19 virus, the modified entropy-area law (1) has been proposed to take into account quantum gravitational effects on the black hole horizon surface [16]. Along the lines of the gravity-thermodynamics conjecture, this paradigm has been applied to the Universe horizon too, the ensuing framework being known as Barrow Cosmology. Within this framework, we have studied the evolution of the FRW Universe, assuming the matter content to be represented by a homogeneous scalar field in the form of a perfect fluid. As a first step, by using the first law of thermodynamics applied to the horizon of the FRW Universe, we have derived modified (\(\Delta\)-dependent) Friedmann equations. The obtained result has been used to analyze the inflationary era. Toward this end, we have supposed a power-law behavior for the scalar inflaton field. We have found that inflation in Barrow Cosmology can consist of the slow-roll phase only, the kinetic inflation being incompatible with the allowed values of the Barrow deformation parameter. We have finally constrained the Barrow exponent to \(\Delta\lesssim 10^{-4}\) by demanding consistency of the scalar spectral index and tensor-to-scalar ratio with recent observational Planck data. Other aspects deserve further analysis. Besides the background and inflationary evolution, it would be interesting to study the growth rate of matter density perturbations and structure formation. This is an important testing ground to discriminate among existing modified cosmological models. A preliminary investigation in this direction has been proposed in [55] in the context of both Tsallis and Barrow entropies, showing that the entropic deformation parameter significantly influences the growth of perturbations. Moreover, one can attempt to extend the present considerations to Cosmology based on Kaniadakis entropy [56], which is a self-consistent relativistic generalization of the Boltzmann-Gibbs entropy with non-trivial cosmological implications [57]. In this way, a relationship between the Barrow and Kaniadakis formalisms can be established. Finally, since our model is an effort to include quantum gravity corrections in the analysis of inflation, it is essential to examine the obtained results in connection with predictions from more fundamental theories of quantum gravity [58]. Work along these and other directions is under active consideration and will be presented elsewhere. **Acknowledgements** The author acknowledges the Spanish "Ministerio de Universidades" for the awarded Maria Zambrano fellowship and funding received from the European Union - NextGenerationEU. He is also grateful for participation in the COST Action CA18108 "Quantum Gravity Phenomenology in the Multimessenger Approach".
2307.08807
Anomaly Detection with Selective Dictionary Learning
In this paper we present new methods of anomaly detection based on Dictionary Learning (DL) and Kernel Dictionary Learning (KDL). The main contribution consists in the adaptation of known DL and KDL algorithms in the form of unsupervised methods, used for outlier detection. We propose a reduced kernel version (RKDL), which is useful for problems with large data sets, where the full kernel matrix would be too large. We also improve the DL and RKDL methods by the use of a random selection of signals, which aims to eliminate the outliers from the training procedure. All our algorithms are introduced in an anomaly detection toolbox and are compared to standard benchmark results.
Denis C. Ilie-Ablachim, Bogdan Dumitrescu
2023-07-17T19:44:52Z
http://arxiv.org/abs/2307.08807v1
# Anomaly Detection with Selective Dictionary Learning ###### Abstract In this paper we present new methods of anomaly detection based on Dictionary Learning (DL) and Kernel Dictionary Learning (KDL). The main contribution consists in the adaptation of known DL and KDL algorithms in the form of unsupervised methods, used for outlier detection. We propose a reduced kernel version (RKDL), which is useful for problems with large data sets, where the full kernel matrix would be too large. We also improve the DL and RKDL methods by the use of a random selection of signals, which aims to eliminate the outliers from the training procedure. All our algorithms are introduced in an anomaly detection toolbox and are compared to standard benchmark results. ## I Introduction Dictionary Learning (DL) is a representation learning method which aims to find a sparse representation for a set of signals \(\mathbf{Y}\), represented as a matrix with \(N\) columns (signals) of size \(m\). The representation is achieved by computing a dictionary \(\mathbf{D}\) of size \(m\times n\) and a sparse representation \(\mathbf{X}\) of size \(n\times N\) such that a good approximation \(\mathbf{Y}\approx\mathbf{D}\mathbf{X}\) is obtained. Most applications of dictionary learning are in image denoising, inpainting, signal reconstruction, clustering or classification. In this paper we present novel methods for unsupervised learning, in particular outlier detection, using DL. The main idea is to find a suitable dictionary, capable of representing well most signals in a dataset, while the representations of the outlier signals obtain large errors. Since the number of outliers is significantly lower than that of the remaining signals, we expect the dictionary optimization to generally follow the directions of the normal signals. Our developments cover both the standard and the nonlinear (kernel) DL. Anomaly detection (outlier detection) is the identification of a subset of signals that have a different representation in relation to the rest of the data. There are several successful anomaly detection methods, such as Isolation Forest (IForest) [1], Minimum Covariance Determinant (MCD) [2, 3], One-class SVM detector (OCSVM) or Principal Component Analysis (PCA) Outlier Detector [4]. There are also several successful sparse coding algorithms used for anomaly detection. An idea was presented in [5, 6]. These methods consider the data representation as a joint sparse linear combination of training data. By following this technique, the authors try to achieve a direct correlation between all the available signals. Naturally, non-correlated signals are considered anomalies. Another example is given in [7], where the anomalies are identified in terms of deviation from a trained model. This method tries to achieve good sparse representations for jointly distributed signals, while the other, independent signals should be isolated. An overview of DL can be found in [8]. The paper is organized as follows. Section II introduces a natural way of solving outlier detection problems using DL algorithms. Section III formulates a new DL algorithm, called Selective Dictionary Learning, which aims to improve the anomaly detection algorithm by randomly selecting signals for the training procedure in order to discourage dictionary adaptation to outliers. In Section IV we present a reduced kernel version of the DL problem and its Selective form. 
Section V contains the experimental results, obtained by running tests on multivariate data and comparing the results with those of methods available in a Python toolkit for outlier detection. ## II Anomaly Detection via Dictionary Learning The DL problem is formulated as follows \[\begin{array}{ll}\min_{\mathbf{D},\mathbf{X}}&\|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\|\mathbf{x}_{\ell}\|_{0}\leq s,\ell=1:N\\ &\|\mathbf{d}_{j}\|=1,j=1:n,\end{array} \tag{1}\] where \(\left\lVert\cdot\right\rVert_{0}\) represents the \(0\)-pseudo-norm and \(s\) is the sparsity level. The standard dictionary learning problem can be solved by using simple strategies. In order to overcome the nonconvexity and the huge dimension of the problem, the optimization procedure is organized in two steps. This method is also known as DL by Alternate Optimization. In this way, the problem is divided into two subproblems: sparse coding and dictionary update. By alternating these two stages for a given number of iterations, the method can obtain good local solutions. An iteration consists of computing the sparse representation \(\mathbf{X}\), while the dictionary \(\mathbf{D}\) is fixed, and then successively updating the dictionary columns, named atoms, while the sparse representation is fixed. For sparse coding we use Orthogonal Matching Pursuit (OMP) [9]. For the dictionary update we use the Approximate version of the K-SVD algorithm (AK-SVD) [10, 11], which optimizes the atoms and their representations successively. A simple strategy for anomaly detection is to compute the representation error \[\mathbf{E}=\mathbf{Y}-\mathbf{D}\mathbf{X} \tag{2}\] and identify the signals that obtain bad representations. The score of signal \(i\) is simply the norm \(\|\mathbf{e}_{i}\|\) of the \(i\)-th column of \(\mathbf{E}\). The larger the error norm, the more likely that signal is an outlier. The underlying assumption is that signals that are alike can be better represented by the dictionary designed when solving (1). However, the dictionary size \(n\) and the sparsity level \(s\) must be taken smaller than usual, otherwise the representation may be uniformly good for all signals and even outliers can be well represented. A small dictionary favors good representations for signals that are similar, tuning the atoms for this purpose; a bad representation of the outliers has little effect on the objective of (1), since they are few. This trade-off is naturally obtained during the optimization. Of course, since the sparse representation is linear, similarity and dissimilarity can be thought of in terms of direction. Normal signals belong to a small number of low dimensional subspaces and the outliers lie on very different subspaces. This is a model that is appropriate for some anomaly detection problems but not suited for others. ## III Selective Dictionary Learning In the standard DL algorithm, during the training procedure, both stages could be affected by the presence of outliers in the training dataset. The problem of anomaly detection could be solved more easily if we could train the dictionary only on normal data. By excluding the outliers from the training dataset, we would expect to obtain higher representation errors for anomalies. However, this is not possible, since we do not know which signals are normal and which are outliers. 
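Before describing the selective strategy in detail, a minimal sketch of the error-based scoring just outlined may be helpful; scikit-learn's DictionaryLearning is used here only as a stand-in for the OMP/AK-SVD pair of the paper, and all sizes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Minimal sketch of the error-based anomaly scoring of Section II.
# Signals are rows here (scikit-learn convention); random data for illustration.
rng = np.random.default_rng(0)
Y = rng.standard_normal((500, 64))          # 500 signals of size m = 64

dl = DictionaryLearning(
    n_components=20,                        # deliberately small dictionary
    transform_algorithm="omp",
    transform_n_nonzero_coefs=4,            # sparsity level s
    max_iter=20,
    random_state=0,
)
X = dl.fit_transform(Y)                     # sparse codes, one row per signal
E = Y - X @ dl.components_                  # representation error, cf. Eq. (2)
scores = np.linalg.norm(E, axis=1)          # ||e_i||: larger norm -> more outlier-like
print(np.argsort(scores)[-10:])             # indices of the ten most anomalous signals
```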
To describe our strategy for eliminating most of the outliers from the training process, we introduce two new parameters in the DL algorithm: \(train\_perc\) (_training percent_) and \(train\_drop\_perc\) (_training dropout percent_). The first one represents the percent of data that are used during the sparse coding stage. At each iteration, we first apply a random sampling on the training data and only \(train\_perc\)% of the signals are used for sparse coding. In the dictionary update stage, we further drop \(train\_drop\_perc\)% of the signals, namely those having the worst representations (largest representation errors). Although the first random selection can eliminate both normal signals and outliers from a training iteration, the representation of normal signals is less likely to suffer, since there are still signals in the current training set that are similar to them. On the contrary, outliers are more likely to lack good proxies and so their representation will worsen. The second selection, that of dropping the signals with bad representations, aims to directly remove outliers from the training process. The dictionary will be updated to better represent the signals that already have good representations. Hence, again, the outliers representation will worsen, but the representation of normal signals not present in the current selection will not be significantly altered. The DL problem can be formulated by the use of a zero-extended permutation matrix \(\mathbf{P}\) that is modified at each stage and has the role of randomly selecting the signals: \[\begin{array}{ll}\min_{\mathbf{D},\mathbf{X}}&\|\mathbf{YP}-\mathbf{D}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\|\mathbf{x}_{\ell}\|_{0}\leq s,\ell=1:N\\ &\|\mathbf{d}_{j}\|=1,j=1:n.\end{array} \tag{3}\] A minimal sketch of one such training iteration is given below. ## IV Reduced Kernel Dictionary Learning Linear spaces can hinder good representations. In order to overcome this problem, the standard DL can easily be extended to a nonlinear space. This method is called Kernel Dictionary Learning (KDL) and was introduced in [12] and [13]. By this, we map each signal \(\mathbf{y}\) to a nonlinear space through \(\varphi(\mathbf{y})\), where \(\varphi(\cdot)\) is a nonlinear function. The dictionary \(\mathbf{D}\) is also extended to \(\varphi(\mathbf{Y})\mathbf{A}\), where \(\mathbf{A}\) is a matrix with unknown coefficients, taking the role of the dictionary. The KDL problem is formulated as \[\begin{array}{ll}\min_{\mathbf{A},\mathbf{X}}&\|\varphi(\mathbf{Y})-\varphi(\mathbf{Y})\bm {A}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\|\mathbf{x}_{\ell}\|_{0}\leq s,\ell=1:N\\ &\|\varphi(\mathbf{Y})\mathbf{a}_{j}\|=1,j=1:n.\end{array} \tag{4}\] The KDL problem can be solved similarly to the DL problem (1) if Mercer kernels are used, which allow the substitution of a scalar product of feature vectors \(\varphi(\mathbf{x})^{\top}\varphi(\mathbf{y})\) with a kernel function \(k(\mathbf{x},\mathbf{y})\). However, the problem becomes difficult when using large datasets, due to the large kernel matrix \(\varphi(\mathbf{Y})^{\top}\varphi(\mathbf{Y})\) that results. The size of the kernel matrix scales quadratically with the number of signals, which leads to a large memory footprint. Thus this strategy might not be tractable for problems with large datasets. In order to overcome this limitation we extend the dictionary \(\mathbf{D}\) to a smaller nonlinear space by \(\varphi(\mathbf{\widetilde{Y}})\mathbf{A}\), where \(\mathbf{\widetilde{Y}}\) represents a small batch of signals from the original dataset. 
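Returning briefly to the selective strategy of Section III, the following Python sketch shows what one training iteration with the two selection steps might look like; scikit-learn's sparse_encode and a plain least-squares (MOD-style) dictionary update are used as stand-ins for OMP and AK-SVD, and all sizes and percentages are illustrative.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def selective_dl_iteration(Y, D, s=4, train_perc=0.7, train_drop_perc=0.4, rng=None):
    """One Selective-DL iteration; signals are rows (scikit-learn convention)."""
    rng = rng if rng is not None else np.random.default_rng()
    N = Y.shape[0]

    # 1) sparse coding on a random train_perc subset of the signals
    idx = rng.choice(N, size=int(train_perc * N), replace=False)
    Ys = Y[idx]
    X = sparse_encode(Ys, D, algorithm="omp", n_nonzero_coefs=s)

    # 2) drop the train_drop_perc signals with the largest representation errors
    err = np.linalg.norm(Ys - X @ D, axis=1)
    keep = np.argsort(err)[: int((1.0 - train_drop_perc) * len(idx))]
    Yk, Xk = Ys[keep], X[keep]

    # 3) dictionary update on the remaining signals (MOD stand-in for AK-SVD)
    D_new, *_ = np.linalg.lstsq(Xk, Yk, rcond=None)
    norms = np.linalg.norm(D_new, axis=1, keepdims=True)
    return D_new / np.maximum(norms, 1e-12)   # normalize atoms (guard unused ones)

rng = np.random.default_rng(0)
Y = rng.standard_normal((500, 64))            # 500 signals of size 64
D = rng.standard_normal((50, 64))             # small dictionary, n = 50
D /= np.linalg.norm(D, axis=1, keepdims=True)
for _ in range(20):
    D = selective_dl_iteration(Y, D, rng=rng)
```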
Permuting the signals such that \(\mathbf{Y}=[\mathbf{\widetilde{Y}}\ \mathbf{\widehat{Y}}]\), we can write \[\varphi(\mathbf{\widetilde{Y}})=\underbrace{[\varphi(\mathbf{\widetilde{Y}})\ \varphi(\mathbf{\widehat{Y}})]}_{\mathbf{\varphi}(\mathbf{Y})}\underbrace{\left[\begin{array} []{c}\mathbf{I}\\ \mathbf{0}\end{array}\right]}_{\mathbf{P}}. \tag{5}\] The KDL problem becomes \[\begin{array}{ll}\min_{\mathbf{A},\mathbf{X}}&\|\varphi(\mathbf{Y})-\varphi(\mathbf{ \widetilde{Y}})\mathbf{A}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\|\mathbf{x}_{\ell}\|_{0}\leq s,\ell=1:N\\ &\|\varphi(\mathbf{\widetilde{Y}})\mathbf{a}_{j}\|=1,j=1:n.\end{array} \tag{6}\] From (5) and (6) we obtain a new optimization problem \[\min_{\mathbf{A},\mathbf{X}}\ \ \ \|\varphi(\mathbf{Y})(\mathbf{I}-\mathbf{P}\mathbf{A}\mathbf{X})\|_{F}^{2}. \tag{7}\] We denote \[\mathbf{E}=\mathbf{I}-\mathbf{P}\mathbf{A}\mathbf{X} \tag{8}\] the representation error and \[\mathbf{F}=\left[\mathbf{I}-\mathbf{P}\sum_{i\neq j}\mathbf{a}_{i}\mathbf{x}_{i}^{T}\right]_{I_{j}} \tag{9}\] the representation error without the contribution of the current atom \(\mathbf{a}_{j}\); by \(I_{j}\) we denote the set of signal indices to whose representation \(\mathbf{a}_{j}\) contributes. In order to solve the optimization problem (6), we update the current atom while the other atoms and the representation are fixed. Removing the index \(j\) for a lighter notation, the atom update problem becomes \[\min_{\mathbf{a}} \left\|\varphi(\mathbf{Y})\left(\mathbf{F}-\mathbf{P}\mathbf{a}\mathbf{x}^{\top} \right)\right\|_{F}^{2}. \tag{10}\] Using the trace form of the squared Frobenius norm, the objective function becomes \[\begin{array}{l}\operatorname{Tr}\left[\left(\mathbf{F}^{\top}-\mathbf{x}\mathbf{a}^{ \top}\mathbf{P}^{\top}\right)\varphi^{\top}(\mathbf{Y})\varphi(\mathbf{Y})\left(\mathbf{F}- \mathbf{P}\mathbf{a}\mathbf{x}^{\top}\right)\right]=\\ =\operatorname{Tr}\left[\mathbf{F}^{\top}\mathbf{K}\mathbf{F}\right]-2\mathbf{x}^{\top}\mathbf{F }^{\top}\mathbf{K}\mathbf{P}\mathbf{a}+\|\mathbf{x}\|^{2}\mathbf{a}^{\top}\mathbf{P}^{\top}\mathbf{K}\bm {P}\mathbf{a}.\end{array} \tag{11}\] We compute the partial derivative of the objective function with respect to the current atom \[\frac{\partial(\cdot)}{\partial\mathbf{a}}=2\|\mathbf{x}\|^{2}\underbrace{\mathbf{P}^{ \top}\mathbf{K}\mathbf{P}}_{\bar{\mathbf{K}}}\mathbf{a}-2\underbrace{\mathbf{P}^{\top}\mathbf{K}}_{\hat{\mathbf{K}}^{ \top}}\mathbf{F}\mathbf{x} \tag{12}\] and so the optimal atom is \[\mathbf{a}=\left(\|\mathbf{x}\|^{2}\bar{\mathbf{K}}\right)^{-1}\hat{\mathbf{K}}^{\top}\mathbf{F} \mathbf{x}. \tag{13}\] The atom is normalized after each update; note that the normalizing factor is \(\left(\mathbf{a}^{\top}\bar{\mathbf{K}}\mathbf{a}\right)^{\frac{1}{2}}\) in order to obtain \(\|\varphi(\mathbf{\widetilde{Y}})\mathbf{a}_{j}\|=1\), as required by problem (6). We call Reduced Kernel Dictionary Learning using a Sampled kernel (RKDL-S) the method solving problem (6) and summarize its update step in Algorithm 1. The optimal representation from step 6 is computed by setting to zero the partial derivative of (11) with respect to \(\mathbf{x}\). The sparse representation step, not listed here, is made using the Kernel OMP algorithm [13]. 
``` Data: reduced kernel matrix \(\bar{\mathbf{K}}\in\mathbb{R}^{p\times p}\) partial kernel matrix \(\hat{\mathbf{K}}\in\mathbb{R}^{N\times p}\) current dictionary \(\mathbf{A}\in\mathbb{R}^{N\times n}\) representation matrix \(\mathbf{X}\in\mathbb{R}^{n\times N}\) Result: updated dictionary \(\mathbf{A}\) 1 Compute error \(\mathbf{E}=\mathbf{I}-\mathbf{P}\mathbf{A}\mathbf{X}\) 2 for \(j=1\) to \(n\) do 3 Modify error: \(\mathbf{F}=\mathbf{E}_{\mathcal{I}_{j}}+\mathbf{P}\mathbf{a}_{j}\mathbf{X}_{j,\mathcal{I}_{j}}\) 4 Update atom: \(\mathbf{a}_{j}=\left(\|\mathbf{x}\|_{2}^{2}\bar{\mathbf{K}}\right)^{-1}\hat{\mathbf{K}}^{\top} \mathbf{F}\mathbf{X}_{j,\mathcal{I}_{j}}\) 5 Normalize atom: \(\mathbf{a}_{j}\leftarrow\mathbf{a}_{j}/\left(\mathbf{a}_{j}^{\top}\bar{\mathbf{K}}\mathbf{a}_{j} \right)^{\frac{1}{2}}\) 6 Update representation: \(\mathbf{X}_{j,\mathcal{I}_{j}}^{\top}\gets\mathbf{F}^{\top}\hat{\mathbf{K}}\mathbf{a}_{j}\) 7 Recompute error: \(\mathbf{E}_{\mathcal{I}_{j}}=\mathbf{F}-\mathbf{P}\mathbf{a}_{j}\mathbf{X}_{j,\mathcal{I}_{j}}\) ``` **Algorithm 1** RKDL-S RKDL-S achieves good results, but the random sampling used to build the reduced set may still include abnormal signals in the training process. This fact can lead to a decrease in accuracy and performance. A better strategy that could overcome this problem would be to use a trained dictionary instead of the sampled signals \(\mathbf{\widetilde{Y}}\). This can be achieved by using a dictionary, denoted \(\bar{\mathbf{D}}\), obtained from the linear cases in the previous sections. The corresponding optimization problem is \[\begin{array}{rl}\min_{\mathbf{A},\mathbf{X}}&\|\varphi(\mathbf{Y})-\varphi(\bar{\mathbf{D} })\mathbf{A}\mathbf{X}\|_{F}^{2}\\ \text{s.t.}&\left\|\mathbf{x}_{\ell}\right\|_{0}\leq s,\ell=1:N\\ &\left\|\varphi(\bar{\mathbf{D}})\mathbf{a}_{j}\right\|=1,j=1:n.\end{array} \tag{14}\] We name it RKDL-D, the last letter indicating the use of a dictionary instead of sampled signals. In order to update the current atom, we rewrite the new optimization problem as follows \[\min_{\mathbf{a}_{j}}\left\|\varphi(\mathbf{Y})-\varphi(\bar{\mathbf{D}})\sum_{i\neq j}\bm {a}_{i}\mathbf{x}_{i}^{\top}-\varphi(\bar{\mathbf{D}})\mathbf{a}_{j}\mathbf{x}_{j}^{\top} \right\|_{F}^{2}. \tag{15}\] Expressing the Frobenius norm via its trace form, (15) becomes \[\begin{split}\min_{\mathbf{a}_{j}}\operatorname{Tr}\left[\left(\varphi ^{\top}(\mathbf{Y})-\sum_{i\neq j}\mathbf{x}_{i}\mathbf{a}_{i}^{\top}\varphi^{\top}(\bar {\mathbf{D}})-\mathbf{x}_{j}\mathbf{a}_{j}^{\top}\varphi^{\top}(\bar{\mathbf{D}})\right)\\ \left(\varphi(\mathbf{Y})-\varphi(\bar{\mathbf{D}})\sum_{i\neq j}\mathbf{a}_{i}\mathbf{x}_{i }^{\top}-\varphi(\bar{\mathbf{D}})\mathbf{a}_{j}\mathbf{x}_{j}^{\top}\right)\right]\!.\end{split} \tag{16}\] After the substitution of scalar products with the kernel function and neglecting the terms that do not depend on \(\mathbf{a}_{j}\), the final optimization problem is \[\begin{split}\min_{\mathbf{a}_{j}}\operatorname{Tr}\left[2\sum_{i\neq j }\mathbf{x}_{i}\mathbf{a}_{i}^{\top}K(\bar{\mathbf{D}},\bar{\mathbf{D}})\mathbf{a}_{j}\mathbf{x}_{j}^{ \top}+\mathbf{x}_{j}\mathbf{a}_{j}^{\top}\underbrace{K(\bar{\mathbf{D}},\bar{\mathbf{D}})}_{ \bar{\mathbf{K}}_{\bar{\mathbf{D}}}}\mathbf{a}_{j}\mathbf{x}_{j}^{\top}\right.\\ \left.-2\underbrace{K(\mathbf{Y},\bar{\mathbf{D}})}_{\hat{\mathbf{K}}_{\bar{\mathbf{D}}}}\mathbf{a }_{j}\mathbf{x}_{j}^{\top}\right].\end{split} \tag{17}\] Algorithm 1 can be easily modified for solving (17), following the same line of reasoning as above. 
In particular, the atom update relation is \[\mathbf{a}_{j}=\left(\|\mathbf{x}\|_{2}^{2}\bar{\mathbf{K}}_{\bar{\mathbf{D}}}\right)^{-1}(\hat{ \mathbf{K}}_{\bar{\mathbf{D}}}^{\top}+\bar{\mathbf{K}}_{\bar{\mathbf{D}}}R)\mathbf{X}_{j}\] and the representation update is \[\mathbf{X}_{j}^{\top}\leftarrow(\hat{\mathbf{K}}_{\bar{\mathbf{D}}}-R\bar{\mathbf{K}}_{\bar{ \mathbf{D}}})\mathbf{a}_{j},\] where we denoted by \(\bar{\mathbf{K}}_{\bar{\mathbf{D}}}\) the reduced kernel matrix \(k(\bar{\mathbf{D}},\bar{\mathbf{D}})\), by \(\hat{\mathbf{K}}_{\bar{\mathbf{D}}}\) the partial kernel matrix \(k(\mathbf{Y},\bar{\mathbf{D}})\) and by \(\mathbf{R}=\mathbf{X}^{\top}\mathbf{A}^{\top}-\mathbf{X}_{j}\mathbf{a}_{j}^{\top}\) the transposed representation product without the contribution of the current atom \(\mathbf{a}_{j}\). The new method is summarized in Algorithm 2. Following the same strategy presented in Section III, the RKDL methods can easily be adapted to their Selective form. The Selective Reduced Kernel Dictionary Learning (SRKDL) problem is solved as the previous one, by introducing two additional steps for the random selection of signals, one for the kernel OMP subproblem and the second one for the coefficient matrix update subproblem. In both cases the random sampling selection is made over the entire data set (including the abnormal signals). ## V Experiments In this section we present the main results obtained with the proposed DL algorithms for anomaly detection. All algorithms have been developed in Python and have been introduced in the framework of the PyOD [14] anomaly detection toolbox. For the evaluation, all vectors of a dataset were normalized and split into two sets: 60% for training and 40% for testing. Each experiment was repeated ten times independently with random splits. In terms of performance, we compute the mean of the area under the receiver operating characteristic (ROC) curve and the precision @ rank n score. We used 16 real-world datasets from different domains, more precisely those gathered in ODDS (Outlier Detection DataSets)1 and used as benchmark in PyOD, and 2 synthetic datasets. Footnote 1: [http://odds.cs.stonybrook.edu/](http://odds.cs.stonybrook.edu/) All the algorithms were implemented in Python on a desktop PC with Ubuntu 20.04 as operating system, with a processor with a base frequency of 2.90 GHz (Max Turbo Frequency 4.80 GHz) and 80 GB of RAM (although 16 GB of RAM is sufficient). During the experiments, ten different rounds were run; the execution time, ROC value and precision @ rank n score were averaged over all rounds. For the nonlinear versions we used two different kernels: the radial basis function kernel \(k(\mathbf{x},\mathbf{y})=\exp{(-\gamma||\mathbf{x}-\mathbf{y}||_{2}^{2})}\) and the polynomial kernel \(k(\mathbf{x},\mathbf{y})=(\gamma\mathbf{x}^{\top}\mathbf{y}+\alpha)^{\beta}\). The hyperparameters of the kernel functions were chosen according to a grid search. Based on the average results on all the datasets, they were set as follows: \(\gamma=1/m\), \(\alpha=1\) and \(\beta=3\) for the synthetic datasets, while for the rest we used \(\gamma=0.1/m\) for the rbf kernel and \(\gamma=10/m\) for the polynomial kernel; recall that \(m\) is the size of a signal. All the implementations are available at [https://github.com/denisilie94/pyod-dl](https://github.com/denisilie94/pyod-dl), including the two synthetic datasets. The first synthetic dataset was generated based on two different sparse coded sets of signals. 
Using two dictionaries, \(D_{i}\), the dictionary for inliers, and \(D_{o}\), the dictionary for outliers, two sets of signals were generated with the sparsity constraint \(s=4\). For the numerical experiment we set the number of inliers to \(N_{i}=512\) and the number of outliers to \(N_{o}=64\), while the dictionary sizes are \(n_{i}=50\) and \(n_{o}=400\). The signal size was set to \(m=64\). For the outlier signals we used an overcomplete dictionary, since its representation ability is much more diverse than in the case of the dictionary for inliers. The second dataset consists of random samples from two normal (Gaussian) distributions, of different mean and standard deviation. We kept the same number of normal and abnormal signals of size 64 as in the previous dataset. The two Gaussian distributions were generated so that the distribution of normal signals clearly overlaps with the distribution of abnormal signals. More exactly, the inlier mean and standard deviation are \(\mu_{i}=0\) and \(\sigma_{i}=0.5\), while the outlier parameters are \(\mu_{o}=-0.1\) and \(\sigma_{o}=0.45\). For the DL methods we used small dictionaries of size \(n=50\), while the sparsity constraint was \(s=5\). All the dictionaries were trained for \(20\) iterations using the AK-SVD method. For the SDL method we used \(train\_perc=0.7\) and \(train\_drop\_perc=0.4\). For the RKDL method, the size of the matrices \(\mathbf{\widetilde{Y}}\) from (6) and \(\mathbf{\bar{D}}\) from (14) was set to \(10\%\) of the number of signals. The selective version of RKDL used the parameters \(train\_perc=0.8\) and \(train\_drop\_perc=0.3\). The results show the good behaviour of our algorithms in detecting outliers via sparse coding. In terms of performance, the DL methods obtain competitive results. The main results are summarized in Tables I, III for the public PyOD methods and in Tables II, IV for the DL methods. In all the tables we highlight the best three results from both sets of methods (PyOD and DL) taken together. For the synthetic datasets, we noticed that the PyOD methods do not obtain good results. The DL methods obtain better classification results for the dataset generated with sparse coding and the dataset with Gaussian distribution. For the ODDS datasets, the overall results are predominantly better for the PyOD methods. However, there are a few datasets where the DL methods stand out as being better. For example, for the _cardio_ dataset, the DL methods achieve the third place in the top, while for the _ionosphere_ and _satellite_ datasets they occupy the second and third places. An interesting dataset is _vertebral_, where the DL methods are the best, occupying all three top positions. In general, the SDL method achieves better results than the DL method, but this is not always true. Depending on how the random selection of signals is made, there is a chance that abnormal signals are used during the training procedure. This is possible for datasets with a very high percentage of outliers or for small datasets. The same statement is valid for the KDL vs SKDL comparison. On the other hand, comparing the standard methods with the kernel methods, we notice that the latter obtain better results. Moreover, the selective strategy improves the invariance of the dictionaries with respect to the representation of abnormal signals. The RKDL-D and SRKDL-D methods often improve the results.
TABLE I: ROC performance – PyOD methods.
In general, the trained dictionary, \(\mathbf{D}\), is better adapted for the representation of the normal signals. However, it is likely that the trained dictionaries contain atoms that are beneficial in the nonlinear representation of all signals, including the outliers. The execution time of the DL methods is usually larger than that of the PyOD methods. For example, for the _musk_ dataset, which is among the largest, DL and SDL take about 6 seconds, i.e., not much more than MCD, which needs about 4 seconds; the RKDL algorithms take between 9 and 11 seconds, while the SRKDL variants are slightly faster, with 7-10 seconds. The other PyOD algorithms are at least \(10\) times faster than the methods presented in the article. ## VI Conclusions In this paper we have presented a novel unsupervised method for outlier detection, based on Dictionary Learning and Kernel Dictionary Learning. We have introduced a reduced kernel DL version that is suitable for problems with large datasets. The kernel reduction technique is based on choosing a small sample of signals from the original dataset, which will further be used for the nonlinear extension. Another way to represent the kernel is to use a dictionary initially trained with the standard DL algorithm. Both methods are accompanied by improved versions based on a random selection of the data used in the training procedure. This ensures invariance in the representation of normal signals, while the capabilities of the dictionaries for the representation of abnormal signals decrease. Based on these results, we demonstrated that sparse learning can easily isolate the outliers from the normal signals, while obtaining results competitive with other unsupervised methods. All the developed algorithms were introduced in an outlier detection toolbox.
2305.17186
Accretion onto a static spherically symmetric regular MOG dark compact object
In astrophysics, the process of a massive body acquiring matter is referred to as accretion. The extraction of gravitational energy occurs as a result of the infall. Since it converts gravitational energy into radiation, accretion onto dark compact objects, e.g. black holes, neutron stars, and white dwarfs, is an extremely significant process in the astrophysical context. The accretion process is a fruitful way to explore the features of modified gravity (MOG) theories by testing the behavior of their solutions associated with dark compact objects. In this paper, we study the motion of electrically neutral and charged particles moving around a regular spherically symmetric MOG dark compact object to explore their related innermost stable circular orbit (ISCO) and energy flux. Then, we turn to investigate the accretion of a perfect fluid onto the regular spherically symmetric MOG dark compact object. We obtain analytical expressions for the four-velocity and proper energy density of the accreting fluid. We see that the MOG parameter increases the ISCO radius of either electrically neutral or charged test particles while it decreases the corresponding energy flux. Moreover, the energy density and the radial component of the four-velocity of the infalling fluid decrease by increasing the MOG parameter near the central source.
Kourosh Nozari, Sara Saghafi, Fateme Aliyan
2023-05-26T18:22:14Z
http://arxiv.org/abs/2305.17186v1
# Accretion onto a static spherically symmetric regular MOG dark compact object ###### Abstract In astrophysics, the process of a massive body acquiring matter is referred to as accretion. The extraction of gravitational energy occurs as a result of the infall. Since it converts gravitational energy into radiation, accretion onto dark compact objects, e.g. black holes, neutron stars, and white dwarfs, is an extremely significant process in the astrophysical context. The accretion process is a fruitful way to explore the features of modified gravity (MOG) theories by testing the behavior of their solutions associated with dark compact objects. In this paper, we study the motion of electrically neutral and charged particles moving around a regular spherically symmetric MOG dark compact object to explore their related innermost stable circular orbit (ISCO) and energy flux. Then, we turn to investigate the accretion of a perfect fluid onto the regular spherically symmetric MOG dark compact object. We obtain analytical expressions for the four-velocity and proper energy density of the accreting fluid. We see that the MOG parameter increases the ISCO radius of either electrically neutral or charged test particles while it decreases the corresponding energy flux. Moreover, the energy density and the radial component of the four-velocity of the infalling fluid decrease by increasing the MOG parameter near the central source. Dark Compact Object, Regular Spacetime, Modified Gravity, Accretion Process. pacs: 04.50.Kd, 04.70.-s, 04.70.Dy, 04.20.Jb ###### Contents * I Introduction * II Action and field equations of STVG theory * II.1 Regular MOG static spherically symmetric dark compact object * III Motion of test particle in MOG dark compact object spacetime * III.1 Motion of electrically neutral test particle * III.1.1 Stable circular orbits around regular MOG dark compact object * III.1.2 Radiant energy flux * III.2 Motion of electrically charged test particle * III.2.1 Stable circular orbits around regular MOG dark compact object for electrically charged particles * IV Accretion onto regular MOG dark compact object * IV.1 Dynamical equations * IV.2 Dynamical parameters * IV.3 Mass evolution * IV.4 Critical accretion * V Summary and Conclusions ## I Introduction Phenomenologically, dark compact objects are an extensive family of astrophysical objects, which include black holes, neutron stars, white dwarfs, etc. From a theoretical point of view, dark compact objects can be predicted in the context of extended gravity theories as well as in scenarios beyond the standard model of particle physics [1]. Recently, LIGO/Virgo observations proved the existence of binary black hole mergers through the detection of gravitational waves [2; 3; 4; 5], and additionally the Event Horizon Telescope (EHT) revealed the existence of supermassive black holes in the center of the galaxy M87 [6; 7; 8; 9; 10; 11; 12; 13] and of the Milky Way [14; 15; 16; 17; 18; 19]. Therefore, it can naturally be anticipated that future advances in the field of gravitational wave astronomy and very long baseline interferometry will reveal new species of compact objects. On the other hand, it is fascinating to understand how and at what limit a dark compact object tends to a black hole as its compactness increases, which makes the study of dark compact objects interesting also from a mathematical viewpoint.
General Theory of Relativity (GR) designed by Albert Einstein, besides a lot of achievements in explaining observations and predicting astonishing phenomena, is not yet the complete theory to describe gravitational interaction and corresponding events in the Universe. Reproduction of the rotation curves of nearby galaxies [20; 21], mass profiles of galaxy clusters [22; 23], intrinsic singularities at the center of black holes, etc are some examples of the failures of GR. Additionally, GR requires the cosmological constant term \(\Lambda\) to explain the positively accelerated expansion of the Universe at late-time [24; 25]. One interesting way to reform GR is to restructure the geometric part of GR through different approaches that can e.g. result in the so-called MOdified Gravity (MOG), which is a Scalar-Tensor-Vector (STVG) theory to describe gravitational interaction [26], proposed and developed by John W. Moffat. A massive vector field \(\phi\) in addition to three scalar field as the mass of the vector field \(\tilde{\mu}\), the effective gravitational constant \(G\), and the vector field coupling \(\xi\) are responsible for expressing the gravitational effects of spacetime in MOG setup. The MOG theory has several achievements in describing astrophysical observations, such as clarifying the rotation curves of many galaxies and the dynamics of galactic clusters without dark matter [27; 28; 29; 30; 31; 32], in addition to compatibility with Planck 2018 data [33]. Moreover, several black hole solutions including non-rotating and rotating ones [34] even with extra dimensions [35], cosmological solutions [36; 37; 38] and also, non-stationary solutions for inhomogeneity distributions of mass-energy in spacetime [39] are released within the framework of MOG theory in recent years. Also, many theoretical and observational efforts have been done to understand the MOG theory features and how it work in different situations [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. Interestingly, the solution describing the regular rotating and non-rotating MOG dark compact object has been recently explored in Ref. [54]. The shadow behaviour of the regular rotating and non-rotating MOG dark compact object is investigated in Ref. [55]. Accretion is a process of particles being dragged onto a dark compact object. This process releases extra energy into surroundings, which is a source of some astronomical phenomena [56; 57]; for instance the production of powerful jets, high-energy radiation, and quasars. A flattened structure made by rotating gaseous materials that slowly spiral into a massive central body is called an accretion disk. Accretion disks typically form around compact objects when interstellar matter exists. Accretion disks of compact objects are results of rotating gaseous materials in unstable bounded orbits [56; 57]. Under some conditions, the gas particles fall into gravitational potential of the compact objects, which causes gravitational energy in the form of heat. The inner portion of the accretion disk cools down as a result of the conversion of some heat into radiation [56; 57]. The electromagnetic spectrum of the emitted radiation can be analyzed when it reaches radio, optical, or X-ray telescopes. The motion of the gas particles, which may also be related to the structure and nature of the central mass, determines the properties of this radiation. As a result, studying accretion disk emission spectra can provide fruitful astrophysical data. 
Hence the accretion disks of compact objects drawn a lot of attention and have been studied in several cases in the literature [58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. The regular non-rotating (spherically symmetric) MOG dark compact object [54], which can be formed from the collapse of stellar object, can be tested in astrophysical phenomena, e.g. accretion process. It is the reason we found it interesting to study accretion disk onto the regular MOG dark compact object. In this regard, we also aim to study the motion of electrically neutral and charged test particles moving in this spacetime, and explore their corresponding energy flux. The rest of the paper is organized as follows. In Section II, we review the MOG field equations, and then, we introduce the regular MOG dark compact object spacetime and its features. Next, we study the motion of electrically neutral and charged test particles travelling in the regular MOG dark compact object spacetime in Section III. Then, we investigate the static spherically symmetric accretion in Section IV. Finally, we end with some conclusions in Section V. ## II Action and field equations of STVG theory The total action in the theory of STVG is in the form of [26] \[S=S_{GR}+S_{M}+S_{\phi}+S_{S}\,, \tag{1}\] where \(S_{GR}\) is the Einstein-Hilbert action, \(S_{M}\) is the action of all possible matter sources, \(S_{\phi}\) is the action of the (spin 1 graviton) vector field \(\phi^{\mu}\) possessing the mass \(\tilde{\mu}\) as one of the scalar fields in the theory, and \(S_{S}\) is the action of three scalar fields, which can be expressed as follows \[S_{GR}=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}\frac{1}{G}R\,, \tag{2}\] \[S_{\phi}=-\int d^{4}x\sqrt{-g}\left(\frac{1}{4}B^{\mu\nu}B_{\mu\nu}+V_{1}( \phi)\right)\xi\,, \tag{3}\] \[S_{S} = \int d^{4}x\sqrt{-g}\left[\frac{1}{G^{3}}\left(\frac{1}{2}g^{\mu \nu}\nabla_{\mu}G\nabla_{\nu}G-V_{2}(G)\right)+\frac{1}{\tilde{\mu}^{2}G} \left(\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\tilde{\mu}\nabla_{\nu}\tilde{\mu}-V_{ 3}(\tilde{\mu})\right)\right. \tag{4}\] \[\left.+\frac{1}{G}\left(\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\xi \nabla_{\nu}\xi-V_{4}(\xi)\right)\right]\,,\] in which \(g_{\mu\nu}\) is the background metric tensor and \(g\) is the corresponding determinant, \(R\) is the Ricci scalar constructed by contracting \(R_{\mu\nu}\) as the Ricci tensor, \(G\) is a scalar field in the setup, which known as the enhanced Newtonian parameter, \(\xi\) is third scalar field in the setup as the vector field coupling, \(V_{1}(\phi)\), \(V_{2}(G)\), \(V_{3}(\tilde{\mu})\), and \(V_{4}(\xi)\) are the corresponding potentials of the vector field \(\phi^{\mu}\), and three scalar field \(G\), \(\tilde{\mu}\), and \(\xi\), respectively, and \(B_{\mu\nu}=\partial_{\mu}\phi_{\nu}-\partial_{\nu}\phi_{\mu}\), and also \(\nabla_{\mu}\) stands for the covariant derivative in the spacetime. In the STVG theory, \(T_{\mu\nu}={}^{(M)}T_{\mu\nu}+{}^{(\phi)}T_{\mu\nu}+{}^{(S)}T_{\mu\nu}\) is the total stress-energy tensor, in which the stress-energy tensor of matter sources is \({}^{(M)}T_{\mu\nu}\), the stress-energy tensor of the scalar fields is \({}^{(S)}T_{\mu\nu}\), and the stress-energy tensor of the vector field is \[{}^{(\phi)}T_{\mu\nu}=-\frac{1}{4}\left(B_{\mu}^{\ \sigma}B_{\nu\sigma}- \frac{1}{4}g_{\mu\nu}B^{\sigma\lambda}B_{\sigma\lambda}\right)\,, \tag{5}\] for which \(V_{1}(\phi)=0\). 
One can find the full field equations of the STVG framework by variation of the action \(S\) concerning the inverse of the metric tensor, which yields [26] \[G_{\mu\nu}+G\left(\nabla^{\gamma}\nabla_{\gamma}\frac{1}{G}g_{\mu\nu}-\nabla_{ \mu}\nabla_{\nu}\frac{1}{G}\right)=8\pi GT_{\mu\nu}\,, \tag{6}\] in which the Einstein tensor is defied as \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\). ### Regular MOG static spherically symmetric dark compact object The line element of the regular MOG static spherically symmetric dark compact object were found under the following assumptions [54] * The vector field is massless, i.e., \(\tilde{\mu}=0\), since one can prove that for MOG compact objects, e.g., black holes possessing horizons, the mass of the vector field in the setup is zero. * The enhanced Newtonian parameter \(G\) is defined as a constant depending on the free dimensionless parameter \(\alpha\) so that \(G=G_{N}(1+\alpha)\) where \(G_{N}\) is the Newtonian constant. Furthermore, the gravitational source charge of the vector field is \(Q_{g}=\sqrt{\alpha G_{N}}M\) where \(M\) is the source mass. Here, we set \(G_{N}=1\). * The vector field coupling is set to unity, i.e., \(\xi=1\). * The matter-free field equations of STVG setup is considered since the MOG dark compact object is a vacuum solution of the framework. The above assumptions result in \(S_{M}=S_{S}=0\) and consequently, we have \({}^{(M)}T_{\mu\nu}={}^{(S)}T_{\mu\nu}=0\). Thus, the field equations (6) now reduce to the following form \[G_{\mu\nu}=8\pi(1+\alpha)^{(\phi)}T_{\mu\nu}\,. \tag{7}\] Solving the last equation by following the procedure introduced in Ref. [54] leads to the line element of the regular MOG static spherically symmetric dark compact object as follows \[ds^{2}=f(r)dt^{2}-\frac{1}{f(r)}dr^{2}-r^{2}d\Omega^{2}\,, \tag{8}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\) is the line element of the unit 2-sphere, and also we have defined \[f(r)=1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}} +\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{ 2}}\,, \tag{9}\] which satisfies the weak energy condition [80; 81]. The MOG dark compact object possesses a critical value for \(\alpha\) as \(\alpha_{crit}=0.674\)[54], so that for \(\alpha\leq\alpha_{crit}\) it has two horizons. It is worth mentioning that the (spin 1 graviton) vector field produces a repulsive gravitational force, which prevents the collapse of the MOG dark compact object to a MOG black hole with horizon. Setting \(\alpha=0\) in the line element (8) recovers the Schwarzschild black hole in GR. Moreover, the asymptotic behavior of the MOG compact object in the limit of \(r\rightarrow\infty\) is deduced as follows \[f(r)\approx 1-\frac{2(1+\alpha)M}{r}+\frac{\alpha(1+\alpha)M^{2}}{r^{2}}\,. \tag{10}\] When, \(\alpha\leq\alpha_{crit}\), the two horizons of the regular MOG static spherically symmetric dark compact object in the limit of \(r\rightarrow\infty\) can be found as \[r_{\pm}=M\left(1+\alpha\pm\sqrt{1+\alpha}\right)\,. \tag{11}\] When, \(\alpha>\alpha_{crit}\), there is a naked regular MOG static spherically symmetric dark compact object with no horizon. On the other hand, approaching the source, i.e., \(r\to 0\), the MOG dark compact object behaves to the form \[f(r)\approx 1-\frac{r^{2}}{M^{2}}\left(\frac{2\sqrt{1+\alpha}-\sqrt{\alpha}}{(1+ \alpha)\alpha^{\frac{3}{2}}}\right)\,. 
\tag{12}\] Therefore, the spacetime metric of the MOG dark compact object is regular so that \(f(0)=1\). Additionally, one can verify that the Kretschmann scalar \(R^{\mu\nu\lambda\sigma}R_{\mu\nu\lambda\sigma}\) in addition to the Ricci scalar \(R\) in the spacetime metric are regular at \(r=0\). For the static spherically symmetric system, the gravitational redshift \(z\) at the asymptotic distance \(r\) to an observer is gathered as follows \[z(r)=\frac{1}{\sqrt{f(R)}}-1\,, \tag{13}\] where the radius of the MOG dark compact object is \(R\). For \(\alpha<\alpha_{crit}\), in the limit of \(r\rightarrow\infty\), the gravitational redshift of the compact object becomes infinite on the horizon \(r_{+}\) and for \(\alpha>\alpha_{crit}\) it has a finite value. Based on the observational data, however, one anticipates that the regular MOG dark compact object is adequately dark to be compatible with binary X-ray observations, so that \(\alpha\sim\alpha_{crit}\)[54]. ## III Motion of test particle in MOG dark compact object spacetime The geodesic structure of the spacetime of the MOG compact object governs the trajectory of a test particle. In this section, we investigate the time-like geodesics around the regular MOG static spherically symmetric dark compact object through Lagrangian formalism [47; 48; 82; 83; 84]. Under temporal translation and rotation around the axes of symmetry, the line element (8) of the MOG dark compact object associated with the metric coefficient (9) is invariant since this spacetime is static and spherically symmetric. Therefore, the spacetime of the regular MOG dark compact object possesses two Killing vectors as follows \[\begin{split}{}^{(t)}\zeta^{\mu}\frac{\partial}{\partial x^{\mu} }&=(1,0,0,0)\frac{\partial}{\partial x^{\mu}}=\frac{\partial}{ \partial t}\,,\\ {}^{(\varphi)}\zeta^{\mu}\frac{\partial}{\partial x^{\mu}}& =(0,0,0,1)\frac{\partial}{\partial x^{\mu}}=\frac{\partial}{ \partial\varphi}\,.\end{split} \tag{14}\] These Killing vectors imply two conserved (constants) quantities for the motion of the test particle in the spacetime, which we aim to find them in the following. We plan to investigate the trajectory of both electrically neutral and charged test particles motion around the regular MOG dark compact object. ### Motion of electrically neutral test particle The Lagrangian of a test particle moving in the spacetime of the regular MOG dark compact object is expressed as \[\mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}\,, \tag{15}\] where over-dot stands for derivative with respect to the affine parameter \(\tau\). The four-velocity of the test particle is defined as \(\dot{x}^{\mu}\equiv u^{\mu}=(u^{t},u^{r},u^{\theta},u^{\varphi})\). We interested in the planar motion of the particle on the equatorial plane with \(\theta=\frac{\pi}{2}\). 
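Before writing out the Euler-Lagrange equations, it is worth noting that \(t\) and \(\varphi\) are cyclic coordinates of the Lagrangian (15) on the equatorial plane, so their conjugate momenta are conserved; these constants are exactly what appear below as Eqs. (17) and (18). The following is a small symbolic check of this statement (not part of the paper), written in Python with sympy:

```python
import sympy as sp

r, M, alpha = sp.symbols('r M alpha', positive=True)
tdot, rdot, phidot = sp.symbols('tdot rdot phidot')

s = r**2 + alpha*(1 + alpha)*M**2
f = 1 - 2*(1 + alpha)*M*r**2/s**sp.Rational(3, 2) + alpha*(1 + alpha)*M**2*r**2/s**2

# Lagrangian (15) restricted to the equatorial plane (theta = pi/2):
Lag = sp.Rational(1, 2)*(f*tdot**2 - rdot**2/f - r**2*phidot**2)

# t and varphi do not appear explicitly, so their conjugate momenta are constants
# of motion; identifying E = f*tdot and L = r**2*phidot reproduces Eqs. (17)-(18).
p_t   = sp.diff(Lag, tdot)      # =  f(r)*tdot
p_phi = sp.diff(Lag, phidot)    # = -r**2*phidot
print(sp.simplify(p_t - f*tdot), sp.simplify(p_phi + r**2*phidot))   # both print 0
```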
Thus, utilizing the Euler-Lagrange equation \[\frac{d}{d\tau}\left(\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}\right) -\frac{\partial\mathcal{L}}{\partial x^{\mu}}=0\,, \tag{16}\] one can find two conserved quantities of the particle motion corresponding with two Killing vectors as follows \[\frac{dt}{d\tau}=\dot{t}\equiv u^{t}=\frac{E}{f(r)}=\frac{E}{\left(1-\frac{2(1+ \alpha)Mr^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+ \alpha)M^{2}r^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}\right)}\,, \tag{17}\] \[\frac{d\varphi}{d\tau}=\dot{\varphi}\equiv u^{\varphi}=\frac{L}{r^{2}}\,, \tag{18}\] where \(E\) and \(L\) as two conserved quantities are the total energy and the total angular momentum per unit mass of the particle, respectively. Moreover, using the Euler-Lagrange equation, we can find \(\frac{d\theta}{d\tau}=\dot{\theta}\equiv u^{\theta}=0\) in addition to \[\frac{dr}{d\tau}=\dot{r}\equiv u^{r}=\left[-f(r)\left(1-\frac{E^{2}}{f(r)}+ \frac{L^{2}}{r^{2}}\right)\right]^{\frac{1}{2}}\,. \tag{19}\] Based on the normalization condition for the four-velocity of the test particle, i.e., \(u^{\mu}u_{\mu}=1\) and utilizing Eqs. (9), (17), and (18) one can find \[\dot{r}^{2}=E^{2}-V_{eff}\,, \tag{20}\] where \(V_{eff}\) is the effective potential of the test particle, which is defined as \[V_{eff}=f(r)\left(1+\frac{L^{2}}{r^{2}}\right)=\left(1-\frac{2(1+\alpha)Mr^{2} }{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{ 2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}\right)\left(1+\frac{L^{2}}{r^{2}} \right)\,. \tag{21}\] Effective potential analysis is significant in studying geodesic structure. The location of the circular orbits, for example, is determined by the local extremum of the effective potential. Figure 1 illustrates the behavior of the effective potential \(V_{eff}\) for the MOG dark compact object in comparison with the Schwarzschild case in GR. From Fig. 1 we see that increasing the value of the parameter \(\alpha\) leads to increment of the effective potential. Figure 1: _The illustration of \(V_{eff}\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ Stable circular orbits around regular MOG dark compact object The main characteristic of circular orbits is \(\dot{r}=\ddot{r}=0\) or equivalently \(u^{r}=\dot{u}^{r}=0\). Hence, from Eqs. (17)-(19) one can verify that for circular orbits, \(E^{2}=V_{eff}\) and consequently, \(\frac{dV_{eff}}{dr}=0\) must be satisfied. Solving these two equations simultaneously by using Eqs. 
(9), (17), and (18) results in the following relations for the total (specific) energy \(E\), the total (specific) angular momentum \(L\), and the angular velocity \(\Omega_{\varphi}\equiv\frac{d\varphi}{dt}=\frac{u^{\varphi}}{u^{t}}\) of the test particle in the MOG dark compact object background \[E^{2}=\frac{2f^{2}(r)}{2f(r)-rf^{\prime}(r)}=\frac{\left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{3}}{y_{1}}\left(1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}\right)^{2}\,, \tag{22}\] \[L^{2}=\frac{r^{3}f^{\prime}(r)}{2f(r)-rf^{\prime}(r)}=\frac{y_{2}(1+\alpha)Mr^{4}}{y_{1}\sqrt{\alpha(1+\alpha)M^{2}+r^{2}}}\,, \tag{23}\] \[\Omega_{\varphi}^{2}=\frac{1}{2r}f^{\prime}(r)=\frac{y_{2}(1+\alpha)M}{\left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{\frac{3}{2}}}\,, \tag{24}\] where a prime stands for differentiation with respect to the radial coordinate \(r\), and we have defined \[y_{1}\equiv r^{6}+\alpha^{3}(1+\alpha)^{3}M^{6}+3\alpha^{2}(1+\alpha)^{2}M^{4}r^{2}+(1+\alpha)Mr^{4}\left(5\alpha M-3\sqrt{\alpha(1+\alpha)M^{2}+r^{2}}\right)\,, \tag{25}\] \[y_{2}\equiv r^{4}-\alpha Mr^{2}\left(y_{1}+(1+\alpha)M\right)-\alpha^{2}(1+\alpha)M^{3}\left(2(1+\alpha)M-y_{1}\right)\,. \tag{26}\] According to Eqs. (22)-(24), the condition \(2f(r)-rf^{\prime}(r)>0\) is required for the existence of circular orbits, in order for the total energy and the total angular momentum to be real. Figure 2 demonstrates the behavior of \(E^{2}\) versus \(r\), from which we can see that increasing the parameter \(\alpha\) amplifies the specific energy of the test particle in the spacetime of the regular MOG dark compact object, while far from the source the energy becomes almost constant. The corresponding curve of the Schwarzschild solution in GR is also shown in Fig. 2, which always has smaller values than the regular MOG dark compact object case. Figure 3 illustrates \(L^{2}\) versus \(r\) for the regular MOG dark compact object in comparison with the Schwarzschild solution in GR for different values of \(\alpha\); again, increasing \(\alpha\) increases the value of \(L^{2}\). All of these figures have smaller values of \(E^{2}\) and \(L^{2}\) than the corresponding ones in the case of the Schwarzschild solution in GR. In Fig. 4 we see the curves of \(\Omega_{\varphi}^{2}\) versus \(r\) for the regular MOG dark compact object in comparison with the Schwarzschild case; increasing \(\alpha\) reduces the value of \(\Omega_{\varphi}^{2}\), so that the Schwarzschild curve contains higher values of \(\Omega_{\varphi}^{2}\) than the corresponding ones of the regular MOG dark compact object. The locations of the stable circular orbits correspond to the local minima of the effective potential. Accordingly, an innermost (marginally) stable circular orbit (ISCO) requires the conditions \[\frac{dV_{eff}}{dr}=0\,,\qquad\frac{d^{2}V_{eff}}{dr^{2}}=0\,, \tag{27}\] to be satisfied. The existence of the ISCO, \(r_{{}_{ISCO}}\), is a purely relativistic phenomenon. In contrast to classical mechanics, in which the effective potential possesses just one minimum, in GR the effective potential can exhibit either a local maximum and a local minimum or no extremum at all, depending on the choice of \(L\). A stable outer and an unstable inner circular orbit of the test particle are associated with these extrema.
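Conditions (27) are straightforward to handle numerically. As an illustration only (not part of the paper), the sketch below substitutes the circular-orbit angular momentum, Eq. (23), into the marginal-stability condition and root-finds in \(r\); the search bracket, finite-difference steps, and values of \(\alpha\) are ad hoc choices, and for \(\alpha=0\) the result should reduce to the Schwarzschild value \(r_{{}_{ISCO}}=6M\).

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0

def f(r, a):
    """Metric function of Eq. (9)."""
    s = r**2 + a*(1 + a)*M**2
    return 1 - 2*(1 + a)*M*r**2/s**1.5 + a*(1 + a)*M**2*r**2/s**2

def fp(r, a, h=1e-6):
    """Numerical f'(r) by central differences."""
    return (f(r + h, a) - f(r - h, a))/(2*h)

def L2_circ(r, a):
    """Squared angular momentum on a circular orbit, Eq. (23), first equality."""
    return r**3*fp(r, a)/(2*f(r, a) - r*fp(r, a))

def Veff(r, a, L2):
    """Effective potential, Eq. (21)."""
    return f(r, a)*(1 + L2/r**2)

def d2Veff(r, a, L2, h=1e-4):
    return (Veff(r + h, a, L2) - 2*Veff(r, a, L2) + Veff(r - h, a, L2))/h**2

def r_isco(a, bracket=(4.5, 15.0)):
    """Radius where the circular orbit is marginally stable, i.e. Eq. (27)."""
    g = lambda r: d2Veff(r, a, L2_circ(r, a))
    return brentq(g, *bracket)

for a in (0.0, 0.09, 0.2):
    print(f"alpha = {a}:  r_ISCO ~ {r_isco(a):.4f} M")
# alpha = 0 should return the Schwarzschild value 6M; larger alpha gives a larger r_ISCO.
```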
ISCO is where the stable and unstable circular orbits coincide for a specific value of \(L\). Due to the complexity of the metric coefficient function (9) the explicit analytical form of ISCO associated with the regular MOG dark compact object is not available. Hence, solving equation set (27) numerically by using Wolfram Mathematica (v13.1) results in numerical values of the ISCO for the test particle moving in the spacetime of the MOG dark compact object. To do this, we set \(M=1\). Then, for three different values of the MOG parameter \(\alpha\) in Table 1 we collect the numerical values of \(r_{{}_{ISCO}}\), \(L_{{}_{ISCO}}\), and \(E_{{}_{ISCO}}\) for the regular static spherically symmetric MOG dark compact object. On the other hand, we know that for Schwarzschild black hole in GR, the ISCO is \(r_{{}_{ISCO}}=6M\). Therefore, from Table 1 we see that increasing the value of \(\alpha\) leads to grow the ISCO associated with the regular MOG dark compact object. Figure 3: _The behavior of \(L^{2}\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ Figure 2: _The plot of \(E^{2}\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black line is for the case of Schwarzschild solution in GR._ #### iii.1.2 Radiant energy flux In accretion process, the falling particles at infinity from rest will accrete onto the source mass. During the process, the gravitational energy of these falling particles will release and then convert into the electromagnetic radiation [56; 84]. One can express the radiation flux of the accretion disc around the central mass in the following form, which depends on the specific angular momentum, the specific energy, and the angular velocity of the falling test particle [56; 84] \[\mathcal{F}(r)=-\frac{\dot{M}}{4\pi}\frac{\Omega_{\varphi}^{\prime}}{\sqrt{-g }\left(E-L\Omega_{\varphi}\right)^{2}}\int_{r_{ISCO}}^{r}\left(E-L\Omega_{ \varphi}\right)L^{\prime}dr\,, \tag{28}\] where \(\dot{M}\) is the accretion rate and \(g=\det(g_{\mu\nu})=-r^{4}\sin^{2}\theta\) is the determinant of the background metric tensor associated with the line element (8), so that on the equatorial plane, we have \(g=-r^{4}\). Inserting Eqs. (22)-(24) into Eq. (28) and also, using numerical data in Table 1, one can find an approximate expression for radiation flux as Figure 4: _The illustration of \(\Omega_{\varphi}^{2}\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ follows \[\begin{split}\mathcal{F}(r)&\approx-\frac{(1+\alpha) \dot{M}M^{\frac{3}{2}}y_{1}y_{3}\left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{\frac{ 5}{4}}}{96\pi(r-3M)r^{\frac{3}{2}}\sqrt{(1+\alpha)My_{2}}\left((1+\alpha)Mr^{3 }y_{2}-rf(r)\left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{\frac{7}{2}}\right)^{2} }\\ &\times\left(18(\alpha-2)Mr^{2}+6(\alpha+2)r^{3}-M^{2}\alpha(6M+ 79r)-4(3+\alpha)\sqrt{3M}r^{\frac{3}{2}}(3M-r)\tanh^{-1}\left[\sqrt{\frac{3M}{r }}\right]\right)\,,\end{split} \tag{29}\] where we have defined \[y_{3}\equiv M\alpha r^{3}\left(4\sqrt{\alpha(1+\alpha)M^{2}+r^{2}}+9(1+\alpha) M\right)+4\alpha^{2}(1+\alpha)M^{3}r\left(3(1+\alpha)M-2\sqrt{\alpha(1+\alpha)M^{2 }+r^{2}}\right)-3r^{5}\,. 
\tag{30}\] Figure 5 illustrates the energy flux \(\mathcal{F}(r)\) versus \(r\) for the regular MOG dark compact object, with Fig. 5(a) corresponding to \(\alpha=0.09\) and Fig. 5(b) to \(\alpha=0.2\). From Fig. 5 we see that the energy flux is zero for \(r<r_{{}_{ISCO}}\); at \(r=r_{{}_{ISCO}}\) it starts growing from zero, reaches its maximum at some \(r>r_{{}_{ISCO}}\), and then falls back to zero far from the source. Comparing Figs. 5(a) and 5(b) demonstrates that increasing the value of \(\alpha\) decreases the energy flux in the setup. It should also be noted that thermodynamical equilibrium is a basic requirement for the model describing the steady-state accretion disk. As a result, the radiation emitted from the accretion disk surface is equivalent to a black body spectrum [46; 56; 84]. This means that the energy flux and the effective temperature of the accretion disk can be related through the well-known Stefan-Boltzmann law \(\mathcal{F}(r)=\sigma_{{}_{SB}}T^{4}\), in which \(\sigma_{{}_{SB}}\) is the Stefan-Boltzmann constant. Therefore, from this law, one can find the effective temperature \(T\) of the accretion disk. Furthermore, at a distance \(d\) and inclination angle \(\gamma\) to the central mass, the luminosity of the accretion disk can be found as [84; 46; 56] \[L(\upsilon)=4\pi d^{2}I(\upsilon)=\frac{8\cos\gamma}{\pi}\int_{r_{{}_{ISCO}}}^{r}\int_{0}^{2\pi}\frac{\upsilon_{e}^{3}r}{\exp\left[\frac{\upsilon}{T}\right]-1}\,d\varphi\,dr\,, \tag{31}\] where \(I(\upsilon)\) is the thermal energy flux as a function of the frequency \(\upsilon\), while \(\upsilon_{e}=\upsilon(1+z)\) is the emitted frequency at redshift \(z\). Calculating the luminosity of the accretion disk from the above equation is not possible analytically, due to the complexity of the relations. Figure 5: _The behavior of \(\mathcal{F}(r)\) versus \(r\) for the regular static spherically symmetric MOG dark compact object._ ### Motion of electrically charged test particle Assuming a magnetic coupling process [85; 86; 87; 88; 89] in the vicinity of the regular MOG dark compact object, energy and angular momentum can be transferred from the dark compact object to the accretion disk. Therefore, on the horizon of the dark compact object, the strength of the magnetic field is expressed as [48] \[B_{h}=\frac{1}{r_{h}}\sqrt{2m_{p}\dot{M}c}\,, \tag{32}\] where the index (\(h\)) stands for horizon, \(c\) is the speed of light, and \(m_{p}\) is the magnetization parameter, so that \(m_{p}=1\) means the equipartition state for the accretion and magnetic coupling process. Theoretical and experimental evidence demonstrates that a magnetic field can exist in the surroundings of black holes and other compact objects [90; 91; 92]. Here, we suppose a weak magnetic field whose energy does not influence the background geometry [93]. Accordingly, this type of regular MOG dark compact object is called weakly magnetized. Following the procedure introduced in Refs. [48; 89; 91], we aim to calculate the magnetic field in the surroundings of the regular MOG dark compact object. The Killing vectors introduced in Eq. (14) satisfy the following Killing vector equation [94] \[\Box\zeta^{\mu}=0\,, \tag{33}\] where \(\Box=\partial_{\mu}\partial^{\mu}\) is the d'Alembert operator. In the Lorentz gauge, the above equation is equivalent to the Maxwell equation for the four-potential \[A^{\mu}_{;\mu}=0\,, \tag{34}\] in which \(A^{\mu}\) is the four-potential and ";" denotes the covariant derivative.
The expression \[A^{\mu}=\frac{B}{2}\,^{(\varphi)}\zeta^{\mu}=\left(0,0,0,\frac{B}{2}\right)\,, \tag{35}\] corresponds to a weak magnetic field, which is homogeneous at spatial infinity with strength \(B\). Note that (35) automatically satisfies the gauge condition (34), since its only nonvanishing component is the constant \(A^{\varphi}=B/2\) and \(\sqrt{-g}=r^{2}\sin\theta\) does not depend on \(\varphi\). Moreover, the magnetic field four-vector can be defined as follows \[B^{\mu}=-\frac{\epsilon^{\mu\nu\lambda\sigma}}{\sqrt{-g}}F_{\lambda\sigma}w_{\nu}\,, \tag{36}\] where \(\epsilon^{\mu\nu\lambda\sigma}\) is the Levi-Civita symbol, \(F_{\lambda\sigma}=A_{\sigma;\lambda}-A_{\lambda;\sigma}\) is the Maxwell tensor, and \(w_{\nu}\) is the four-velocity of a local observer at rest, which can be written as \[w_{\nu}=\left(\frac{1}{\sqrt{f(r)}},0,0,0\right)\,. \tag{37}\] Utilizing Eqs. (34)-(37) results in the magnetic field four-vector, which on the equatorial plane reads \[B^{\nu}=\left(0,0,-\frac{B\sqrt{f(r)}}{r},0\right)\,. \tag{38}\] It is assumed that the magnetic field is directed upward along the z-axis at spatial infinity [95]. Figure 6 illustrates \(B^{\theta}\) in the vicinity of the regular MOG dark compact object versus \(r\) for different values of \(\alpha\). We see from Fig. 6 that far from the regular MOG dark compact object the magnetic field almost vanishes. Also, the effect of \(\alpha\) on \(B^{\theta}\) is to reduce its strength. The Lagrangian of an electrically charged test particle with rest mass \(m\) and electric charge \(q\) travelling in the spacetime of the regular MOG dark compact object is expressed as \[\tilde{\mathcal{L}}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}+\frac{q}{m}A_{\mu}\dot{x}^{\mu}\,. \tag{39}\] Similar to the previous section, using the Euler-Lagrange equation (16), on the equatorial plane we can find \[\dot{t}\equiv\tilde{u}^{t}=\frac{\tilde{E}}{f(r)}=\frac{\tilde{E}}{\left(1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}\right)}\,, \tag{40}\] and \[\dot{\varphi}\equiv\tilde{u}^{\varphi}=\frac{\tilde{L}}{r^{2}}-\frac{qB}{2m}\,, \tag{41}\] where \(\tilde{u}^{\mu}\) is the four-velocity of the electrically charged test particle, and \(\tilde{E}\) and \(\tilde{L}\) are its specific energy and specific angular momentum, respectively. Again, on the equatorial plane, one can employ the normalization condition \(\tilde{u}^{\mu}\tilde{u}_{\mu}=1\) to obtain \[\dot{r}^{2}=\tilde{E}^{2}-\tilde{V}_{eff}\,, \tag{42}\] where \(\tilde{V}_{eff}\) is the effective potential of the electrically charged test particle, which is \[\begin{split}\tilde{V}_{eff}&=f(r)\left(1+r^{2}\left(\frac{\tilde{L}}{r^{2}}-\frac{qB}{2m}\right)^{2}\right)\\ &=\left(1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}\right)\left(1+r^{2}\left(\frac{\tilde{L}}{r^{2}}-\frac{qB}{2m}\right)^{2}\right)\,.\end{split} \tag{43}\] Figure 7 illustrates \(\tilde{V}_{eff}\) versus \(r\) for the electrically charged test particle moving in the spacetime of the regular MOG dark compact object for different values of \(\alpha\). From Fig. 7 we see that increasing \(\alpha\) at first increases the effective potential, while far from the source it decreases it. Figure 6: _The illustration of \(B^{\theta}\) around the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\), where we have set \(M=1\).
The black solid line is for the case of the Schwarzschild solution in GR._ #### iii.2.1 Stable circular orbits around regular MOG dark compact object for electrically charged particles Similar to the previous section, the conditions \(\tilde{E}^{2}=\tilde{V}_{eff}\) and \(\frac{d\tilde{V}_{eff}}{dr}=0\) must be satisfied for circular orbits. Therefore, one can solve these equations simultaneously by using Eqs. (9), (40), and (41) to obtain the following relations \[\tilde{E}^{2}=f(r)\left(1+\frac{r\left(\sqrt{B^{2}q^{2}rf(r)^{2}-m^{2}rf^{\prime}(r)^{2}+2m^{2}f(r)f^{\prime}(r)}-Bq\sqrt{r}f(r)\right)^{2}}{m^{2}\left(rf^{\prime}(r)-2f(r)\right)^{2}}\right)\,, \tag{44}\] \[\tilde{L}=\frac{2r^{\frac{3}{2}}\sqrt{B^{2}q^{2}rf(r)^{2}-m^{2}rf^{\prime}(r)^{2}+2m^{2}f(r)f^{\prime}(r)}-Bqr^{3}f^{\prime}(r)}{4mf(r)-2mrf^{\prime}(r)}\,, \tag{45}\] \[\tilde{\Omega}_{\varphi}^{2}=\frac{\left(\sqrt{B^{2}q^{2}rf(r)^{2}-m^{2}rf^{\prime}(r)^{2}+2m^{2}f(r)f^{\prime}(r)}-Bq\sqrt{r}f(r)\right)^{2}}{2r\left(f(r)\left(B^{2}q^{2}r^{2}+2m^{2}\right)-r\left(Bq\sqrt{r}\sqrt{B^{2}q^{2}rf(r)^{2}-m^{2}rf^{\prime}(r)^{2}+2m^{2}f(r)f^{\prime}(r)}+m^{2}f^{\prime}(r)\right)\right)}\,. \tag{46}\] Eqs. (44)-(46) in the limit \(q\to 0\) reduce to Eqs. (22)-(24). The location of the ISCO for the massive electrically charged test particle moving in the regular MOG dark compact object spacetime satisfies the conditions (27). As previously mentioned, an explicit analytical form of the ISCO for the electrically charged test particle is not available due to the complexity of the metric coefficient (9). Thus, one can numerically solve the equation set (27) by using, for instance, Wolfram Mathematica (v13.1) to obtain the numerical values of the ISCO for the electrically charged test particle moving in the spacetime of the regular MOG dark compact object. To do this, in Table 2 we again set \(M=1\) and, for different values of \(\alpha\), we collect the numerical values of \(\tilde{r}_{{}_{ISCO}}\), \(\tilde{L}_{{}_{ISCO}}\), and \(\tilde{E}_{{}_{ISCO}}\) corresponding to the electrically charged test particle for the regular static spherically symmetric MOG dark compact object. Table 2 demonstrates that increasing the value of \(\alpha\) increases the ISCO radius of the electrically charged test particle associated with the regular MOG dark compact object. Comparing Tables 1 and 2 shows that the ISCO values related to the weakly magnetised regular MOG dark compact object are smaller than the corresponding ones related to the regular MOG dark compact object. Therefore, the electric charge of the test particle and the magnetic field in the vicinity of the source affect the ISCO radius by reducing it. Figure 7: _The illustration of \(\tilde{V}_{eff}\) of the massive electrically charged test particle moving in the spacetime of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ Similarly to the previous section, one can find the energy flux associated with the massive electrically charged particle moving in the regular MOG dark compact object spacetime by inserting Eqs. (44)-(46) into Eq. (28). Also, the corresponding luminosity of the accretion disk can be found from Eq. (31). However, due to the length and complexity of the related equations, this cannot be done analytically.
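Numerically, however, the flux integral (28) is easy to evaluate once \(E(r)\), \(L(r)\), and \(\Omega_{\varphi}(r)\) are specified. The sketch below does this for the electrically neutral case, Eqs. (22)-(24); the charged case would simply swap in Eqs. (44)-(46). It is only an illustration: \(\dot{M}\), \(M\), \(\alpha\), and the ISCO radius are placeholder values, and the expression is meaningful only for \(r>r_{{}_{ISCO}}\).

```python
import numpy as np
from scipy.integrate import quad

M, Mdot, alpha = 1.0, 1.0, 0.09   # illustrative values; Mdot only sets the overall scale
r_isco = 6.1                       # placeholder; take it from the ISCO computation above

def f(r):
    s = r**2 + alpha*(1 + alpha)*M**2
    return 1 - 2*(1 + alpha)*M*r**2/s**1.5 + alpha*(1 + alpha)*M**2*r**2/s**2

def d(F, r, h=1e-5):
    """Central-difference derivative of a callable F at r."""
    return (F(r + h) - F(r - h))/(2*h)

# Circular-orbit quantities of the neutral particle, Eqs. (22)-(24), first equalities.
E  = lambda r: np.sqrt(2*f(r)**2/(2*f(r) - r*d(f, r)))
L  = lambda r: np.sqrt(r**3*d(f, r)/(2*f(r) - r*d(f, r)))
Om = lambda r: np.sqrt(d(f, r)/(2*r))

def flux(r):
    """Radiant energy flux of Eq. (28); sqrt(-g) = r^2 on the equatorial plane."""
    integral, _ = quad(lambda x: (E(x) - L(x)*Om(x))*d(L, x), r_isco, r)
    return -Mdot/(4*np.pi) * d(Om, r)/(r**2*(E(r) - L(r)*Om(r))**2) * integral

for r in (8.0, 15.0, 30.0):
    print(r, flux(r))
```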
## IV Accretion onto regular MOG dark compact object In this section, we aim to find the basic dynamical equations and parameters associated with the accretion onto the regular MOG dark compact object following the procedure performed in Refs. [76; 78]. To do this, we take into account the spherically symmetric accretion in the equatorial plane with \(\theta=\frac{\pi}{2}\). Additionally, we assume that the accreting matter is inflowing perfect fluid onto the regular MOG dark compact object. ### Dynamical equations The perfect fluid stress-energy tensor is expressed as \[T^{\mu\nu}=(p+\rho)v^{\mu}v^{\nu}-pg^{\mu\nu}\,, \tag{47}\] where \(p\), \(\rho\), and \(v^{\mu}\) are pressure, energy density, and four-velocity of the perfect fluid, respectively. On the equatorial plane, the only non-vanishing four-velocity components are \(v^{\mu}=(v^{t},v^{r},0,0)\). To be precise, the four-velocity of the perfect fluid \(v^{\mu}\) and the four-velocity of the test particle \(u^{\mu}\) in previous section are equivalent since the inflowing fluid, in fact, travels on the time-like geodesics creating the accretion disk around the MOG compact object. Therefore, the trajectory of inflowing fluid and the test particle in previous section are identical. In other words, the test particle in the previous section is assumed here as perfect fluid. On the other hand, according to the normalization condition for the four-velocity of the perfect fluid \(v^{\mu}v_{\mu}=1\) one can find \[v^{t}=\frac{\sqrt{f(r)+(v^{r})^{2}}}{f(r)}=\frac{\sqrt{1-\frac{2(1+\alpha)Mr^ {2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r ^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}+(v^{r})^{2}}}{1-\frac{2(1+\alpha)Mr^{ 2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r ^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}}\,, \tag{48}\] where the condition \(v^{r}<0\) must be satisfied since the accretion is an inward flow of matter while the assumption \(v^{t}>0\) is taken into account because we interested in forward flow in time. From the conservation of the stress-energy tensor, i.e., \(T^{\mu\nu}_{;\nu}=0\) in which \((;)\) stands for covariant derivative, we can find the following relation \[(p+\rho)v^{r}r^{2}\sqrt{1-\frac{2(1+\alpha)Mr^{2}}{(r^{2}+\alpha(1+\alpha)M^ {2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{(r^{2}+\alpha(1+\alpha) M^{2})^{2}}+(v^{r})^{2}}=A_{0}\,, \tag{49}\] where \(A_{0}\) is a constant of integration. Additionally, we can project the stress-energy tensor conservation law onto the perfect fluid four-velocity to the form of \[v_{\mu}T^{\mu\nu}_{;\nu}=0\,, \tag{50}\] which results in the following relation \[\frac{\rho^{\prime}}{p+\rho}+\frac{(v^{r})^{\prime}}{v^{r}}+\frac{2}{r}=0\,. \tag{51}\] By integrating, the last equation yields \[r^{2}v^{r}\exp\left[\int\frac{d\rho}{p+\rho}\right]=-A_{1}\,, \tag{52}\] where \(A_{1}\) is a constant of integration. Since the condition \(u^{r}<0\) holds, one can deduce \[(p+\rho)\exp\left[-\int\frac{d\rho}{p+\rho}\right]\sqrt{1-\frac{2(1+\alpha)Mr^ {2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+ \alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}+(v^{r})^{2} }=A_{2}\,, \tag{53}\] where \(A_{2}\) is an integration constant. Equation of mass flux in the setup is given by \[(\rho v^{\mu})_{;\mu}=0\,, \tag{54}\] where on the equatorial plane results in the following relation \[\rho v^{r}r^{2}=A_{3}\,, \tag{55}\] where \(A_{3}\) is an integration constant. 
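As a quick sanity check (not part of the paper), Eq. (48) can be verified numerically against the normalization condition \(v^{\mu}v_{\mu}=1\); the radial velocity profile used below is an arbitrary placeholder.

```python
import numpy as np

M, alpha = 1.0, 0.2

def f(r):
    s = r**2 + alpha*(1 + alpha)*M**2
    return 1 - 2*(1 + alpha)*M*r**2/s**1.5 + alpha*(1 + alpha)*M**2*r**2/s**2

r = np.linspace(3.0, 50.0, 200)        # radii outside the outer horizon
vr = -0.1*np.exp(-r/10.0)              # placeholder inward radial velocity, v^r < 0
vt = np.sqrt(f(r) + vr**2)/f(r)        # Eq. (48)

# v_mu v^mu = f (v^t)^2 - (v^r)^2 / f should equal 1 everywhere
norm = f(r)*vt**2 - vr**2/f(r)
print(np.allclose(norm, 1.0))          # True
```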
### Dynamical parameters Isothermal fluids with the equation of state \(p=\omega\rho\), where \(\omega\) is the equation of state parameter, are taken into account. During the motion of these fluids, the temperature remains constant. Consequently, Eqs. (52), (53), and (55) yield \[\frac{p+\rho}{\rho}\sqrt{1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}+(v^{r})^{2}}=A_{4}\,, \tag{56}\] where \(A_{4}\) is an integration constant. Inserting \(p=\omega\rho\) into the last equation yields \(v^{r}\) as follows \[v^{r}=\left(\frac{1}{\omega+1}\right)\sqrt{A_{4}^{2}-\left(\omega+1\right)^{2}\left(1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}\right)}\,. \tag{57}\] Figure 8 is the graph of \(v^{r}\) versus \(r\) for the regular static spherically symmetric MOG dark compact object in comparison with the Schwarzschild black hole in GR. From Fig. 8 we see that, for each curve (i.e., for each value of \(\alpha\)), the fluid begins to move towards the regular MOG dark compact object from rest at large \(r\), as previously mentioned. Then, it approaches the regular static spherically symmetric MOG dark compact object and again reaches the rest state. Furthermore, from Fig. 8 we see that decreasing the parameter \(\alpha\) increases the value of \(v^{r}\) near the source. Moreover, the curve of the Schwarzschild case goes to infinity as the source is approached. The proper energy density of the fluid can easily be determined as follows \[\rho=\left(\frac{A_{3}}{r^{2}}\right)\frac{(\omega+1)}{\sqrt{A_{4}^{2}-(\omega+1)^{2}\left(1-\frac{2(1+\alpha)Mr^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}\right)}}\,. \tag{58}\] Figure 9 illustrates \(\rho\) versus \(r\) for the regular static spherically symmetric MOG dark compact object in comparison with the Schwarzschild black hole in GR. ### Mass evolution The central source mass of a black hole, as well as of a dark compact object, is a dynamical quantity that evolves over time. The accretion process, for example, makes their mass grow by accreting the surrounding matter onto them. The rate of mass change, or accretion rate, of the regular MOG dark compact object can be obtained through \(\dot{M}\equiv\frac{dM}{dt}=-\int T_{t}^{r}dS\), in which the surface element of the object is \(dS=\left(\sqrt{-g}\right)d\theta d\varphi\) and \(T_{t}^{r}=(p+\rho)v_{t}v^{r}\). Consequently, the accretion rate \(\dot{M}\) can be obtained as \[\dot{M}=-4\pi r^{2}v^{r}(p+\rho)\sqrt{1-\frac{2(1+\alpha)Mr^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}+(v^{r})^{2}}\equiv-4\pi A_{0}\,, \tag{59}\] where the definitions \(A_{0}\equiv-A_{1}A_{2}\) and \(A_{2}\equiv\left(p_{\infty}+\rho_{\infty}\right)\sqrt{f\left(r_{\infty}\right)}\) are assumed. Finally, we obtain \[\dot{M}=4\pi A_{1}M^{2}\left(p_{\infty}+\rho_{\infty}\right)\sqrt{f\left(r_{\infty}\right)}\,. \tag{60}\] One can use Eq.
(60) to obtain a relation between the initial mass \(M_{i}\) and the mass in arbitrary time \(t\) as follows \[M_{t}=\frac{M_{i}}{1-\frac{t}{t_{cr}}}\,, \tag{61}\] where the critical accretion time is defined as \(t_{cr}=\left(4\pi A_{1}M_{i}(p+\rho)\sqrt{f\left(r_{\infty}\right)}\right)^{-1}\). At \(t=t_{cr}\), the mass of the regular MOG dark compact object approaches infinity in a finite time. Figure 8: _The illustration of \(v^{r}\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ ### Critical Accretion In accretion process, the inward flow of the fluid from rest at far from the source (regular MOG dark compact object) begins to move and continues to accelerate due to the gravitational field of the central source. During the inward flow motion of the fluid towards the source, it reaches sonic (critical) point, where the four-velocity of the fluid coincides the local speed of sound \(c_{s}\). From this critical point to the central source, the inward flow accelerated motion has supersonic velocities. A radial velocity gradient is needed to find the critical point. The derivatives of Eqs. (55) and (56) yield \[\frac{\rho^{\prime}}{\rho}+\frac{(v^{r})^{\prime}}{(v^{r})}+\frac{2}{r}=0\,, \tag{62}\] and \[\frac{\rho^{\prime}}{\rho}\left(\frac{d\ln[p+\rho]}{d\ln[\rho]}-1\right)+ \frac{v^{r}(v^{r})^{\prime}\left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{\frac{7}{ 2}}+y_{2}(1+\alpha)Mr}{(\alpha(1+\alpha)M^{2}+r^{2})^{\frac{7}{2}}\left(1- \frac{2(1+\alpha)Mr^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{\frac{7}{2}}}+\frac{ \alpha(1+\alpha)M^{2}r^{2}}{(r^{2}+\alpha(1+\alpha)M^{2})^{2}}+(v^{r})^{2} \right)}=0\,. \tag{63}\] Eqs. (62) and (63) result in the following relation \[\frac{d\ln[v^{r}]}{d\ln[r]}=\frac{\mathcal{D}_{1}}{\mathcal{D}_{2}}\,, \tag{64}\] where we defined \[\mathcal{D}_{1}\equiv-2V^{2}+\frac{y_{2}(1+\alpha)Mr^{2}}{(\alpha(1+\alpha)M^ {2}+r^{2})^{\frac{7}{2}}\left(1-\frac{2(1+\alpha)Mr^{2}}{(r^{2}+\alpha(1+ \alpha)M^{2})^{\frac{7}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{(r^{2}+\alpha(1 +\alpha)M^{2})^{2}}+(v^{r})^{2}\right)}\,, \tag{65}\] and \[\mathcal{D}_{2}\equiv V^{2}-\frac{(v^{r})^{2}}{1-\frac{2(1+\alpha)Mr^{2}}{(r ^{2}+\alpha(1+\alpha)M^{2})^{\frac{7}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{ (r^{2}+\alpha(1+\alpha)M^{2})^{2}}+(v^{r})^{2}}\,, \tag{66}\] Figure 9: _The illustration of \(\rho\) of the regular static spherically symmetric MOG dark compact object versus \(r\) for different values of \(\alpha\). The black solid line is for the case of Schwarzschild solution in GR._ in which \[V^{2}\equiv\frac{d\ln[p+\rho]}{d\ln[\rho]}-1\,. \tag{67}\] When the condition \(\mathcal{D}_{1}=\mathcal{D}_{2}=0\) is satisfied, the critical points occur. This condition first gives us \[V_{cr}^{2}=\frac{rf^{\prime}(r)}{4f(r)+rf^{\prime}(r)}\,, \tag{68}\] so that the positivity of its denominator determines the range of the critical radius by the following inequality \[4\left(1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{ \frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M ^{2}\right)^{2}}\right)+\frac{2y_{2}(1+\alpha)Mr^{2}}{\left(\alpha(1+\alpha)M ^{2}+r^{2}\right)^{\frac{7}{2}}}>0\,. 
\tag{69}\] Additionally, the condition for critical points give us \[(v_{cr}^{r})^{2}=\frac{1}{4}rf^{\prime}(r)=\frac{y_{2}(1+\alpha)Mr^{2}}{2 \left(\alpha(1+\alpha)M^{2}+r^{2}\right)^{\frac{7}{2}}}\,, \tag{70}\] where the index \((cr)\) in Eqs. (68) and (70) stands for critical values. Finally, the local sound speed \(c_{s}^{2}=\frac{dp}{d\rho}\) can be found as \[c_{s}^{2}=-1+A_{4}\sqrt{1-\frac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+ \alpha)M^{2}\right)^{\frac{3}{2}}}+\frac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r ^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}+(v^{r})^{2}}\,. \tag{71}\] ## V Summary and conclusions In this paper, we explored the accretion onto the regular spherically symmetric MOG dark compact object as well the electrically neutral and charged particles motion in its spacetime, by following the Lagrangian formalism. We found out that the effective potential of the neutral particle moving in the spacetime of the regular MOG dark compact object increases by increasing the value of the parameter \(\alpha\), while for the effective potential of the electrically charged particle, it is not the case far from the source. Moreover, we demonstrated that the parameter \(\alpha\) of the MOG setup amplifies the specific energy and angular momentum of the test particle, while it decreases the angular velocity. We also showed that the parameter \(\alpha\) increases the ISCO radius of either electrically neutral or charged test particles. We saw, however, that the ISCO radius of the electrically neutral (charged) particle associated with the regular MOG dark compact object is larger (smaller) than the corresponding one in the Schwarzschild black hole in GR. By treating the energy flux of the accretion disk related to the neutral particle, we proved that the energy flux peaks after reaching the ISCO and then falls to zero extremely fast, while the parameter \(\alpha\) of the MOG setup decreases it. Furthermore, the radial component of the four-velocity and the energy density of the accreting fluid reduce by growing the parameter \(\alpha\) near the source, while it is not the case at far from the source.
2301.05075
Kinematic Evidence of an Embedded Protoplanet in HD 142666 Identified by Machine Learning
Observations of protoplanetary disks have shown that forming exoplanets leave characteristic imprints on the gas and dust of the disk. In the gas, these forming exoplanets cause deviations from Keplerian motion, which can be detected through molecular line observations. Our previous work has shown that machine learning can correctly determine if a planet is present in these disks. Using our machine learning models, we identify strong, localized non-Keplerian motion within the disk HD 142666. Subsequent hydrodynamics simulations of a system with a 5 Jupiter-mass planet at 75 au recreates the kinematic structure. By currently established standards in the field, we conclude that HD 142666 hosts a planet. This work represents a first step towards using machine learning to identify previously overlooked non-Keplerian features in protoplanetary disks.
J. P. Terry, C. Hall, S. Abreau, S. Gleyzer
2023-01-12T15:18:38Z
http://arxiv.org/abs/2301.05075v2
# Kinematic Evidence of an Embedded Protoplanet in HD 142666 Identified by Machine Learning ###### Abstract Observations of protoplanetary disks have shown that forming exoplanets leave characteristic imprints on the gas and dust of the disk. In the gas, these forming exoplanets cause deviations from Keplerian motion, which can be detected through molecular line observations. Our previous work has shown that machine learning can correctly determine if a planet is present in these disks. Using our machine learning models, we identify strong, localized non-Keplerian motion within the disk HD 142666. Subsequent hydrodynamics simulations of a system with a 5 Jupiter-mass planet at 75 au recreates the kinematic structure. By currently established standards in the field, we conclude that HD 142666 hosts a planet. This work represents a first step towards using machine learning to identify previously overlooked non-Keplerian features in protoplanetary disks. Hydrodynamics -- Radiative transfer -- Accretion disks -- Methods: numerical -- Catalogs -- Planets and satellites: formation J. P. Terry, C. Hall, S. Abreau, S. Gleyzer ## 1 Introduction Protoplanetary accretion disks are the sites of planet formation. The newest generation of telescopes, such as the Atacama Large Millimeter/submillimeter Array (ALMA), have unprecedented capabilities for observing protoplanetary disks. For the first time, we can not only resolve disks themselves, but also quantify the motion of the dust and gas within them. Disks display a striking variety of structures such as rings (ALMA Partnership et al., 2015; Dipierro et al., 2018), likely caused by dust trapping due to forming planets (Pinilla et al., 2012; Dipierro et al., 2015), and spirals (Perez et al., 2016), which may be caused by forming planets (e.g. Dong et al., 2015) or another mechanism such as gravitational instability (e.g. Dong et al., 2015; Hall et al., 2018; Meru et al., 2017). This new information has greatly advanced our understanding of the processes underlying the formation and evolution of planetary systems. Planets and physical processes, such as gravitational instability, influence the motion within the disk. This causes the material to deviate from simple Keplerian motion. Comparing the observed motion against purely Keplerian motion provides information on the bodies and processes present in the disk (Hall et al., 2020; Paneque-Carreno et al., 2021; Longarini et al., 2021; Pinte et al., 2022; Bae et al., 2022; Terry et al., 2022). Non-Keplerian motion has been used to uncover a variety of structures, including localized perturbations associated with gaps and planets (Teague et al., 2018; Pinte et al., 2018, 2019, 2020) as predicted by Perez et al. (2015). Kinematic analysis is limited by our ability to accurately identify non-Keplerian motion. The deviations can be small and frequently occur in noisy images. It is therefore not only difficult and slow to identify them, but there is also the strong possibility of overlooking their occurrence. Any signature that is overlooked is a missed opportunity to detect either a forming planet or some other process, such as the GI-Wiggle indicative of gravitational instability (Hall et al., 2020) or the vertical shear instability (Barraza-Alfaro et al., 2021). Machine learning (ML) provides a useful tool for this task.
ML has quickly become ubiquitous in both society and the sciences, everything from self-driving cars (Bojarski et al., 2016) to medicine (Parmar et al., 2015). Recent efforts in astronomy have made it clear that machine learning is a powerful method even with simulated training data (Jo and Kim, 2019; Moller and de Boissiere, 2020; Alexander et al., 2020). Machine learning, and in particular computer vision, excels at the analysis of images (Voulodimos et al., 2018). In some cases, it has even been shown to outperform humans (Zhou et al., 2021). It is therefore naturally suited for application to the noisy datasets in observational astronomy. Using ML models developed in a previous work (Terry et al., 2022), we identify a strong and localized deviation from Keplerian motion in HD 142666. Using the current widely accepted field standard method (Teague et al., 2018; Pinte et al., 2018, 2019), we perform smoothed particle hydrodynamic (SPH) simulations to recreate the kinematic structure of the disk. The agreement is significant when a 5 M\({}_{J}\) planet is included at 75 au. We conclude that HD 142666 hosts a planet. The paper is arranged as follows: Section 2 describes the models and simulations used. Section 3 shows the results of applying the models and simulating the system. Section 4 gives our conclusions. ## 2 Methods ### Machine Learning We use the ML models described in Terry et al. (2022) and describe them here for completeness. We use two different architectures: EfficientNetV2 (Tan and Le, 2021) and RegNet (Xu et al., 2022). All models were made using PyTorch (Paszke et al., 2019), albeit with significant modifications to the default models and hyperparameters. We denote these models as EN47, EN61, EN75, RN47, RN61, and RN75. Table 1 gives performance metrics for the models: model accuracy at 50% and 95% decision thresholds and the area under the receiver operating characteristic curve (AUC). The models were trained using synthetic observations from the MCFOST(Pinte et al., 2006, 2009) radiative transfer code. MCFOST inputs were drawn from 1000 PHANTOM(Price et al., 2018) SPH simulations of systems with and without planets (Terry et al., 2022). Each MCFOST calculation outputs a position-position-velocity cube from \({}^{13}\)CO transition lines (\(J=2\to 1\) and \(J=3\to 2\)). The cubes were convolved spatially and spectrally and noise was added in order to replicate current observational capabilities. The model inputs (i.e. radiative transfer outputs) are images of dimension \(C\times H\times W\), where \(C\) is the number of input channels, \(H\) is the height of the image, and \(W\) is the width of the image (here, \(H=W=600\) pixels). A typical grayscale or RGB image will have \(C\)=1 or 3, respectively. We instead input an entire position-position-velocity cube. Observations vary significantly in the number of channels that cover the disk, but the typical range is between \(\approx\)40-100. To address this, we train three different implementations of each model, which gives us a total of six models. The difference between each implementation is the number of input velocity channels (\(C\)=47, 61, or 75). Each model outputs a two-component vector such that the sum of the components is 1, i.e. it has undergone softmax activation (Goodfellow et al., 2016). This can be interpreted as the probability that the given input belongs to a certain class, i.e. planet- vs no-planet class. 
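To make the input/output convention concrete, the sketch below builds a small stand-in classifier that accepts a position-position-velocity cube as a \(C\times H\times W\) tensor and returns a softmax planet/no-planet probability pair. It only illustrates the interface: the actual models are modified EfficientNetV2 and RegNet architectures, and the layer widths and kernel sizes here are placeholders.

```python
import torch
import torch.nn as nn

class CubeClassifier(nn.Module):
    """Toy stand-in for the EfficientNetV2/RegNet models: C velocity channels in,
    two softmax class probabilities (no planet / planet) out."""
    def __init__(self, n_channels: int = 61):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.softmax(self.head(z), dim=1)

# One synthetic 61-channel, 600x600 cube (batch dimension first).
cube = torch.rand(1, 61, 600, 600)
probs = CubeClassifier(n_channels=61)(cube)
print(probs)   # tensor of shape (1, 2); the two entries sum to 1
```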
The models also output images of their internal activation structure, which we consider to be the more important output in this context. While the models were not trained to pinpoint the locations of planets -- a job more suited for semantic image segmentation (Minaee et al., 2022) --the activation structure can inform us which regions the model finds important when making its classification decision. Terry et al. (2022) found that the activations were able to highlight velocity channels with non-Keplerian motion in systems that host planet(s). To this end, we apply our previously trained models to ALMA data of the HD 142666 system. We inspect the softmax values and activation structures to gain insight into whether a planet might be in the system and, if so, where its signature is the strongest. ### Observational Data The HD 142666 data was taken from the DSHARP catalogue (Andrews et al., 2018; Huang et al., 2018). Data includes \({}^{12}\)CO line emission (\(J=2\to 1\)) and 1.25 mm continuum images. The system was imaged with a beam with FWHM of \(77\times 61\) mas (\(\approx\)\(11\times 9\) au) with an RMS noise of 1.3 mJy beam\({}^{-1}\); channels have a 0.35 km s\({}^{-1}\) resolution (Andrews et al., 2018). Figure 1 shows selected velocity channels overlaid on the continuum. The image was cropped to focus on the disk, and a subset of velocity channels was used. The channels were reshaped to \(600\times 600\) pixels and normalized such that all pixel values were between 0 and 1. ### Hydrodynamical Simulations We run a suite of SPH simulations using PHANTOM, varying the mass of the embedded planet between 1 and 5 M\({}_{J}\). For each simulation, we create channel maps using MCFOST in the same way that the original training data was made. The kink is approximately 75 au from the center of the disk, so we place a planet at this distance. System parameters are taken from Rubinstein et al. (2018); Andrews et al. (2018); Huang et al. (2018). The stellar mass, temperature, and radius are 2.0 M\({}_{\odot}\), 7500 K, and 2.2 R\({}_{\odot}\), respectively. The disk has a mass of 0.0533 M\({}_{\odot}\), an inner radius of 1.3 au, and an outer radius of 150 au. The system is inclined at 62 degrees with a position angle of 162 and an azimuth of 72 degrees. It is located 148 pc from Earth. The SPH outputs are used to create line emission maps to mimic ALMA capabilities. These calculations are done using the MCFOST radiative transfer code (Pinte et al., 2006, 2009). Each calculation uses \(10^{8}\) photon packets and includes carbon/silicate dust (Draine and Lee, 1984) with a dust-to-gas ratio of 1:100. The resulting outputs were convolved spatially and spectrally to match the observed line emission resolution. ## 3 Results and Discussion Figure 2 shows that HD 142666 has a strong, localized kink that is detected by the ML models. The kink is particularly visible in the upper middle (\(\Delta v=-1.75\) km/s) channel. The lower row shows activation structures that roughly correspond to the above channels. The average softmax value is over 0.84, which means that the models predict the probability that the input for HD 142666 contains a planet to be over 84%. This prompts further scrutiny of the activations, which we use to determine the most probable channel that contains the kink. The strength and localization of the newly identified kink are reminiscent of the kinks in HD 163296 and HD 97048. As with HD 163296, the kink in the gas is outside of the radial extent of the continuum disk. 
Both of these disks were found to host planets after SPH simulations containing a planet recreated the kinematic structure observed in CO observations (Teague et al., 2018; Pinte et al., 2018, 2019). We apply this same method to HD 142666 to demonstrate that the kink identified by our models is consistent with kinks identified by conventional means in HD 163296 and HD 97048. We found that a simulation of a protoplanetary disk with a 5 M\({}_{J}\) planet most accurately reproduced the observation. Figure 3 shows the results. A localized kink in the vicinity of the planet is clear in the upper left panel Figure 3 (\(\Delta v=-2.3\) km/s). This kink is visible to a lesser extent in the \(\Delta v=-2.0\) km/s in the upper right panel of Figure 3, which is also the case in Figure 1. There is strong agreement between this feature and the non-Keplerian channel identified by our models: both display a kink of approximately the same shape and size at approximately the same radial location. This can be seen in the lower left and right panels of Figure 3. Note that the simulation and observation do not display the strongest kink in the same velocity channel. This is simply a relic of the finite temporal resolution of the simulation, which makes it extremely unlikely that the simulation will be saved when the planet is exactly coincident with the observation. The temporal resolution of the simulation was increased to mitigate this effect, but it persists to some extent. We conclude that HD 142666 hosts a planet. We note that our conclusion is confirmed using the same methods described by Teague et al. (2018); Pinte et al. (2018, 2019). However, what is new about our approach is that the non-Keplerian motion was first identified by ML models, highlighting a protoplanet candidate that had previously been missed upon visual analysis. Verification of the evidence is still done using the same methodology as previous works (Pinte et al., 2018, 2019). We strongly advocate that this should always be done for any potential discovery. ### Future Work and Limitations This work shows that machine learning can effectively identify non-Keplerian motion even if it is missed by humans. However, our work can be improved upon. The primary limitation is the fact that localising non-Keplerian motion was not the explicit goal of these models when they were trained. Their purpose was classification without any attempt of segmentation or object detection. Models specifically designed to pinpoint deviations would likely be more effective. Rather than inspecting activation structures --of which there can be hundreds --the model would directly output a prediction of the location. This would be a more precise and straightforward method to detect the non-Keplerian signature, but it would not remove the need to perform follow-up simulations. We intend to explore this possibility in future works. Networks such as PGNets (Zhang et al., 2022) and DPNNet-2.0 (Auddy et al., 2021) offer a potentially fruitful route that would increase the accuracy and speed of the analysis of disks and channels highlighted by our models. These networks are designed to infer planetary mass from continuum images. One could use our models to determine if it is likely that a disk hosts a planet and, if so, feed the corresponding continuum images into the secondary networks. The predicted planet mass could then be used as a starting point for followup simulations rather than simply starting an uninformed parameter sweep. 
This would speed up the verification step of the procedure. Such a pipeline could be useful to explore in future works.

Figure 1: Line emission overlaid on continuum. Left: \(\Delta v=-1.4\) km/s channel. Middle: \(\Delta v=-1.75\) channel. Right: \(\Delta v=-2.1\) channel. The continuum beam is in magenta, and the line emission beam is in cyan.

Figure 2: HD 142666 structure (\({}^{12}\)CO: \(J=2\to 1\)) and activations. Upper left: \(\Delta v=-1.4\) km/s channel with kink circled in white. Upper middle: \(\Delta v=-1.75\) channel with kink circled in white. Upper right: \(\Delta v=-2.1\) channel. Bottom row: selected mean-subtracted activations that roughly correspond to the channels in the upper row. Activations are from three different models (EN61, EN47, and RN61, respectively). Line emission beams are the cyan ellipses in the lower right of the upper row panels.

Figure 3: HD 142666 simulation results. Upper left: \(\Delta v=-2.3\) km/s channel from the simulation (convolved beam in lower right). Upper right: \(\Delta v=-2.0\) km/s channel from the simulation (convolved beam in lower right). Lower left: observed continuum overlaid with contours of simulated \(\Delta v=-2.3\) km/s channel. Lower right: observed continuum overlaid with simulated (cyan) and observed (white) channels. Continuum beam is in magenta, and the line emission (simulated and observed) beam is in cyan. The system includes a 5 M\({}_{J}\) planet at 75 au. The simulated channels have the continuum and background subtracted for clarity. The planet’s location is indicated with an x.

\begin{table} \begin{tabular}{l c c c c c c} \hline Value & EN47 & EN61 & EN75 & RN47 & RN61 & RN75 \\ \hline Accuracy at 50\% cutoff (\%) & \(97\pm 0.5\) & \(97\pm 0.5\) & \(93\pm 0.7\) & \(78\pm 1.1\) & \(98\pm 0.4\) & \(95\pm 0.6\) \\ Accuracy at 95\% cutoff (\%) & \(96\pm 0.5\) & \(94\pm 0.5\) & \(88\pm 0.9\) & \(65\pm 1.3\) & \(96\pm 0.6\) & \(92\pm 0.7\) \\ AUC & \(0.99\pm 0.002\) & \(0.99\pm 0.003\) & \(0.98\pm 0.003\) & \(0.86\pm 0.010\) & \(>0.99\pm 0.001\) & \(0.98\pm 0.032\) \\ \hline \end{tabular} \end{table} Table 1: Model performance metrics from Terry et al. (2022a).

## 4 Conclusion We have applied ML models created by Terry et al. (2022) to the DSHARP data of HD 142666. All models strongly predict the presence of at least one planet. The activation structures highlight a strong, unreported, and localized kink. An SPH simulation with a 5 M\({}_{J}\) planet at 75 au is able to recreate the newly identified kinematic structure. By the previously established benchmarks and methods for kinematic planet detection, we conclude that HD 142666 hosts a planet. This work demonstrates the utility of applying machine learning to the analysis of protoplanetary disks. By highlighting non-Keplerian features in the disk, ML models are able to guide planet-detection efforts. The signatures of the planet were previously overlooked by human analysts, and the traditional analysis was only performed because of the information given by the models. We anticipate that this method can identify new non-Keplerian features in both existing and future protoplanetary observations. ## 5 Acknowledgements This paper makes use of the following ALMA data: ADS/JAO.ALMA #2016.1.00484.L. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. 
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. J.T. was a participant in the 2022 Machine Learning for Science (ML4SCI) Google Summer of Code program. S.G. was supported in part by the National Science Foundation Award No. 2108645. This study was supported in part by resources and technical expertise from the Georgia Advanced Computing Resource Center, a partnership between the University of Georgia's Office of the Vice President for Research and Office of the Vice President for Information Technology.
2310.14639
The role of spin-orbit interaction in low thermal conductivity of Mg$_3$Bi$_2$
Three-dimensional layered Mg$_3$Bi$_2$ has emerged as a thermoelectric material due to its high cooling performance at ambient temperature, which benefits from its low lattice thermal conductivity and semimetal character. However, the semimetal character of Mg$_3$Bi$_2$ is sensitive to spin-orbit coupling (SOC). Thus, the underlying origin of the low lattice thermal conductivity needs to be clarified in the presence of the SOC. In this work, first-principles calculations within the two-channel model are employed to investigate the effects of the SOC on the phonon-phonon scattering and the phonon transport of Mg$_3$Bi$_2$. Our results show that the SOC strongly reduces the lattice thermal conductivity (up to $\sim 35$ %). This reduction originates from the influence of the SOC on the transverse acoustic modes involving interlayer shearing, leading to weak interlayer bonding and enhanced anharmonicity around 50 cm$^{-1}$. Our results clarify the mechanism of low thermal conductivity in Mg$_3$Bi$_2$ and support the design of Mg$_3$Bi$_2$-based materials for thermoelectric applications.
Nguyen Tuan Hung
2023-10-23T07:25:52Z
http://arxiv.org/abs/2310.14639v1
# The role of spin-orbit interaction in low thermal conductivity of Mg\({}_{3}\)Bi\({}_{2}\) ###### Abstract Three-dimensional layered Mg\({}_{3}\)Bi\({}_{2}\) has emerged as a thermoelectric material due to its high cooling performance at ambient temperature, which benefits from its low lattice thermal conductivity and semimetal character. However, the semimetal character of Mg\({}_{3}\)Bi\({}_{2}\) is sensitive to spin-orbit coupling (SOC). Thus, the underlying origin of the low lattice thermal conductivity needs to be clarified in the presence of the SOC. In this work, first-principles calculations within the two-channel model are employed to investigate the effects of the SOC on the phonon-phonon scattering and the phonon transport of Mg\({}_{3}\)Bi\({}_{2}\). Our results show that the SOC strongly reduces the lattice thermal conductivity (up to \(\sim 35\) %). This reduction originates from the influence of the SOC on the transverse acoustic modes involving interlayer shearing, leading to weak interlayer bonding and enhanced anharmonicity around 50 cm\({}^{-1}\). Our results clarify the mechanism of low thermal conductivity in Mg\({}_{3}\)Bi\({}_{2}\) and support the design of Mg\({}_{3}\)Bi\({}_{2}\)-based materials for thermoelectric applications. The demand for green energy with net-zero gas emissions requires the development of sustainable energy-related technologies, in which thermoelectricity is one of the promising technologies that can convert heat energy into electrical energy without gas emissions. A thermoelectric (TE) device is mainly fabricated from TE materials, which account for nearly one-third of the total cost of the device [1]. Some of the best TE materials are Bi\({}_{2}\)Te\({}_{3}\), PbTe, and their related alloys, which were discovered around the 1950s and used as commercial TE materials [2; 3]. However, these materials are limited for wide applications due to the rarity and high cost of the Te element. Therefore, during the past decade, there have been significant efforts to search for non-Te materials, such as \(\alpha\)-MgAgSb [4], Mg\({}_{3}\)Bi\({}_{2}\)[5; 6; 7; 8], Bi\({}_{2}\)Se\({}_{3}\)[9], and SnSe [10] crystals. Among them, the Mg\({}_{3}\)Bi\({}_{2}\) crystal is an interesting material in which to study fundamental transport properties since it shows not only a high cooling performance, with a large temperature difference of \(\sim 91\) kelvin [5], but also topological character [11; 12]. In previous work [12], we showed that spinless Mg\({}_{3}\)Bi\({}_{2}\) could be a type-II nodal line semimetal, in which the conduction and valence bands intersect in the form of a line (called the nodal line) [13]. This feature leads to van Hove singularities near the nodal line energy and enhances the TE power factor [12]. However, the nodal line character of Mg\({}_{3}\)Bi\({}_{2}\) is suppressed by the spin-orbit coupling (SOC), which often happens with the Bi element. By considering the SOC, Mg\({}_{3}\)Bi\({}_{2}\) becomes a normal semimetal with a tiny band gap. Then, the electronic transport properties change significantly because of the missing van Hove singularities [12]. On the other hand, the SOC can also affect the thermal conductivity of the materials. Tian _et al._[14] have reported that the phonon lifetimes (or anharmonicity) of PbSe and PbTe are larger with the SOC, which leads to roughly twice the thermal conductivity with the SOC compared to that without the SOC at room temperature. 
Wu _et al._[15] also showed that the SOC leads to enhanced lattice thermal conductivity of SnSe (up to \(\sim 60\)%) compared with the case without the SOC. On the other hand, Li _et al._[16] showed that the SOC does not affect the lattice thermal conductivity of Mg\({}_{2}\)Si and Mg\({}_{2}\)Sn due to the relatively small discrepancies between their calculations and the experimental data. These studies suggest that the SOC will play an essential role in the thermal transport in Mg\({}_{3}\)Bi\({}_{2}\). The previous reports [17; 18] focus only on the thermal properties of Mg\({}_{3}\)Bi\({}_{2}\) without SOC. Thus, the role of the SOC in the low thermal conductivity of Mg\({}_{3}\)Bi\({}_{2}\) still needs to be clarified to better understand the thermoelectric properties of Mg\({}_{3}\)Bi\({}_{2}\). It is noted that we cannot suppress the intrinsic spin-orbit interaction in the materials. Thus, it is difficult to observe the effect of the SOC on the lattice thermal conductivity by experiment. In this situation, a theoretical calculation needs to be performed first to investigate the lattice thermal conductivity of Mg\({}_{3}\)Bi\({}_{2}\) for both cases, with and without the SOC. In this Letter, we investigate the lattice thermal properties of Mg\({}_{3}\)Bi\({}_{2}\) with and without SOC to clarify the influence of the SOC on the phonon dispersion and lattice thermal conductivity, \(\kappa_{l}\). By using the phonon Boltzmann transport with first-principles calculations, we found that the SOC reduces \(\kappa_{l}\) by about 35%, while it was reported to enhance \(\kappa_{l}\) of PbTe, PbSe, and SnSe [14; 15]. Furthermore, the first-principles calculations within the phonon-phonon interaction underestimate \(\kappa_{l}\) compared with experimental data. Thus, we applied the two-channel model for \(\kappa_{l}\), which accounts for the correction term by the Cahill-Watson-Pohl (CWP) formula [19]. The phonon dispersion is calculated by density-functional-perturbation theory (DFPT) with the Quantum ESPRESSO package [21; 22; 23]. Fully-relativistic and scalar-relativistic ultrasoft pseudopotentials with the Perdew-Burke-Ernzerhof (PBE) functional [24] are used for the calculations with and without the SOC, respectively. All atomic positions and lattice constants are optimized by the BFGS quasi-Newton algorithm [23], in which the convergence thresholds for the forces and stress components are 0.0001 Ry/a.u. and 0.005 GPa, respectively. The obtained lattice constants of Mg\({}_{3}\)Bi\({}_{2}\) are \(a=b=4.683\) Å and \(c=7.396\) Å, which are consistent with the previous works [12; 17]. A cutoff energy of 60 Ry, a **k**-point mesh of \(10\times 10\times 6\), and a **q**-point mesh of \(5\times 5\times 3\) are selected for all calculations based on a convergence test. In Fig. 1(a), we show the phonon dispersions of Mg\({}_{3}\)Bi\({}_{2}\) with the SOC (solid line) and without SOC (dashed line). Only the low-frequency regime is plotted so that the difference between the solid and dashed lines can be seen easily. We note that the high-frequency regime above the phonon band gap could contribute less than 10% to the total thermal conductivity [25]. The phonon dispersions reproduce the inelastic x-ray scattering (IXS) spectra [20], in which the case of the SOC shows a better fit to the experimental data for the soft phonon S1 around 26 cm\({}^{-1}\) at the M point. 
The main differences between the phonon frequencies with and without the SOC are found at S1 (\(\sim 5\) cm\({}^{-1}\)) and S2 (\(\sim 8\) cm\({}^{-1}\)) at the M point, as shown in Fig. 1(a). The S1 and S2 phonon modes are the interlayer shearing modes in Mg\({}_{3}\)Bi\({}_{2}\) with the \(P\overline{3}m1\) space group [18; 20], as shown in Fig. 1(b). The Mg\({}_{3}\)Bi\({}_{2}\) structure consists of alternating [Mg(2)\({}_{2}\)Bi\({}_{2}\)] and [Mg(1)] atom layers. We thus expect weak bonding between the [Mg(2)\({}_{2}\)Bi\({}_{2}\)] and [Mg(1)] layers, resulting in a small shear strength. Here, we calculate the elastic moduli, including the bulk \(B\), Young \(E\), and shear \(G\) modulus, using the Voigt-Reuss-Hill approximation [26; 27] with the Thermo_pw code [28]. The obtained results for the cases with and without the SOC are listed in Table 1, which are consistent with the experimental data (\(B=38.39\) GPa, \(G=13.39\) GPa, and \(E=35.98\) GPa using resonant ultrasound spectroscopy [18]). The shear modulus of Mg\({}_{3}\)Bi\({}_{2}\) is much softer than that of compounds with similar structures, such as CaMg\({}_{2}\)As\({}_{2}\), YbMg\({}_{2}\)Sb\({}_{2}\) or BaMg\({}_{2}\)P [18], resulting in the soft phonon modes S1 and S2. Another soft phonon mode related to the interlayer shearing is found around 30 cm\({}^{-1}\) at the \(L\) point, as shown in Fig. 1(b). However, the SOC does not affect this phonon mode. In order to investigate the effect of the SOC on the lattice thermal conductivity \(\kappa_{l}\) of Mg\({}_{3}\)Bi\({}_{2}\), we calculate the two-channel model [29; 30; 31], which is defined as follows \[\kappa_{l}=\kappa_{\text{ph}}+\kappa_{\text{diff}}, \tag{1}\] where \(\kappa_{\text{ph}}\) is the phonon channel, which is defined by [32] \[\kappa_{\text{ph}}=\frac{1}{N_{\mathbf{q}}V}\sum_{\nu\mathbf{q}}\hbar\omega_{\nu\mathbf{q}}v_{\nu\mathbf{q}}^{2}\tau_{\nu\mathbf{q}}\frac{\partial n_{\nu\mathbf{q}}}{\partial T}, \tag{2}\] where \(N_{\mathbf{q}}\) is the number of \(\mathbf{q}\) points and \(V\) is the volume of the unit cell. \(\omega_{\nu\mathbf{q}}\), \(v_{\nu\mathbf{q}}\), and \(\tau_{\nu\mathbf{q}}\) are the phonon frequency, the phonon group velocity, and the phonon lifetime of the phonon mode \(\nu\) at wave vector \(\mathbf{q}\), respectively. \(n_{\nu\mathbf{q}}=(e^{\hbar\omega_{\nu\mathbf{q}}/k_{B}T}-1)^{-1}\) is the Bose-Einstein distribution function. Here, \(\kappa_{\text{ph}}\) and \(\tau_{\nu\mathbf{q}}\) are calculated by solving the phonon Boltzmann transport equation, as implemented in the ShengBTE code [33] using \(16\times 16\times 16\) integration meshes, based on the second-order force constants calculated by the DFPT [21; 22] and third-order force constants calculated with a \(3\times 3\times 3\) supercell and up to the fifth-nearest neighbors using thirdorder.py [33]. \(\kappa_{\text{diff}}\) is the diffusion channel in the disordered crystal, which is described by the Cahill-Watson-Pohl (CWP) model as [19] \[\kappa_{\text{diff}}=\left(\frac{\pi}{6}\right)^{1/3}k_{B}\rho^{2/3}\sum_{i}v_{i}\left(\frac{T}{\theta_{i}}\right)^{2}\int\limits_{0}^{\theta_{i}/T}\frac{x^{3}e^{x}}{(e^{x}-1)^{2}}\text{d}x, \tag{3}\] 
where \(\rho\) is the number density of atoms, \(x\) is the dimensionless integration variable, and \(\theta_{i}=v_{i}(\hbar/k_{B})(6\pi^{2}\rho)^{1/3}\) is the Debye temperature, where \(v_{i}\) is the sound speed of acoustic branch \(i\), including one longitudinal sound velocity \(v_{L}=\sqrt{(B+4G/3)/\rho_{m}}\) and two transverse sound velocities \(v_{T}=\sqrt{G/\rho_{m}}\), with \(\rho_{m}\) the mass density. The calculated \(v_{L}\), \(v_{T}\), \(B\), and \(G\) are given in Table 1.

Figure 1: (a) Phonon dispersions of Mg\({}_{3}\)Bi\({}_{2}\) along the high-symmetry points in the low-frequency regime with SOC (red solid line) and without SOC (blue dashed line). The symbols represent experimental data from inelastic x-ray scattering measurements [20], in which the \(\bigtriangledown\), \(\bigtriangleup\), and \(\square\) markers correspond to the phonon frequencies at 80, 300, and 600 K. (b) Atomic displacements of Mg\({}_{3}\)Bi\({}_{2}\) corresponding to the transverse acoustic phonon modes S1 \(\sim\) 30 cm\({}^{-1}\) and S2 \(\sim\) 50 cm\({}^{-1}\) at the M point, which are related to the shearing modes between the Bi-Mg(2) and Mg(1) layers.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Mg\({}_{3}\)Bi\({}_{2}\) & \(B\) & \(G\) & \(E\) & \(r\) & \(v_{l}\) & \(v_{t}\) \\ \hline With SOC & 31.81 & 16.44 & 42.07 & 0.28 & 3086.99 & 1707.55 \\ \hline Without SOC & 35.44 & 18.29 & 46.81 & 0.28 & 3247.72 & 1795.77 \\ \hline \hline \end{tabular} \end{table} Table 1: Bulk \(B\), shear \(G\), and Young \(E\) modulus (GPa), Poisson ratio \(r\), and longitudinal \(v_{l}\) and transverse sound velocities \(v_{t}\) (m/s) of Mg\({}_{3}\)Bi\({}_{2}\).

In Fig. 2, we show the phonon lifetime at \(T=300\) K of Mg\({}_{3}\)Bi\({}_{2}\) with and without the SOC. The effect of the SOC on the phonon lifetime is considered only for \(\kappa_{\rm ph}\), in which the phonon frequencies below \(\omega<80\) cm\({}^{-1}\) mainly dominate \(\kappa_{\rm ph}\) due to the strong frequency dependence of the Umklapp scattering (\(\tau\propto 1/\omega^{2}\)) [32]. For \(80<\omega<110\) cm\({}^{-1}\), there is no phonon lifetime because of the phonon band-gap region. For \(\omega>110\) cm\({}^{-1}\), the average value of the phonon lifetime is about 0.1 ps, which is one order of magnitude smaller than that for \(\omega<80\) cm\({}^{-1}\) (\(\sim 1\) ps). Thus, the contribution of the SOC to \(\kappa_{\rm ph}\) becomes an important factor when \(\omega<80\) cm\({}^{-1}\). In particular, the SOC leads to a reduction in the phonon lifetime around 50 cm\({}^{-1}\), which corresponds to the interlayer shearing mode S2, as shown in Fig. 1(b). We note that high anharmonicity (i.e., low phonon lifetime) is also found at \(\omega\sim 30\) cm\({}^{-1}\). This is the contribution of both the shearing mode S1 at the M point and another shearing mode (\(\sim 30\) cm\({}^{-1}\)) at the L point (see Fig. 1(a)). Since the SOC does not affect the shearing mode at the L point, the phonon lifetime for \(\omega<30\) cm\({}^{-1}\) does not change significantly with the presence of the SOC. Besides that, the SOC also does not affect the phonon lifetime in the CWP model since the CWP model assumes that the phonon lifetime is half the period of oscillation [29] (i.e., \(\tau=\pi/\omega\)). 
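As a numerical cross-check of Eq. (3), the diffusion channel can be evaluated directly from the elastic data. The short Python sketch below is not the ShengBTE/Thermo_pw workflow used in this work; the mass density and atomic number density are rough estimates for Mg\({}_{3}\)Bi\({}_{2}\) and are stated as assumptions.

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23        # Boltzmann constant (J/K)
hbar = 1.054571817e-34   # reduced Planck constant (J s)

def sound_velocities(B, G, rho_m):
    """Longitudinal and transverse sound speeds (m/s) from the bulk and shear
    moduli (Pa) and the mass density rho_m (kg/m^3)."""
    return np.sqrt((B + 4.0 * G / 3.0) / rho_m), np.sqrt(G / rho_m)

def kappa_diff(T, rho_n, v_L, v_T):
    """Cahill-Watson-Pohl diffusion channel of Eq. (3) in W/(m K).
    rho_n is the atomic number density (m^-3); the sum runs over one
    longitudinal and two transverse acoustic branches."""
    prefac = (np.pi / 6.0) ** (1.0 / 3.0) * kB * rho_n ** (2.0 / 3.0)
    integrand = lambda x: 0.0 if x == 0.0 else x**3 * np.exp(x) / np.expm1(x)**2
    total = 0.0
    for v in (v_L, v_T, v_T):
        theta = v * (hbar / kB) * (6.0 * np.pi**2 * rho_n) ** (1.0 / 3.0)  # Debye temperature
        integral, _ = quad(integrand, 0.0, theta / T)
        total += v * (T / theta) ** 2 * integral
    return prefac * total

# Example with the SOC moduli of Table 1; rho_m ~ 5.8e3 kg/m^3 and
# rho_n ~ 3.6e28 m^-3 are assumed estimates for Mg3Bi2 (5 atoms per unit cell).
v_L, v_T = sound_velocities(31.81e9, 16.44e9, rho_m=5.8e3)   # ~3.0e3 and ~1.7e3 m/s
print(kappa_diff(300.0, rho_n=3.6e28, v_L=v_L, v_T=v_T))     # roughly 0.3-0.4 W/(m K)
```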
In Fig. 3, we show the thermal conductivity \(\kappa_{l}\) of Mg\({}_{3}\)Bi\({}_{2}\) as a function of the temperature \(T\). The obtained \(\kappa_{\rm ph}\) is almost isotropic in Mg\({}_{3}\)Bi\({}_{2}\). At 300 K, \(\kappa_{\rm ph}^{xx}=\kappa_{\rm ph}^{yy}=0.37;\kappa_{\rm ph}^{zz}=0.36\) W/mK for the case with SOC and \(\kappa_{\rm ph}^{xx}=\kappa_{\rm ph}^{yy}=0.53;\kappa_{\rm ph}^{zz}=0.59\) W/mK for the case without SOC. Thus, the average value of \(\kappa_{\rm ph}=(\kappa_{\rm ph}^{xx}+\kappa_{\rm ph}^{yy}+\kappa_{\rm ph}^{zz})/3\) is plotted in Fig. 3. We note that Mg\({}_{3}\)Bi\({}_{2}\) has a 3D layered structure, and it shows a two-dimensional (2D) electron character, i.e., electrons mostly move in the [Mg(2)\({}_{2}\)Bi\({}_{2}\)] layer [12]. However, using quantitative analysis of chemical bonding, Zhang _et al._[34] showed that the interlayer and intralayer bonds of Mg\({}_{3}\)Bi\({}_{2}\) are largely ionic with partial covalent nature. Thus, Mg\({}_{3}\)Bi\({}_{2}\) exhibits a nearly isotropic three-dimensional (3D) bonding network, leading to mostly isotropic \(\kappa_{\rm ph}\). Interestingly, such 2D-electron and 3D-phonon transports of Mg\({}_{3}\)Bi\({}_{2}\) are opposite to the 3D-electron and 2D-phonon transports of SnSe [35]. We can see that \(\kappa_{\rm ph}\) is reduced by about 35% due to the enhanced anharmonicity of the S2 shearing mode. Since the SOC affects the shear modulus, it also reduces the longitudinal and transverse sound velocities (see Table 1). Thus, \(\kappa_{\rm diff}\) is also reduced by the SOC (\(\kappa_{\rm diff}=0.40\) and 0.42 W/mK with and without SOC, respectively). As shown in Fig. 3, \(\kappa_{\rm ph}\) is much lower than the experimental observation [5; 6; 8] when \(T>50\) K due to the neglect of the temperature-dependent anharmonic renormalization of the phonon frequencies [36]. By using \(\kappa_{\rm diff}\) to correct this term, \(\kappa_{\rm ph}+\kappa_{\rm diff}\) can reproduce the experimental observation [5; 6; 8]. In conclusion, we have employed a two-channel model to study the effects of the SOC on the phonon dispersion, phonon anharmonicity, and thermal conductivity of Mg\({}_{3}\)Bi\({}_{2}\). Our calculations reproduce well the experimental data. The SOC not only enhances the anharmonicity of the interlayer shearing mode but also reduces the longitudinal and transverse sound velocities. Therefore, the SOC can have a considerable impact on thermal transport properties. Our calculations suggest a potential way for manipulating phonon transport by tuning the SOC.

Figure 2: Phonon lifetime \(\tau\) of Mg\({}_{3}\)Bi\({}_{2}\) at room temperature \(T=300\) K plotted as a function of phonon frequency \(\omega\) with SOC (red dots) and without SOC (blue dots). The black solid and dashed curves give the minimum lifetime \(\tau=\pi/\omega\) from the CWP formula and \(\tau\propto 1/\omega^{2}\) from the Umklapp scattering, respectively.

## Acknowledgments N.T.H. acknowledges financial support from the Frontier Research Institute for Interdisciplinary Sciences, Tohoku University.
2302.03154
Conversation Regression Testing: A Design Technique for Prototyping Generalizable Prompt Strategies for Pre-trained Language Models
Pre-trained language models (LLMs) such as GPT-3 can carry fluent, multi-turn conversations out-of-the-box, making them attractive materials for chatbot design. Further, designers can improve LLM chatbot utterances by prepending textual prompts -- instructions and examples of desired interactions -- to its inputs. However, prompt-based improvements can be brittle; designers face challenges systematically understanding how a prompt strategy might impact the unfolding of subsequent conversations across users. To address this challenge, we introduce the concept of Conversation Regression Testing. Based on sample conversations with a baseline chatbot, Conversation Regression Testing tracks how conversational errors persist or are resolved by applying different prompt strategies. We embody this technique in an interactive design tool, BotDesigner, that lets designers identify archetypal errors across multiple conversations; shows common threads of conversation using a graph visualization; and highlights the effects of prompt changes across bot design iterations. A pilot evaluation demonstrates the usefulness of both the concept of regression testing and the functionalities of BotDesigner for chatbot designers.
J. D. Zamfirescu-Pereira, Bjoern Hartmann, Qian Yang
2023-02-06T23:25:35Z
http://arxiv.org/abs/2302.03154v1
# Conversation Regression Testing: A Design Technique for Prototyping Generalizable Prompt Strategies for Pre-trained Language Models ###### Abstract. Pre-trained language models (LMs) such as GPT-3 can carry fluent, multi-turn conversations out-of-the-box, making them attractive materials for chatbot design. Further, designers can improve LM chatbot utterances by prepending textual _prompts_ - instructions and examples of desired interactions - to its inputs. However, prompt-based improvements can be brittle; designers face challenges systematically understanding how a prompt strategy might impact the unfolding of subsequent conversations across users. To address this challenge, we introduce the concept of Conversation Regression Testing. Based on sample conversations with a baseline chatbot, Conversation Regression Testing tracks how conversational errors persist or are resolved by applying different prompt strategies. We embody this technique in an interactive design tool, _BotDesigner_, that lets designers identify archetypal errors across multiple conversations; shows common threads of conversation using a graph visualization; and highlights the effects of prompt changes across bot design iterations. A pilot evaluation demonstrates the usefulness of both the concept of regression testing and the functionalities of BotDesigner for chatbot designers. 
To leverage prompts effectively, chatbot designers first need to understand (1) in which conversational contexts the pre-trained LM is likely to fail and (2) how frequent or damaging each failure or failure mode is, in order to identify the right problems to solve with prompts. Next, designers need to (3) identify a prompt strategy that can fix the target failure in its original conversational context, and finally, to assess its _generalizability_ and _robustness_ systematically, that is, assessing (4) whether it can fix similar failures in other conversational contexts, and whether it might cause new errors across the numerous ways the conversations can unfold subsequently for different users. These are challenging tasks (Friedman et al., 2017; Goyal et al., 2017; Goyal et al., 2017). A few HCI researchers have started to create workflows and tools that aid prompt strategy design, for example, for human-LM collaborative writing (Zamfirescu et al., 2018). However, such workflows and tools for chatbots are extremely rare. Instead, chatbot designers often experimented with prompts _ad-hoc_ using tools such as GPT-3 Playground (Zamfirescu et al., 2018); some even treated prompt strategy design as _"rolling the dice"_ (Zamfirescu et al., 2018). It remains unclear how designers can holistically analyze the highly-contextual errors LMs make across conversations (challenges 1, 2), or how they can resolve the errors without unknowingly causing new errors in preceding or subsequent conversations (challenges 3, 4). As a step toward more systematic and rigorous prompt strategy prototyping, we introduce the concept of _Conversation Regression Testing_. Taking inspiration from software regression testing, _Conversation Regression Testing_ uses the conversational contexts where a baseline LM has failed (or notably succeeded) as reusable test cases and helps designers track the effects of prompt strategy updates on these test cases. This approach allows designers to freely experiment with many prompt strategies to address a particular error in context, while ensuring the system's overall stability and a trajectory of continuous improvements. Operationalizing this concept, we then present BotDesigner, a prompt strategy prototyping tool that integrates the _Conversation Regression Testing_ workflow into an interactive machine learning analysis tool (one that tracks model performance across iterations and provides insights into what changes yield what performance improvements). Such tools have shown remarkable traction with designers and developers in non-conversational domains (e.g., Weights and Biases (Bordesigner et al., 2017)). BotDesigner consists of four components: * **Conversation Collector**, an interface for collecting sample conversations between a baseline LM-based chatbot and real-world users (or crowd workers); * **Annotator**, an interface for inspecting and cataloging the problematic (or successful) utterances made by the baseline bot, across many conversations with multiple users. These errors are opportunities for prompts to help, as well as test cases for _Conversation Regression Testing_; * **Visualizer**, a graphic visualization that aids designers to identify archetypal errors by showing the baseline bot's failures and successes against the backdrop of common end-user-LM conversation patterns. These archetypal errors help designers to prioritize their prompt design efforts; * **Regression Tester**, features that embody _Conversation Regression Testing_. 
When designers experiment with a new prompt strategy, these features enable them to track whether the target error persists or gets resolved, or if new errors have appeared, as a result of the new strategy. This paper presents the concept of _Conversation Regression Testing_, the implementation of BotDesigner, and a small user evaluation study that preliminarily demonstrates the usefulness of both for chatbot designers when designing instructional chatbots. This paper makes two contributions, one conceptual and one technical. The primary contribution is the concept of _Conversation Regression Testing_ for prompt strategy design. While most prior work focused on exploration-and-ad-hoc-testing stage of prompt design, _Conversation Regression Testing_ offers an initial workflow to for assessing prompt strategies' robustness and generalizability. Secondly, the technical contribution of this paper lies in the techniques for implementing BotDesigner. It presents a novel conversation visualization technique that visualizes common conversation patterns across many discrete conversations between an LM and various users. It can be useful for developing many other human-LM interaction analysis or design tools. BotDesigner also implements an interface for _Conversation Regression Testing_, a technique that can be valuable for prototyping prompts for many other LM applications beyond conversational interactions. ## 2. Related Work We briefly review three threads of related work: 1) workflows and tools for interactively improving NLP model performance and 2) for improving conversational UX, and finally 3) prior conversation visualization techniques and analytical tools. ### NLP Modeling Workflows and Tools NLP modeling workflows and tools roughly fall under three categories (Goyal et al., 2017). _Fully supervised learning_, where a task-specific model is trained on a dataset of input-output examples for the task, has long played a central role in machine learning (ML) and natural language processing (NLP). Because fully labeled datasets are often insufficient for learning high-quality models, interactive NLP tools for improving model performance focused heavily on assisting feature engineering; providing models with the appropriate inductive bias to learn from this limited data. Towards this goal, supervised NLP tools most often embodied one of the two workflows: * Tools such as LightSIDE (Goyal et al., 2017) assist NLP modelers to define and extract salient features from raw data. These tools adopted a five-step workflow that many seminal interactive ML tools (e.g., Crayons (Crayons, 2016), ModelTracker (Bordesigner et al., 2017), Gestalt (Viebles et al., 2017), and Weights and Biases (Bordesigner et al., 2017)) have pioneered: Modelers (i) inspect raw data; (ii) label data or extract features from the data, sometimes with the assist of ML; (iii) train an initial model, (iv) classify, view, and correct the model's outputs, and (v) iterate on this process while the tools track the model's performance improvements and provide insight into what changes yield the improvements. * The second workflow emerged in response to the criticism that the above workflow left out considerations of ML amateurs(Bordesigner et al., 2017). 
Researchers created _"human-centered ML tools"_ that added end-users to every step of the first workflow (e.g., allowing them to provide traces of their natural interaction with the model for model training (Krause et al., 2017) and transfer learning (Krause et al., 2017), nominate features [11], demonstrate desired model behaviors [(42)], etc.) These tools demonstrated that integrating an understanding of end-users and their natural interaction data into the ML workflow can improve both UX and model performance [(1)]. In 2017-2019, the standard way of NLP modeling shifted to "_pre-train and fine-tune_", with fully supervised learning playing an ever-shrinking role [(25)]. This paradigm embodies a two-step-only, no-longer-task-specific ML workflow. * Modelers pre-train a model with a fixed architecture on large, unlabeled textual data. In this process, the pre-trained LM learns general-purpose language features that can be used for a wide range of tasks (e.g., predicting the next line of code or prose, document summarization, biomedical question answering, translation, and more.) GPT [(9; 39)] and BERT [(13)] exemplify families of pre-trained LMs. * Modelers adapt the pre-trained LM to the particular interaction task at hand through fine-tuning. In this paradigm, the main focus of model tuning turned from feature to objective engineering, designing the training objectives for both pre-training and fine-tuning. As a result, most aforementioned interactive ML tools no longer apply. While a few commercial general-purpose ML tools (e.g., Azure [(45)]) can support this new workflow, we did not find interactive NLP tools tailored for this workflow in our literature search. The past two years have witnessed another paradigm shift in NLP: the rise of the "_pre-train, prompt, and predict_" paradigm [(25)]. This paradigm follows roughly the 2-step workflow above. However, instead of adapting pre-trained LMs to particular tasks via objective engineering, modelers reformulate the tasks to look more like those solved during the original LM training with the help of a textual _prompt_. For example, GPT-3 can automatically translate users' natural language requests to html code using the prompt template/strategy "web code description: <natural language request> html:<html> css: <css> javascript: <js>" [(18)]. Modelers curate a large set of such prompts using a template and retrain the LM with them [(14; 3; 11)]. In this paradigm, the main focus of model tuning turned to prompt engineering, designing the appropriate prompts and prompt strategies that yield the desired model behaviors. Further, because many prompts are human-readable, prompts also present renewed opportunities to engage end-users in the modeling process. Tools have emerged to enable crowd workers or end-users to contribute queries and prompt strategies [(14; 3)]. Notably, even for experts, identifying robust and generalizable prompt strategies requires extensive trial and error, where modelers iteratively experiment and assess the effects of various prompt strategies on concrete input-output pairs, before assessing them more systematically on large conversation datasets. A well-established prompt design workflow does not yet exist. How a prompt or a prompt strategy may directly impact model outputs, or how it modifies a pre-trained LM's billions of parameters during re-training, are both active areas of NLP research [(25; 35)]. 
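As a toy illustration of the prompt-template idea (not code from any of the cited systems), a template can be treated as a plain string with slots that are filled at query time and handed to whatever completion interface is available; the `complete` callable below is a placeholder, not a specific library API.

```python
# Template with a slot for the user's natural-language request; the field
# name and the trailing "html:" cue are illustrative assumptions only.
WEB_CODE_TEMPLATE = "web code description: {request}\nhtml:"

def build_prompt(request: str) -> str:
    """Reformulate a natural-language request as a text-completion prompt."""
    return WEB_CODE_TEMPLATE.format(request=request)

def generate_html(request: str, complete) -> str:
    """`complete` is any callable mapping a prompt string to an LM completion."""
    return complete(build_prompt(request))

# Hypothetical usage:
# html = generate_html("a page with a red centered heading", complete=my_lm_call)
```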
### Prototyping Chatbot UX A well-established workflow exists for designing and prototyping multi-turn conversational interactions and experiences ("_chatbot UX_", for short) [(10; 12; 19; 20; 34; 37)]. Following this workflow, chatbot designers first (i) identify the chatbot's functionality or persona and draft ideal user-bot conversations, for example, through Wizard-of-Oz or having experts draft scripts; (ii) create a dialogue flow template (e.g., "_greeting message, questions to collect user intention, ..._"); and finally (iii) fill the template with supervised NLP models (e.g., user intention classifier, response generator, etc.) Many tools exist that support this process, for example, Google Dialogflow and Facebook Messenger tools for steps (ii) and (iii). While highly valuable, these conversation-template-oriented tools are ill-fitted for pre-trained LMs. However, chatbot design tools for the pre-train-and-prompt paradigm are extremely rare. The closest related work is _AI Chains_ [(41)], a tool for exploring human-LM collaborative writing interactions. It allows designers to construct a chain of LMs where the output of one LM becomes the input for the next, and to test the resulting interactions themselves. The tool successfully enabled designers to explore prompt and chaining strategies more efficiently and strategically [(40)]. However, it is unclear whether the resulting strategies were effective or robust beyond the few interaction contexts that the designers experimented with. ### Conversation Visualization and Analysis Prior work on visualizing conversations has either focused on visualizing the structure of a dyadic (email) [(38)] or multi-party conversation [(15; 44)] over time (newsgroups, etc.); or, it has sought to create a more abstract, higher-level picture of the topics covered in a conversation [(4)]. Our needs here are different, since we're considering the unique setting of multiple independent conversations about the same topic--visualizing which pieces are shared and which are unique to each conversation. Some related work does also touch on the adjacent task of visualizing the structure of multiple _tutorials_ (rather than conversations) covering a single topic, exploring which pieces are shared and which are unique to each tutorial [(21; 33)]. ## 3. Conversation Regression Testing We wanted to help chatbot designers to freely prototype and systematically evaluate prompt strategies, thereby empowering them to leverage pre-trained LMs and prompts in their design. To this end, we introduce the concept of _Conversation Regression Testing_. ### Definition and Benefits _Conversation Regression Testing_ is an iterative workflow for prototyping and evaluating prompt strategies. Following this workflow, chatbot designers start by identifying a baseline prompt strategy (or an off-the-shelf pre-trained LM, i.e. with no prompt strategy). They then carry out the following complementary activities (Figure 2): 1. _Collect human-LM conversations_: Collect a diverse set of conversations between the baseline LM and end-users through crowdsourcing or in-person user studies; 2. _Inspect and catalog LM errors and successes in context:_ Inspect the errors and successes both in the contexts where they occurred and in aggregate, across the myriad ways the baseline user-LM conversations have unfolded; add noteworthy user-LM conversation turns to a suite of regression test cases; 3. 
_Identify an archetypical error_ based on how frequent or damaging each error or error pattern is; develop intuitions of possible new prompt strategies for addressing the error; 4. _Identify a locally-effective prompt strategy:_ Experiment with new prompt strategies to fix a particular archetypical error in the conversational context where it originally occurred; identify one locally effective prompt strategy; 5. _Regression test for robustness and generalizability:_ Apply the locally effective prompt strategy to the entire regression test suite, inspecting its robustness (whether it has fixed similar failures in other conversational contexts) and generalizability (whether it has caused new errors across the numerous ways the conversations can unfold subsequently for different users). If not, iterate on steps 4-5 or even collect more conversations (steps 1-5) before proceeding. If positive for both, continue; 6. _Iterate while tracking:_ Consider the robust and generalizable prompt strategy as a new baseline, and iterate on the whole process (steps 1-6) while tracking which errors have been resolved versus persisted. Central to this workflow are the concepts of _conversation regression testing_ and _prompt prototyping in human-LM conversational contexts_. They highlight the benefits of _Conversation Regression Testing_ over existing common practices. **Benefits over current chatbot UX prototyping workflow.** Similar to software regression test suites (Bartmann et al., 2017), conversation regression test suites enable chatbot designers to track the effects of prompt strategy updates on many discrete conversations with different users. This approach is particularly valuable for prompt strategy design, because UX improvements and breakdowns caused by prompts are often brittle. In comparison to the current UX practice where designers tend to test their prompt strategies on the utterances they themselves authored in an ad-hoc manner (Kraus et al., 2017; Kraus et al., 2018), conversation regression test cases enable designers to freely experiment with many prompt strategies, without unknowingly causing new errors in preceding or subsequent conversations. Importantly, _Conversation Regression Testing_ is not _merely_ Regression Testing applied to prompt design. _Conversation Regression Testing_ is a rapid and iterative prototyping process. Each iteration resolves an error or an error mode without regression. This is different from software regression tests, whose use is typically limited to when new program updates reintroduce _old_ errors (hence the name _regression_). **Benefits over current NLP practice.** _Conversation Regression Testing_ highlights the importance of the use of user-LM conversation texts throughout the prompt strategy prototyping process. Designers inspect errors and test new strategies, both in the original user-LM conversational contexts where errors (or successes) occurred. This is a departure from current common NLP practice, where modelers typically evaluated prompt strategies on pre-curated human-human conversation datasets. Taking a lesson from human-centered ML work, end-user interactions with a model - particularly their reactions to its errors - should not be an afterthought. ### Conversation Regression Testing In Practice: An Example Design Process Let us ground the concepts and workflow of _Conversation Regression Testing_ and their benefits in a concrete example. 
Suppose we are chatbot designers creating an _ExerciseBot_, a voice-based conversational agent that walks users through a set of physical exercises that they can perform at their desk. Following the _Conversation Regression Testing_ workflow, we can rapidly prototype various prompt strategies in-context and systematically evaluate their robustness and generalizability.

We start by identifying a baseline prompt strategy. Here we use GPT-3's text-davinci-001 model (setting temperature = 0) out-of-the-box. We use the simple combination of a set of publicly available exercise instructions and a request to "_instruct the user in completing each exercise step-by-step_" as our baseline (Table 2).

1. _Collect human-LM conversations:_ We collect 30 conversations between the baseline bot and 10 Mechanical Turk workers, which yields many creative yet realistic utterances that we could hardly anticipate ("_At my age I'm going to have to break them up._", "_Is it more effective to do all [exercises] at once?_").
2. _Inspect and catalog errors and successes in context:_ We found that the baseline prompt strategy is sufficient to create a passable chatbot that, most often, naturally walked users through the exercise step-by-step (e.g., User: "_At my age I'm going to have to break them up._" Bot: "_That's ok, just try to complete all 5 reps._") We also identified a number of error patterns. For example, the "_skip a step_" error occurs when the bot skips a step while walking users through the exercises. The "_unsympathetic_" error occurs when the bot routinely ignores user requests for help ("_Can we try an easier exercise?_") or expressions of distress ("_Ow, that hurt._"). We collected these conversations as substrates for our _Conversation Regression Testing_ test suite.

\begin{table} \begin{tabular}{l} \hline \hline **Baseline prompt** \\ \hline Consider the following set of exercises: \\ 1. Tricep Dips. Scoot to the front of your chair, with \\ both hands facing forward, [...] \\ 2. Seated Leg Lifts. Grab the sides of your chair [...] \\ [...] \\ Instruct the user in completing each exercise \\ step-by-step. \\ Don’t skip any steps. \\ \hline \hline \end{tabular} \end{table} Table 2. The baseline and improved prompts in the _ExerciseBot_ design example.

3. _Identify an archetypical error:_ We chose to focus on the "_skip a step_" error, since it causes confusion if not physical danger during the exercises. It has also frequently caused breakdowns in subsequent conversations when users requested clarifications.
4. _Identify a locally-effective prompt strategy:_ After extensive experimentation, we resolved the "_skip a step_" error by simply appending the explicit instruction "Don't skip any steps." to the end of the baseline prompt, before the user-bot conversations begin. Another locally-effective strategy is to number the sub-steps within each step of the exercises in the initial prompt (Table 2).
5. _Regression test for robustness and generalizability:_ Applying the two new strategies to the previously curated test cases, we noticed that the explicit instruction strategy consistently resolves the "_skip a step_" error, while the numbering-the-steps strategy only worked for some exercises. However, in some contexts, the explicit instruction strategy caused a side effect: it makes the bot stubbornly stick to the step-by-step exercise instructions, even when users say a step is too hard. It could worsen the "_unsympathetic_" error.
With this trade-off in mind, we iterate on steps 4-5, exploring additional prompt strategies that may work even better. We could also choose to collect additional conversations (for example, on a different set of exercises), thereby identifying new patterns of errors and successes (steps 1 and 2). This approach allows us to fully understand the extent to which the new prompt strategy is robust and generalizable before adopting it.

6. _Iterate on this process to tackle additional errors_ while tracking ExerciseBot's behavior changes using the _Conversation Regression Testing_ test suite.

## 4. BotDesigner: A Tool that Operationalizes _Conversation Regression Testing_

We present BotDesigner, a chatbot prompt strategy prototyping tool that operationalizes the _Conversation Regression Testing_ workflow described in §3.2.

### System Overview

BotDesigner enables _Conversation Regression Testing_ with the following functionality:

(1) A **conversation collection** interface that enables the crowdsourcing of a set of baseline conversations with a baseline GPT-3-based chatbot; this interface enables step (1) described in §3.2.

(2) A **conversation visualization** and **annotation** interface that shows conversation flow across multiple users' conversations (for a single _task_, defined in §4.2) using a graph interface, highlighting which utterances are common across conversations, and aiding in the categorization and tagging (annotation) of individual problematic or particularly successful bot-provided utterances for targeted improvement or maintenance. This interface enables steps (2)-(3) from §3.2.

(3) An **utterance testing** interface that situates individual problematic utterances in context and highlights changes to those utterances caused by updates to the bot. This interface enables steps (4)-(5) from §3.2.

In conjunction with a built-in code editor, these interfaces support iteration over chatbot prompt designs.

Figure 2. _Conversation Regression Testing_ workflow and BotDesigner data flow.

### Inputs

BotDesigner relies on three types of input data: **conversations**, **tasks**, and **templates**, representing, respectively, individual multi-turn user interactions with a specific bot (_conversation_), a set of structured instructions that make up the user's task (_task_), and a set of prompts comprising a specific point design for a chatbot (_chatbot template_). Although we believe _Conversation Regression Testing_ can be usefully applied to any type of chatbot, we chose to focus on _task-oriented instructional_ interactions because of the opportunities for aggregation offered by similarities across multiple conversations by multiple users focused on the same task.

**Conversations** are specific multi-turn interactions collected by BotDesigner, consisting of a dialog data structure that includes each conversation partner's utterances as well as any error annotations provided _post facto_ by the designer or human conversation partner. Each conversation is attached to the specific template and recipe used to generate the bot's utterances.

**Tasks** are specific structured task descriptions comprised of a name, description, and set of steps the user is expected to complete. Some tasks may also include metadata such as a list of the items required to complete the task.

**Chatbot templates** describe the set of prompts that are sent as a prefix to the backing LM (GPT-3 in the case described here).
Each template contains instructions for (1) how to convert a structured **task** of the appropriate type into plain text, suitable for inclusion into the LM text prompt, and (2) code describing how to lay out, in the prompted text, the turn-by-turn dialog-in-progress that is stored in the **conversation**. Templates also describe how the LM output should be parsed and the bot's response utterance extracted. See Fig. 3 for an example.

### Using BotDesigner

BotDesigner supports each of the four steps of _Conversation Regression Testing_: In **conversation collection** mode, BotDesigner requests utterances from the user, generates a full prompt, sends it to GPT-3's API requesting a prediction for the following tokens, receives GPT-3's response, extracts the predicted bot utterance, and displays it to the user. See Figure 4 for an example of this interface.

Figure 7 shows a screenshot of BotDesigner's _prompt testing_ interface being used to evaluate a new prompt template. This view groups all tagged utterances (by tag) and displays the utterances in each group with two lines of context before and after each tagged utterance. Utterances with multiple tags are duplicated in each group. In the specific screenshot in Fig. 7, a new prompt template is being applied to baseline conversations for utterances bearing the skip tag. In this example, every utterance now includes the correct first step; additionally, the second conversation snippet's utterance has also changed to explicitly address the user's prior utterance requesting that the bot "[...]hang on while I get a chair". The ability to quickly see the effects of prompt changes allows designers to rapidly iterate on ideas and quickly eliminate approaches that don't work for a specific utterance, or don't work across a whole class of utterances, to converge on approaches that offer the most "bang for the buck" in terms of improved outcomes while avoiding regressions.

Merging identical utterances from different conversations into a single node can, however, introduce cycles in the conversation graph, for example when one utterance precedes another in conversation 1 but follows it in conversation 2. To resolve these, a "decycling" operation splits one of the two merged nodes back into separate nodes and updates the graph edges to preserve the original conversational flows. The resulting conversational DAG is laid out and displayed using the d3-dag extension to d3.js.

_Regression Testing._ To evaluate whether a particular **template** change affects any of the identified problematic utterances, BotDesigner replays conversations containing errors and displays any modified responses. Two implementation approaches are possible for this task: a system could either perform an "individual replay" by assuming all conversational turns prior to the error will occur as in the original conversation, and test only whether the error utterance is changed; or it could perform a "total replay" in which every conversational turn is replayed and any changed utterances are flagged for user review. Both approaches have merit; the "total replay" approach is more consistent with the "regression testing" concept (certainly, a designer would not want to inadvertently introduce problematic utterances where none previously existed), but providing clear feedback requires identifying which conversational turns have changed only in trivial ways, itself a nontrivial task. For BotDesigner, we default to the "individual replay" in an attempt to reduce noise, and accept the resulting short-term trade-off in accuracy in exchange for more rapid iteration, leaving designers with the need to perform more extensive testing before deployment.
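To make the replay mechanics concrete, the following is a minimal Python sketch of the "individual replay" strategy, under the assumption of a simple plain-text prompt layout. It is not BotDesigner's actual implementation; the data structures, the `build_prompt` layout, and the `query_lm` callback (a stand-in for a call to the backing LM) are illustrative choices of ours.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Turn:
    speaker: str                                   # "user" or "bot"
    text: str
    tags: List[str] = field(default_factory=list)  # e.g. ["skip", "unsympathetic"]

@dataclass
class Conversation:
    turns: List[Turn]

def build_prompt(template: str, history: List[Turn]) -> str:
    """Lay out the template (task instructions + request) followed by the
    dialog so far, ending with 'Bot:' so the LM predicts the next bot turn."""
    lines = [template, ""]
    for turn in history:
        prefix = "User:" if turn.speaker == "user" else "Bot:"
        lines.append(f"{prefix} {turn.text}")
    lines.append("Bot:")
    return "\n".join(lines)

def individual_replay(conversations: List[Conversation], new_template: str,
                      query_lm: Callable[[str], str], tag: str = "skip") -> List[dict]:
    """Re-generate only the tagged bot utterances, assuming every earlier
    turn occurs exactly as in the original conversation."""
    results = []
    for conv in conversations:
        for i, turn in enumerate(conv.turns):
            if turn.speaker != "bot" or tag not in turn.tags:
                continue
            prompt = build_prompt(new_template, conv.turns[:i])
            updated = query_lm(prompt).strip()
            results.append({
                "original": turn.text,
                "updated": updated,
                "changed": updated != turn.text,
                "context": [t.text for t in conv.turns[max(0, i - 2):i]],
            })
    return results
```

A "total replay" variant would instead regenerate every bot turn in order, feeding each newly generated utterance back into the prompt for the subsequent turns, and would then have to decide which of the resulting textual differences are meaningful enough to surface to the designer.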
## 5. Evaluation

To evaluate the effectiveness of BotDesigner in aiding conversational agent design, and to understand the value of _Conversation Regression Testing_, we ran a small (\(N=3\) participants) qualitative pilot study with a design researcher (P1), a conversational agent designer (P2), and an NLP researcher (P3). We ran this study primarily looking at two outcomes: how effectively participants could identify common or particularly severe bugs or errors in a baseline chatbot, and how effectively they could evaluate a new template for improvements.

### Method

_Participants._ Since prompt-based chatbot design is not yet a common practice in industry, we recruited academic researchers with an interest in and experience with conversational agent design.

_Tasks._ We asked participants to perform two parts of the _Conversation Regression Testing_ pipeline. We collected conversations in advance from AMT workers, and then asked participants to (1) browse the collected conversations to find errors and annotate them with categorization tags; (2) evaluate a "new" template, provided by us, with modified prompts, and report whether this new template resolved any of the errors participants had previously identified. Participants were introduced to the tool and its basic use for about 10 minutes, asked to create some baseline conversations, and then asked to spend 10-15 minutes on each of the tasks above. We recorded participants' responses to using the tool and measured whether they detected a set of 5 error categories we previously identified in this dataset: (1) skipped steps, (2) ignoring user expressions of pain, (3) ignoring user requests to wait until the user had completed some task (i.e., "hang on, let me get a chair"), (4) factually incorrect responses to questions, and (5) otherwise unhelpful responses. We also measured whether participants could identify which particular error categories were improved by the new template.

Figure 6. A sample of the _conversation visualizer_ reflecting the first few turns of 12 conversations, half of which were "forked" and thus share a substantial prefix of turns.

It bears noting that we did _not_ ask users to engage in the task of prompt engineering; despite recent work exploring its potential, and our confidence in the value of large pre-trained LMs as a design material, the pool of designers making use of prompt engineering and large pre-trained LMs in the design of chatbots is small. Further, we did not want to spend time training participants in prompt engineering or to depend on participants' intuitions about prompt changes to understand whether the _technique_ of _Conversation Regression Testing_ is effective at helping designers understand the impacts of _particular_ prompt changes.

### Findings

Overall, we found that each of our participants could effectively (1) find errors across conversations using BotDesigner, and (2) evaluate whether a new prompt template improved outcomes across the identified errors. Here, we report some of the insights we gathered from our participants.

Figure 7. An example of the _Conversation Regression Testing_ panel of BotDesigner. The left column shows individual _original_ tagged chatbot utterances with individual Test buttons, while the highlighted utterances in the center column show the results of applying the modified chatbot template (right-hand side code panel) to the corresponding "baseline" utterance (left column).
#### 5.2.1. Identifying Errors

From our first participant (P1), we learned of an interest in tagging _effective_ conversational turns in addition to errors; this motivated the "regression testing" we use, and we subsequently found that **all** our participants were interested in tagging strong responses in addition to errors. Two of our participants found all 5 categories of error (P1, P2), while one participant (P3) did not understand that tag names were for human use (not used as descriptions in some training process), and thus found only 3 of the 5 categories of error. All 3 participants found the tagging process straightforward, and P1 in particular appreciated the ways in which the conversations could be modified and rolled back: "oh, that's useful" (P1). P1 also noted that determining whether some utterances were logically sound sometimes required substantial understanding of the underlying instructional task, which made catching errors a function of the willingness to manually scroll between the exercise template and the conversation. Regarding the specific functionality used to understand the flow of conversations, P2 noted that "I think this diagram [Ed: the conversation visualizer] has a real potential to help me understand what's going on in the conversation [...] having a graph of all the conversations is really something valuable that I would appreciate."

#### 5.2.2. Testing New Prompts

All 3 participants were able to identify which classes of error were improved by the new bot template. Our chatbot designer participant in particular (P2) interrupted the study halfway through to ask whether we could instead load up conversations _they had collected_ and import a bot template _they had constructed_ and to _continue the study with their template and data_: "You know I do have real life data, and we can use this [to improve my prompts.]" (P2) Though of course anecdotal, we consider this request to be a strong endorsement of the effectiveness of _Conversation Regression Testing_ as a technique and BotDesigner as a method for applying it. After using the testing interface shown in Fig. 7 for the evaluation task, P2 notes:

I think you found out very interesting things. I didn't think really about how I can control all the interactions...you know I use a high temperature for the chatbot, and I really like it because the conversations are becoming awesome with the new models, just fantastic, but I don't have control. I don't know what is produces, you know. **This kind of tool, as a plug-in for an AI system, that shows you a log of what happened on the system, and then you can this data to fine tune the user experience.**

Our observations of participants using BotDesigner hint at the substantial value of systematizing the typical trial-and-error approach, which otherwise makes it very challenging to assess prompt changes across multiple conversations rather than single turns at a time.

## 6. Limitations & Future Work

One fundamental assumption of the approach described here is that there is common structure across multiple dialogs. In step-by-step instructions, this is straightforward. In other conversation domains, how to align different conversations to common structure might be a research topic in itself. Though it would probably be helpful, the tested implementation of BotDesigner does not present an aggregate picture of which classes of annotated utterances are improved or get worse, nor of whether the changes in the produced utterances are meaningfully different or merely textually distinct.
We do not yet offer tools for tracking the evolution of utterances over time: if the interactive loop is about changing prompts, some changes will make things better, others will make things worse, and some changes may be modular while others are not. Supporting this would likely require tracking prompt state and responses over time. Future improvements to BotDesigner could also include the use of large pre-trained LMs to automate some tasks the designer currently performs, such as comparing baseline utterances with new utterances produced by an updated bot, or finding utterances with identical content but distinct text across conversations.

## 7. Conclusion

The combination of pre-trained large language models (LMs) and prompts offers exciting new opportunities for chatbot design. However, identifying robust and generalizable prompt strategies that can effectively improve conversational interactions has so far been challenging. Designers face challenges both in holistically analyzing the highly contextual errors LMs make across conversations and in resolving those errors without unknowingly causing new errors in preceding or subsequent conversations. This paper makes progress on these critical challenges.

The primary contribution of this paper is the concept of _Conversation Regression Testing_ for prompt strategy design. Without model retraining, UX improvements from prompts tend to be brittle. Identifying truly effective prompt strategies requires systematic methods for assessing their robustness and generalizability. Such methods have been missing in prompt-related HCI research. _Conversation Regression Testing_ offers a first step in filling this critical gap.

The technical contribution of this paper lies in the techniques for implementing BotDesigner. It presents a novel conversation visualization technique that visualizes common conversation patterns across many discrete conversations between an LM and various users. This technique not only enables BotDesigner to aggregate LM errors without losing error contexts, but can also be useful for developing many other human-LM interaction analysis or design tools. BotDesigner ultimately implements an interface for _Conversation Regression Testing_, a technique that can be valuable for prototyping prompts for many other pre-trained LM applications beyond conversational interactions.
2308.01718
Symplectic tableaux and quantum symmetric pairs
We provide a new branching rule from the general linear group $GL_{2n}(\mathbb{C})$ to the symplectic group $Sp_{2n}(\mathbb{C})$ by establishing a simple algorithm which gives rise to a bijection from the set of semistandard tableaux of a fixed shape to a disjoint union of several copies of sets of symplectic tableaux of various shapes. The algorithm arises from representation theory of a quantum symmetric pair of type $A\mathrm{II}_{2n-1}$, which is a $q$-analogue of the classical symmetric pair $(\mathfrak{gl}_{2n}(\mathbb{C}), \mathfrak{sp}_{2n}(\mathbb{C}))$.
Hideya Watanabe
2023-08-03T12:22:12Z
http://arxiv.org/abs/2308.01718v1
# Symplectic tableaux and quantum symmetric pairs ###### Abstract. We provide a new branching rule from the general linear group \(GL_{2n}(\mathbb{C})\) to the symplectic group \(Sp_{2n}(\mathbb{C})\) by establishing a simple algorithm which gives rise to a bijection from the set of semistandard tableaux of a fixed shape to a disjoint union of several copies of sets of symplectic tableaux of various shapes. The algorithm arises from representation theory of a quantum symmetric pair of type \(A\mathrm{II}_{2n-1}\), which is a \(q\)-analogue of the classical symmetric pair \((\mathfrak{gl}_{2n}(\mathbb{C}),\mathfrak{sp}_{2n}(\mathbb{C}))\). 2020 Mathematics Subject Classification: Primary 05E10; Secondary 17B10, 17B37 ## 1. Introduction ### Branching rules Let \(G\) be a group and \(\hat{G}\) a complete set of representatives of the equivalence classes of certain irreducible \(G\)-modules. Let \(H\) be a subgroup of \(G\). It is a fundamental problem to determine how a given irreducible \(G\)-module \(V\in\hat{G}\) decomposes into irreducible \(H\)-submodules (if it does): \[V\simeq\bigoplus_{W\in\hat{H}}W^{m_{V,W}}.\] An explicit description of the multiplicities \(m_{V,W}\) is called a _branching rule_. The problem of finding branching rules for certain pairs \((G,H)\) of classical groups (the general/special linear groups \(GL_{m}(\mathbb{C})\), \(SL_{m}(\mathbb{C})\), symplectic groups \(Sp_{2n}(\mathbb{C})\), and (special) orthogonal groups \(O_{m}(\mathbb{C})\), \(SO_{m}(\mathbb{C})\)) has been studied for a long time, and several (partial) answers have been obtained (see [10] and references therein). In the present paper, we focus on the irreducible polynomial representations for the pair \((G,H)=(GL_{2n}(\mathbb{C}),Sp_{2n}(\mathbb{C}))\). The equivalence classes of irreducible polynomial representations of \(GL_{2n}(\mathbb{C})\) (resp., \(Sp_{2n}(\mathbb{C})\)) are parametrized by the set \(\mathrm{Par}_{\leq 2n}\) (resp., \(\mathrm{Par}_{\leq n}\)) of partitions of length at most \(2n\) (resp., \(n\)). For each \(\lambda\in\mathrm{Par}_{\leq 2n}\) and \(\nu\in\mathrm{Par}_{\leq n}\), let \(m_{\lambda,\nu}\) denote the corresponding multiplicity. Littlewood [12] provided a partial branching rule. Namely, he determined the multiplicities \(m_{\lambda,\nu}\) for all \(\lambda,\nu\in\mathrm{Par}_{\leq n}\), but not for all \(\lambda\in\mathrm{Par}_{\leq 2n}\). Sundaram [13] gave a complete branching rule. The key ingredients for her theorem are King's symplectic tableaux ([13]), Berele's insertion scheme for \(Sp_{2n}(\mathbb{C})\) ([1]), and Sundaram's algorithm. In her branching rule, the multiplicities are determined by counting certain tableaux, which we call _symplectic Littlewood-Richardson tableaux_. Naito and Sagaki [14] proposed a conjectural branching rule in terms of Littelmann paths. The conjecture was proved by Schumann and Torres [15]. 
### Results

In the present paper, we introduce a simple algorithm which gives rise to a bijection
\[\mathrm{LR}^{A\mathrm{II}}:\mathrm{SST}_{2n}(\lambda)\to\bigsqcup_{\begin{subarray}{c}\nu\in\mathrm{Par}_{\leq n}\\ \nu\subseteq\lambda\end{subarray}}Sp\mathrm{T}_{2n}(\nu)\times\mathrm{Rec}_{2n}(\lambda/\nu)\]
which sends a semistandard Young tableau of shape \(\lambda\in\operatorname{Par}_{\leq 2n}\) with entries in \([1,2n]:=\{1,\ldots,2n\}\) to a pair consisting of a symplectic tableau of some shape \(\nu\in\operatorname{Par}_{\leq n}\) with entries in \([1,2n]\) and a tableau, called a _recording tableau_, of skew shape \(\lambda/\nu\). As a byproduct, it turns out that the multiplicity \(m_{\lambda,\nu}\) coincides with the number \(|\mathrm{Rec}_{2n}(\lambda/\nu)|\) of recording tableaux of shape \(\lambda/\nu\). Therefore, our algorithm provides a new branching rule for \((GL_{2n}(\mathbb{C}),Sp_{2n}(\mathbb{C}))\). Moreover, it has deep representation theoretical information as we will see in the next subsection. We call the bijection the _Littlewood-Richardson map_ since it can be regarded as a generalization of the branching rule, known as the Littlewood-Richardson rule, for the pair \((GL_{m}(\mathbb{C})\times GL_{m}(\mathbb{C}),GL_{m}(\mathbb{C}))\).

Let us briefly explain our algorithm. Given \(T\in\operatorname{SST}_{2n}(\lambda)\), let \(\mathbf{a}=(a_{1},\ldots,a_{l})\) denote the first column of \(T\) (read from top to bottom), and \(S\) the other part. For the column \(\mathbf{a}\), define a new column \(\operatorname{red}(\mathbf{a})\) to be the one obtained from \(\mathbf{a}\) by removing the entries in the set \(\operatorname{rem}(\mathbf{a})\), which is defined by the following recursive formula:
\[\operatorname{rem}(\mathbf{a}):=\begin{cases}\emptyset&\text{ if }l\leq 1,\\ \operatorname{rem}(a_{1},\ldots,a_{l-2})\sqcup\{a_{l-1},a_{l}\}&\text{ if }l\geq 2,\ a_{l}\in 2\mathbb{Z},\ a_{l-1}=a_{l}-1,\text{ and }\\ &\quad a_{l}<2l-|\operatorname{rem}(a_{1},\ldots,a_{l-2})|-1,\\ \operatorname{rem}(a_{1},\ldots,a_{l-1})&\text{ otherwise.}\end{cases}\]
Then, define a new tableau \(\operatorname{suc}(T)\) to be the product \(\operatorname{red}(\mathbf{a})*S\) (in the plactic monoid). Set \(P^{0}:=T\), \(\nu^{0}:=\lambda\), and \(Q^{0}\) to be the unique tableau of shape \(\lambda/\lambda\). For each \(k\geq 0\), set \(P^{k+1}:=\operatorname{suc}(P^{k})\), \(\nu^{k+1}\) to be the shape of \(P^{k+1}\), and \(Q^{k+1}\) to be the tableau of shape \(\lambda/\nu^{k+1}\) such that
\[Q^{k+1}(i,j)=\begin{cases}Q^{k}(i,j)&\text{ if }(i,j)\notin D(\nu^{k}),\\ k+1&\text{ if }(i,j)\in D(\nu^{k}),\end{cases}\]
where \(D(\nu^{k})\) denotes the Young diagram of the partition \(\nu^{k}\). It turns out that this procedure eventually terminates; there exists a unique integer \(k_{0}\geq 0\) such that
\[P^{k}=P^{k_{0}},\ \nu^{k}=\nu^{k_{0}},\text{ and }Q^{k}=Q^{k_{0}}\ \text{ for all }k\geq k_{0}.\]
Set
\[P^{A\mathrm{II}}(T):=P^{k_{0}},\quad Q^{A\mathrm{II}}(T):=Q^{k_{0}}.\]
Now, the output \(\operatorname{LR}^{A\mathrm{II}}(T)\) of the algorithm is the pair \((P^{A\mathrm{II}}(T),Q^{A\mathrm{II}}(T))\):
\[\operatorname{LR}^{A\mathrm{II}}(T)=(P^{A\mathrm{II}}(T),Q^{A\mathrm{II}}(T)).\]

**Example 1.2.1**.: Let \(n=3\), \(\lambda=(4,3,2,2,1)\), and consider the semistandard tableau \(T\) of shape \(\lambda\) given by
\[T=\young(1124,223,44,56,6)\,.\]
Its first column is \(\mathbf{a}=(1,2,4,5,6)\), and we have
\[\operatorname{rem}(1,2,4,5,6)=\{1,2,5,6\},\quad\operatorname{red}(1,2,4,5,6)=(4),\]
and
\[P^{1}=\young(4)*\young(124,23,4,6)=\young(124,23,44,6)\,,\]
so that \(\nu^{1}=(3,2,2,1)\) and \(Q^{1}\) is the tableau of shape \(\lambda/\nu^{1}\) all of whose entries are equal to \(1\).
It turns out that this procedure eventually terminates; there exists a unique integer \(k_{0}\geq 0\) such that \[P^{k}=P^{k_{0}},\ \nu^{k}=\nu^{k_{0}},\text{ and }Q^{k}=Q^{k_{0}}\ \text{ for all }k\geq k_{0}.\] Set \[P^{A\Pi}(T):=P^{k_{0}},\quad Q^{A\Pi}(T):=Q^{k_{0}}.\] Now, the output \(\operatorname{LR}^{A\Pi}(T)\) of the algorithm is the pair \((P^{A\Pi}(T),Q^{A\Pi}(T))\): \[\operatorname{LR}^{A\Pi}(T)=(P^{A\Pi}(T),Q^{A\Pi}(T)).\] **Example 1.2.1**.: Let \(n=3\), \(\lambda=(4,3,2,2,1)\), and consider the semistandard tableau \(T\) of shape \(\lambda\) given by \[T=\begin{array}{c|c}\young 11244\\ \young 23}\\ \young 44}\\ \young 56}\\ \young 6}\end{array}\] Then, we have \[\operatorname{rem}(1,2,4,5,6)=\{1,2,5,6\},\] and \[P^{1}=\young 4}*\young{\young{\young{\young{\young{\young{\young{\young{\young{\young{\young{\young{ \young{\young{\youngyoung{\young The next step is computed as follows: We have \[\operatorname{rem}(1,2,4,6)=\{1,2\},\] and \[P^{2}=\youngyoung{4}{6}*\young{2}{3}=\youngyoung{2}{4}{3},\quad Q^{2}=\youngyoung{1}{2 }\] Let us proceed to the next step: We have \[\operatorname{rem}(2,3,4,6)=\{3,4\},\] and \[P^{3}=\youngyoung{2}{6}*\youngyoung{4}{6}=\youngyoung{2}{4}{6},\quad Q^{3}=\youngyoung{ 1}{2}\] Since \(\operatorname{rem}(2,6)=\emptyset\), the algorithm now terminates. Hence, we finally obtain \[\operatorname{LR}^{\operatorname{AII}}(T)=\left(\youngyoung{2}{4}{4},\young{ 2}{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, \young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3},\young{3}, 
Therefore, the Littlewood-Richardson map tells us not only the multiplicities \(m_{\lambda,\nu}\) but also how the irreducible \(\mathbf{U}\)-module \(V(\lambda)\) decomposes into irreducible \(\mathbf{U}^{\mathrm{t}}\)-submodules at \(q=\infty\). Hence, this result must be closely related to the theory of crystal bases for quantum symmetric pairs (_cf._ [21]).

### Organization

The paper is organized as follows. In Section 2, we collect terminology and basic results concerning partitions and tableaux which are necessary to formulate our main algorithm.
Then, we state our main theorem in Section 3. It consists of the bijectivity of the Littlewood-Richardson map and an explicit description of the recording tableaux. The rest of the paper is devoted to proving the theorem. In Section 4, we factor the reduction map \((\mathbf{a}\mapsto\text{red}(\mathbf{a}))\) into small pieces so that we can prove its injectivity. After reviewing representation theory of \(\mathfrak{gl}_{2n}(\mathbb{C})\), \(\mathfrak{sp}_{2n}(\mathbb{C})\), \(\mathbf{U}\), and \(\mathbf{U}^{\mathrm{t}}\) in Sections 5 and 6, we prove the surjectivity of the Littlewood-Richardson map via an investigation into the quantum Littlewood-Richardson map in Section 7. We finally complete the proof of our main theorem in Section 8 by relating the recording tableaux to the symplectic Littlewood-Richardson tableaux.

### Acknowledgements

This work was supported by JSPS KAKENHI Grant Number JP22KJ2603.

### Notation

Throughout the paper, we fix positive integers \(m\) and \(n\). Given two integers \(a\) and \(b\), let \([a,b]\) denote the integer interval:
\[[a,b]:=\{c\in\mathbb{Z}\mid a\leq c\leq b\}.\]

## 2. Preliminaries from combinatorics

In this section, we collect terminology and basic results concerning partitions and tableaux which are necessary to formulate our main algorithm in the next section.

### Partitions

A _partition_ is a non-increasing sequence
\[\lambda=(\lambda_{1},\dots,\lambda_{l})\]
of positive integers. We often regard a non-increasing sequence of nonnegative integers as a partition by ignoring the zeros. Each \(\lambda_{i}\) is referred to as a _part_ of \(\lambda\). It is convenient to set \(\lambda_{i}:=0\) for \(i>l\). The sum \(\sum_{i=1}^{l}\lambda_{i}\) of parts is called the _size_ of \(\lambda\), and is denoted by \(|\lambda|\). The number \(l\) of parts of \(\lambda\) is called the _length_ of \(\lambda\), and is denoted by \(\ell(\lambda)\). We regard the empty sequence \(()\) as the unique partition of length \(0\). Let Par denote the set of all partitions. For each \(l\in\mathbb{Z}_{\geq 0}\), let \(\text{Par}_{\leq l}\) denote the set of all partitions of length at most \(l\). For each \(l\in\mathbb{Z}_{\geq 0}\), let \(\varpi_{l}\) denote the partition of length \(l\) whose parts are all \(1\):
\[\varpi_{l}=(1^{l})=(\underbrace{1,\dots,1}_{l}) \tag{2.1}\]
Until the end of this subsection, let us fix a partition \(\lambda\). The _Young diagram_ of shape \(\lambda\) is the set
\[D(\lambda):=\{(i,j)\mid 1\leq i\leq\ell(\lambda)\text{ and }1\leq j\leq\lambda_{i}\}.\]
As usual, we visualize it by a collection of boxes; e.g.,
\[D(4,3,2,2,1)=\yng(4,3,2,2,1)\,.\]
A _tableau_ of shape \(\lambda\) is a map \(T:D(\lambda)\to\mathbb{Z}_{>0}\), where \(T(i,j)\) denotes the entry in the box \((i,j)\). For each \(j\in[1,\lambda_{1}]\), set
\[w_{j}^{\rm col}(T):=(T({\rm col}_{j}(\lambda),j),\ldots,T(2,j),T(1,j))\]
(see (2.2) for the definition of \({\rm col}_{j}\)). It is the sequence of entries in the \(j\)-th column of \(T\) read from bottom to top. The _column word_ of \(T\) is the sequence \(w^{\rm col}(T)\) of entries obtained by concatenating the \(w_{j}^{\rm col}(T)\)'s:
\[w^{\rm col}(T):=w_{1}^{\rm col}(T)\circ\cdots\circ w_{\lambda_{1}}^{\rm col}(T). \tag{2.5}\]
For example, if \(T\) is the tableau in (2.3), then
\[w^{\rm col}(T)=(6,5,4,2,1,6,4,2,1,3,2,4).\]
The tableau \(T\) is said to be _semistandard_ if the entries increase weakly from left to right along the rows, and strictly from top to bottom along the columns.
Namely, \[T(i,j)\leq T(i,j+1)\ {\rm and}\ T(i,j)<T(i+1,j)\ \ {\rm for\ all}\ (i,j)\in D(\lambda),\] where we set \(T(i^{\prime},j^{\prime}):=\infty\) if \((i^{\prime},j^{\prime})\notin D(\lambda)\). For example, the tableau in (2.3) is semistandard. Let \({\rm SST}_{m}(\lambda)\) denote the set of all semistandard tableaux of shape \(\lambda\) with entries in \([1,m]\). The generating function \[s_{\lambda}(x_{1},\ldots,x_{m}):=\sum_{T\in{\rm SST}_{m}(\lambda)}{\bf x}^{ \rm wt(T)}\in\mathbb{Z}[x_{1},\ldots,x_{m}] \tag{2.6}\] is called the _Schur function_, where \[{\rm wt}(T):=(T[1],\ldots,T[m]),\quad{\bf x}^{(a_{1},\ldots,a_{m})}:=x_{1}^{a _{1}}\cdots x_{m}^{a_{m}}. \tag{2.7}\] The Schur functions are symmetric polynomials: \[s_{\lambda}(x_{\sigma(1)},\ldots,x_{\sigma(m)})=s_{\lambda}(x_{1},\ldots,x_{m})\] for all permutation \(\sigma\) on \([1,m]\). For \(\mu\in{\rm Par}\) with \(\mu\subseteq\lambda\), a _tableau_ of shape \(\lambda/\mu\) is a map \[D(\lambda/\mu)\to\mathbb{Z}_{>0}.\] Let \({\rm Tab}(\lambda/\mu)\) denote the set of all tableaux of shape \(\lambda/\mu\). The notion of semistandard tableaux of shape \(\lambda/\mu\) is defined in the obvious way. Let \({\rm SST}_{m}(\lambda/\mu)\) denote the set of all semistandard tableaux of shape \(\lambda/\mu\) with entries in \([1,m]\). For example, the following is a semistandard tableau of shape \((4,3,2,2,1)/(2,2,1)\): \[\begin{array}{c}\framebox{$2\,\,\,\,\framebox{$2\,\,\,\,\framebox{$3$}}$}\\ \framebox{$2\,\,\,\,\framebox{$4$}$}\\ \framebox{$6$}\end{array}\] ### Plactic monoid The set of all semistandard tableaux with entries in \([1,m]\) forms a monoid, called the _plactic monoid_ (_cf._[20, Sections 1.1 and A.2]). For the reader's convenience, we recall here its definition. In order to describe the monoid structure, we need to introduce the _column insertion algorithm_, which receives a pair \((w,T)\) of a positive integer \(w\) and a semistandard tableau \(T\) as an input, and returns a new semistandard tableau \(w\to T\) as an output as follows. Set \(\lambda:={\rm sh}(T)\), and \[w_{0}:=w,\quad T_{0}:=T.\] For each \(j>0\), given a pair \((w_{j-1},T_{j-1})\), set \[\begin{split} r_{j}&:=\min\{r\in[1,\operatorname{col} _{j}(\lambda)+1]\mid T(r,j)\geq w_{j-1}\},\\ w_{j}&:=T(r_{j},j),\end{split} \tag{2.8}\] where we set \(T(i^{\prime},j^{\prime})=\infty\) if \((i^{\prime},j^{\prime})\notin D(\lambda)\) (see (2.2) for the definition of \(\operatorname{col}_{j}\)). Also set \(\lambda^{j}\) to be the partition such that \[D(\lambda^{j})=\begin{cases}D(\lambda)&\text{ if }r_{j}\leq\operatorname{col} _{j}(\lambda),\\ D(\lambda)\sqcup\{(r_{j},j)\}&\text{ if }r_{j}=\operatorname{col}_{j}( \lambda)+1,\end{cases}\] and \(T_{j}\) to be the semistandard tableau of shape \(\lambda^{j}\) such that \[T_{j}(i^{\prime},j^{\prime})=\begin{cases}w_{j-1}&\text{ if }(i^{\prime},j^{ \prime})=(r_{j},j),\\ T_{j-1}(i^{\prime},j^{\prime})&\text{ if }(i^{\prime},j^{\prime})\neq(r_{j},j), \end{cases}\text{ for all }(i^{\prime},j^{\prime})\in D(\lambda^{j}).\] Let \(s\geq 1\) denote the minimal integer such that \(r_{j}=\operatorname{col}_{j}(\lambda)+1\). Then, the semistandard tableau \(T_{s}\) is the one \(w\to T\). The sequence \[\operatorname{br}(w,T):=(r_{1},\dots,r_{s})\] is called the _bumping route_. 
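For the reader's convenience, we also give a short Python sketch of the column insertion algorithm just described (our own illustrative code, not taken from the references); the tableau is stored as a list of rows, and the function returns the new tableau \(w\to T\) together with the bumping route \(\operatorname{br}(w,T)\).

```python
def column_insert(w, rows):
    """Column insertion w -> T.  `rows` is a semistandard tableau given as a
    list of rows (lists of integers); returns (new_rows, bumping_route),
    with the bumping route recorded as 1-based row indices r_1, ..., r_s."""
    width = len(rows[0]) if rows else 0
    # cols[j] is the j-th column of T, read from top to bottom
    cols = [[row[j] for row in rows if len(row) > j] for j in range(width)]
    route, x, j = [], w, 0
    while True:
        if j == len(cols):          # we have reached a new, empty column
            cols.append([])
        col = cols[j]
        # r_j: least row whose entry is >= x, or one past the bottom of the column
        r = next((i for i, entry in enumerate(col) if entry >= x), len(col))
        route.append(r + 1)
        if r == len(col):           # x lands in a new box (r_j = col_j + 1): stop
            col.append(x)
            break
        col[r], x = x, col[r]       # x displaces the entry, which moves to the next column
        j += 1
    height = max(len(c) for c in cols)
    new_rows = [[c[i] for c in cols if len(c) > i] for i in range(height)]
    return new_rows, route

# A small illustration (not Example 2.3.1): inserting 3 into the tableau with
# rows (1,2,2) and (2,3) appends a new box at the bottom of the first column.
print(column_insert(3, [[1, 2, 2], [2, 3]]))   # ([[1, 2, 2], [2, 3], [3]], [3])
```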
**Example 2.3.1**.: Let \(w=7\) and \[T=\begin{array}{c|c|c}\hline\framebox{1}&\framebox{2}&\framebox{3}\\ \framebox{3}&\framebox{4}&\framebox{5}\\ \framebox{4}&\framebox{5}&\framebox{6}\\ \framebox{6}&\framebox{6}&\framebox{9}\\ \framebox{7}&\framebox{7}&\framebox{8}\\ \framebox{10}\end{array}\] Then, we have \[w\to T=\begin{array}{c|c|c}\hline\framebox{1}&\framebox{2}&\framebox{3}\\ \framebox{3}&\framebox{4}&\framebox{5}\\ \framebox{4}&\framebox{5}&\framebox{6}\\ \framebox{6}&\framebox{7}&\framebox{7}&\framebox{10}\\ \framebox{8}&\framebox{8}\\ \framebox{10}\end{array}\end{array},\quad\operatorname{br}(w,T)=(5,5,4,2).\] The following proposition can be straightforwardly deduced from the definitions. **Proposition 2.3.2**.: _Let \(w,T,r_{1},\dots,r_{s}\) be as above. Set \(S:=w\to T\) and \(\mu:=\operatorname{sh}(S)\). Then, the following hold._ 1. \(r_{1}\geq\dots\geq r_{s}\)__ 2. \(w\leq T(r_{1},1)\leq T(r_{2},2)\leq\dots\leq T(r_{s-1},s-1)<T(r_{s},s)=\infty\)_._ 3. \(\lambda\subset\mu\)_. Moreover,_ \(D(\mu/\lambda)=\{(r_{s},s)\}\)_._ 4. _For each_ \((i,j)\in D(\mu)\)_, we have_ \[S(i,j)=\begin{cases}w&\text{ if }(i,j)=(r_{1},1),\\ T(r_{j-1},j-1)&\text{ if }j\in[2,s]\text{ and }i=r_{j},\\ T(i,j)&\text{ otherwise}.\end{cases}\] Given two semistandard tableaux \(S\) and \(T\), their product \(S*T\) in the plactic monoid is given by \[S*T:=w_{1}\to(\dots\to(w_{r}\to T)\cdots)\] where \((w_{1},\dots,w_{r})=w^{\operatorname{col}}(S)\) (see (2.5) for the definition of \(w^{\operatorname{col}}\)). Regarding bumping routes of successive insertions, the following is known. **Proposition 2.3.3** ([12, Exercise 3 in Section A.2]).: _Let \(\lambda\in\mathrm{Par}_{\leq m}\), \(T\in\mathrm{SST}_{m}(\lambda)\), and \(w,w^{\prime}\in[1,m]\) be such that \(w<w^{\prime}\). Let us write_ \[\mathrm{br}(w,T)=(r_{1},\ldots,r_{s}),\quad\mathrm{br}(w^{\prime},(w\to T))=(r_{ 1}^{\prime},\ldots,r_{s^{\prime}}^{\prime}).\] _Then, we have \(s^{\prime}\leq s\) and \(r_{j}^{\prime}>r_{j}\) for all \(j\in[1,s^{\prime}]\)._ The following is known as (a combinatorial version of) Pieri's formula. **Proposition 2.3.4**.: _Let \(\lambda\in\mathrm{Par}_{\leq m}\) and \(k\in[0,m]\). The assignment \((S,T)\to S\ast T\) gives rise to a bijection_ \[\ast:\mathrm{SST}_{m}(\varpi_{k})\times\mathrm{SST}_{m}(\lambda)\to\bigsqcup_ {\begin{subarray}{c}\mu\in\mathrm{Par}_{\leq m}\\ \lambda\subseteq\mu\text{ and }[\mu/\lambda]=k\end{subarray}} \mathrm{SST}_{m}(\mu).\] ### Symplectic tableaux In this subsection, we fix a partition \(\lambda\in\mathrm{Par}_{\leq 2n}\). **Definition 2.4.1** ([12, Section 4]).: A semistandard tableau \(T\in\mathrm{SST}_{2n}(\lambda)\) is said to be _symplectic_ if \[T(k,1)\geq 2k-1\ \text{ for all }k\in[1,\ell(\lambda)].\] Let \(Sp\mathrm{T}_{2n}(\lambda)\) denote the set of all symplectic tableaux of shape \(\lambda\). **Proposition 2.4.2**.: _If \(Sp\mathrm{T}_{2n}(\lambda)\neq\emptyset\), then \(\ell(\lambda)\leq n\)._ Proof.: Assume contrary that \(\ell(\lambda)>n\). Let \(T\in Sp\mathrm{T}_{2n}(\lambda)\). Then, we have \[T(n+1,1)\geq 2(n+1)-1=2n+1>2n.\] This contradicts that the entries of \(T\) are in \([1,2n]\). Thus, the assertion follows. For each \(\nu\in\mathrm{Par}_{\leq n}\), the generating function \[s_{\nu}^{Sp}(y_{1},\ldots,y_{n}):=\sum_{T\in\mathrm{Sp}\mathrm{T}_{2n}(\nu)} \mathbf{y}^{\mathrm{wt}^{Sp}(T)}\in\mathbb{Z}[y_{1}^{\pm 1},\ldots,y_{n}^{\pm 1}] \tag{2.9}\] is called the _symplectic Schur function_, where \[\mathrm{wt}^{Sp}(T):=(T[1]-T[2],T[3]-T[4],\ldots,T[2n-1]-T[2n]). 
\tag{2.10}\] The symplectic Schur functions are linearly independent. **Lemma 2.4.3**.: _Let \(T\in\mathrm{SST}_{2n}(\lambda)\). If \(T\) is not symplectic, then there exists a unique \(i\in[2,2n]\) such that_ \[T(i,1)<2i-1\text{ and }T(k,1)\geq 2k-1\text{ for all }k\in[1,i-1].\] _Moreover, we have_ \[T(i,1)=2i-2\text{ and }T(i-1,1)=2i-3.\] Proof.: Since \(T\) is not symplectic, there exists \(i\in[1,\ell(\lambda)]\) such that \(T(i,1)<2i-1\) (note that the number \(i\) cannot be \(1\) since \(T(1,1)\) is always greater than or equal to \(1\) (\(=2\cdot 1-1\))). We may take the minimal \(i\) among such integers. Then, the first assertion is clear. Now, we have \[2i-3=2(i-1)-1\leq T(i-1,1)<T(i,1)<2i-1.\] This implies the second assertion. ### Reduction map For each \(l\in[0,m]\), a tableau in \(\operatorname{SST}_{m}(\varpi_{l})\) (see (2.1) for the definition of \(\varpi_{l}\)) can be represented by an increasing sequence \((a_{1},\dots,a_{l})\) of integers in \([1,m]\). We often regard such sequences \(\mathbf{a}\), \(\mathbf{b}\), etc. as sets, and consider their cardinalities \(|\mathbf{a}|\), disjoint unions \(\mathbf{a}\sqcup\mathbf{b}\), set differences \(\mathbf{a}\setminus\mathbf{b}\), and so on. In this subsection, we fix \(l\in[0,2n]\) and \(\mathbf{a}=(a_{1},\dots,a_{l})\in\operatorname{SST}_{2n}(\varpi_{l})\). **Definition 2.5.1**.: The set of _removable entries_ of \(\mathbf{a}\) is the subset \(\operatorname{rem}(\mathbf{a})\) of \(\mathbf{a}\) defined as follows. 1. If \(l\leq 1\), then \(\operatorname{rem}(\mathbf{a})=\emptyset\). 2. If \(l>1\), then \[\operatorname{rem}(\mathbf{a})=\begin{cases}\operatorname{rem}(a_{1},\dots,a_ {l-2})\sqcup\{a_{l-1},a_{l}\}&\text{ if }a_{l}\in 2\mathbb{Z},\;a_{l-1}=a_{l}-1,\text{ and}\\ &a_{l}<2l-|\operatorname{rem}(a_{1},\dots,a_{l-2})|-1,\\ \operatorname{rem}(a_{1},\dots,a_{l-1})&\text{ otherwise.}\end{cases}\] **Definition 2.5.2**.: The _reduction map_ on \(\operatorname{SST}_{2n}(\varpi_{l})\) is the map \[\operatorname{red}:\operatorname{SST}_{2n}(\varpi_{l})\to\bigsqcup_{k=0}^{2n} \operatorname{SST}_{2n}(\varpi_{k});\;\mathbf{a}\mapsto\mathbf{a}\setminus \operatorname{rem}(\mathbf{a}).\] For example, if we take \(\mathbf{a}=(1,3,4,5,6,7,11,12,13,14)\), then we have \[\operatorname{rem}(\mathbf{a})=\{3,4,5,6,13,14\},\quad\operatorname{red}( \mathbf{a})=(1,7,11,12).\] The following is immediate from the definition. **Proposition 2.5.3**.: 1. _If_ \(a_{l}\in\operatorname{rem}(\mathbf{a})\)_, then_ \(a_{l}\in 2\mathbb{Z}\)_._ 2. _If_ \(a_{l}\notin\operatorname{rem}(\mathbf{a})\)_, then_ \(\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{l-1})\)_._ 3. _If_ \(a_{l}\in\operatorname{rem}(\mathbf{a})\)_, then_ \(\operatorname{rem}(a_{1},\dots,a_{l-1})=\operatorname{rem}(a_{1},\dots,a_{l-2})\)_._ 4. \(\operatorname{rem}(a_{1},\dots,a_{k})\subseteq\operatorname{rem}(\mathbf{a})\) _for all_ \(k\in[0,l]\)_._ ### Successor map In this subsection, we fix \(\lambda\in\operatorname{Par}_{\leq 2n}\), and set \(l:=\ell(\lambda)\). Let \(\lambda^{\prime}\) denote the partition \((\lambda_{1}-1,\dots,\lambda_{l}-1)\). Let us define a map \[d:\operatorname{SST}_{2n}(\lambda)\to\operatorname{SST}_{2n}(\varpi_{l}) \times\operatorname{SST}_{2n}(\lambda^{\prime}) \tag{2.11}\] as follows. 
For each \(T\in\operatorname{SST}_{2n}(\lambda)\), the image \(d(T)\) is the pair \((\mathbf{a},T^{\prime})\) consisting of the first column \(\mathbf{a}\) of \(T\): \[\mathbf{a}=(T(1,1),T(2,1),\dots,T(l,1)),\] and the other part \(T^{\prime}\): \[T^{\prime}(i,j)=T(i,j+1)\;\text{ for all }(i,j)\in D(\lambda^{\prime}).\] **Definition 2.6.1**.: The _successor map_ is the composite \[\operatorname{suc}:=*\circ(\operatorname{red},\operatorname{id})\circ d: \operatorname{SST}_{2n}(\lambda)\to\bigsqcup_{\mu\in\operatorname{Par}_{ \leq 2n}}\operatorname{SST}_{2n}(\mu),\] where \(*\) denotes the multiplication map of the plactic monoid. **Example 2.6.2**.: Let \[T:=\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 3&3&4&5\\ \hline 4&4&5&6\\ \hline 5&6&6&9\\ \hline 6&7&7&10\\ \hline 7&8&8\\ \hline\frac{11}{10}\\ \frac{13}{14}\\ \hline\end{array}\] Then, we have \[\operatorname{suc}(T)=\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 7&3&4&5\\ \hline 11&4&5&6\\ \hline 7&8&8\\ \hline\frac{13}{14}\\ \hline\end{array}=\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 3&4&5&9\\ \hline 4&5&6\\ \hline 6&6&7\\ \hline 7&7&10\\ \hline 8&8\\ \hline\end{array},\quad\operatorname{suc}^{2}(T)=\begin{array}{c|c|c|c|} \hline 1&2&2&3\\ \hline 6&4&5&9\\ \hline 6&5&6\\ \hline 7&7&10\\ \hline 8&8\\ \hline\end{array}=\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 4&5&6&9\\ \hline 7&10\\ \hline 8&1\\ \hline\end{array},\] \[\operatorname{suc}^{3}(T)=\begin{array}{c|c|c|c|}\hline 1&4&5&6&9\\ \hline 4&0&6\\ \hline 7&10\\ \hline 10&0\\ \hline\end{array}=\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 4&5&6&9\\ \hline 7&10\\ \hline 8&1\\ \hline\end{array},\quad\operatorname{suc}^{4}(T)=\begin{array}{c|c|c|c|} \hline 1&2&2&3\\ \hline 4&5&6&9\\ \hline 6&10\\ \hline 7&10\\ \hline\end{array},\quad\begin{array}{c|c|c|c|}\hline 1&2&2&3\\ \hline 4&5&6&9\\ \hline 7&10\\ \hline\end{array},\] **Lemma 2.6.3**.: _Let \(T\in\operatorname{SST}_{2n}(\lambda)\). Set \(d(T)=(\mathbf{a},T^{\prime})\), and write \(\mathbf{a}=(a_{1},\ldots,a_{l})\) and \(\operatorname{red}(\mathbf{a})=\mathbf{b}=(b_{1},\ldots,b_{k})\). Set \(S^{0}:=T^{\prime}\), \(S^{t}:=b_{t}\to S^{t-1}\), and \(\mu^{t}:=\operatorname{sh}(S^{t})\). Let us write \(\operatorname{br}(b_{t},S^{t-1})=(r_{t,1},\ldots,r_{t,s_{t}})\). Let \(r_{t,0}\) be such that \(a_{r_{t,0}}=b_{t}\). Then, the following hold for all \(t\in[1,k]\):_ 1. \(D(\mu^{t}/\lambda^{\prime})=\{(r_{u,s_{u}},s_{u})\mid u\in[1,t]\}\)_._ 2. \(S^{t}(i,j)=\begin{cases}T(r_{u,j-1},j)&\text{ if }u\in[1,t],\ j\in[1,s_{u}],\text{ and }i=r_{u,j},\\ T(i,j+1)&\text{ otherwise}.\end{cases}\)__ 3. \(s_{1}\geq s_{2}\geq\cdots\geq s_{k}\)_._ 4. \(r_{t,j}>r_{t-1,j}>\cdots>r_{1,j}\) _for all_ \(j\in[0,s_{t}]\)_._ 5. \(r_{t,j}=\min\{r\in[r_{t-1,j}+1,\operatorname{col}_{j}(\mu^{t-1})+1]\mid T(r,j+ 1)\geq T(r_{t,j-1},j)\}\) _for all_ \(j\in[1,s_{t}]\)_, where we set_ \(r_{0,j}=0\)_._ 6. \(r_{t,0}\geq r_{t,1}\geq\cdots\geq r_{t,s_{t}}\)_._ Proof.: The first and second assertions can be deduced from Proposition 2.3.2 (3) and (4) by induction on \(t\). The third and fourth assertions follow from Proposition 2.3.3. Next, let us prove the fifth assertion. For each \(j\in[0,s_{t}]\), set \[w_{t,j}:=\begin{cases}b_{t}&\text{ if }j=0,\\ S^{t-1}(r_{t,j},j)&\text{ if }j>0.\end{cases}\] Then, by equation (2.8), we have \[r_{t,j}=\min\{r\in[1,\operatorname{col}_{j}(\mu^{t-1})+1]\mid S^{t-1}(r,j) \geq w_{t,j-1}\}. \tag{2.12}\] This, together with the second and fourth assertions, implies our claim. 
Finally, let us prove the last assertion by induction on \(t\); we understand that \(s_{0}=0\) so that the assertion for \(t=0\) is clear. We have \[r_{t,1}\geq\cdots\geq r_{t,s_{t}}\] by Proposition 2.3.2 (1). Hence, we only need to show that \(r_{t,0}\geq r_{t,1}\). By equation (2.12), we have \[r_{t,1}\leq\operatorname{col}_{1}(\mu^{t-1})+1.\] Hence, there is nothing to prove if \(r_{t,0}>\operatorname{col}_{1}(\mu^{t-1})\). Therefore, suppose that \(r_{t,0}\leq\operatorname{col}_{1}(\mu^{t-1})\). Then, we have \[T(r_{t,0},1)\leq T(r_{t,0},2).\] Since \(a_{r_{t,0}}=b_{t}>b_{t-1}=a_{r_{t-1,0}}\), we see that \(r_{t,0}>r_{t-1,0}\). By our induction hypothesis, it holds that \(r_{t-1,0}\geq r_{t-1,1}\). Summarizing above, we have \[r_{t-1,1}<r_{t,0}\leq\operatorname{col}_{1}(\mu^{t-1})\text{ and }T(r_{t,0},2) \geq T(r_{t,0},1).\] Then, the fifth assertion implies that \(r_{t,1}\leq r_{t,0}\), as desired. **Lemma 2.6.4**.: _Let \(T\in\operatorname{SST}_{2n}(\lambda)\) and set \(\mu:=\operatorname{sh}(\operatorname{suc}(T))\). Then, we have \(\mu\underset{\text{vert}}{\subseteq}\lambda\). Moreover, the equality holds if and only if \(\operatorname{suc}(T)=T\)._ Proof.: We use the notation in Lemma 2.6.3. Note that \(S^{k}=\operatorname{suc}(T)\) and \(\mu^{k}=\mu\). By Proposition 2.3.4, we see that \[\lambda^{\prime}\underset{\text{vert}}{\subseteq}\mu\text{ and }|\mu/ \lambda^{\prime}|=|\operatorname{red}(\mathbf{a})|.\] Therefore, in order to prove that \(\mu\underset{\text{vert}}{\subseteq}\lambda\), we only need to show that \(\ell(\mu)\leq l\) (see Lemma 2.1.3). For each \(t\in[0,k]\), by Lemma 2.6.3 (1) we have \[\ell(\mu)=\ell(\lambda^{\prime})+\sharp\{t\in[1,k]\mid s_{t}=1\}. \tag{2.13}\] Let \(u\in[1,k]\) be such that \(r_{u,0}\leq\ell(\lambda^{\prime})\). Then, we have \[r_{u,1}\leq r_{u,0}\leq\ell(\lambda^{\prime})\leq\ell(\mu^{u-1})=\operatorname {col}_{1}(\mu^{u-1}),\] where the first inequality follows from Lemma 2.6.3 (6). This implies that \(s_{u}>1\). Hence, we see that \[\sharp\{t\in[1,k]\mid s_{u}=1\}\leq k-\ell(\lambda^{\prime}).\] This, together with equation (2.13), implies the first assertion. Next, let us prove the second assertion. Suppose that \(\mu=\lambda\). In this case, we must have \(\operatorname{red}(\mathbf{a})=\mathbf{a}\) since \[0=|\lambda/\mu|=|\lambda/\lambda^{\prime}|-|\mu/\lambda^{\prime}|=l-| \operatorname{red}(\mathbf{a})|=|\operatorname{rem}(\mathbf{a})|.\] On the other hand, it is clear that \[\mathbf{a}*T^{\prime}=T.\] Therefore, we obtain \(\operatorname{suc}(T)=T\), as desired. The opposite direction is trivial. ## 3. Main result ### Statement Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(T\in\operatorname{SST}_{2n}(\lambda)\). For each \(k\geq 0\), define a semistandard tableau \(P^{k}\), a partition \(\nu^{k}\), and a tableau \(Q^{k}\) inductively as follows. First, set \[P^{0}:=T,\quad\nu^{0}:=\lambda,\] and \(Q^{0}\) to be the unique tableau of shape \(\lambda/\lambda\). 
For \(k\geq 0\), set \[P^{k+1}:=\operatorname{suc}(P^{k}),\quad\nu^{k+1}:=\operatorname{sh}(P^{k+1}),\] and \(Q^{k+1}\) to be the tableau of shape \(\lambda/\nu^{k+1}\) given by \[Q^{k+1}(i,j):=\begin{cases}Q^{k}(i,j)&\text{ if }(i,j)\notin D(\nu^{k}),\\ k+1&\text{ if }(i,j)\in D(\nu^{k}).\end{cases}\] Note that by Lemma 2.6.4, there exists a unique \(k_{0}\geq 0\) such that \[\lambda=\nu^{0}\underset{\text{\rm vert}}{\supset}\nu^{1}\underset{\text{\rm vert }}{\supset}\nu^{2}\underset{\text{\rm vert}}{\supset}\cdots\underset{\text{ \rm vert}}{\supset}\nu^{k_{0}}\] and \[P^{k}=P^{k_{0}},\ \nu^{k}=\nu^{k_{0}},\ Q^{k}=Q^{k_{0}}\ \ \text{for all }k\geq k_{0}.\] **Definition 3.1.1**.: Let \(T,k_{0}\) be as above. Define two tableaux \(P^{A\text{\rm II}}(T)\) and \(Q^{A\text{\rm III}}(T)\) to be \(P^{k_{0}}\) and \(Q^{k_{0}}\), respectively. See Example 1.2.1 for an example of this algorithm. The following is immediate from the definition. **Lemma 3.1.2**.: _Let \(\lambda,T,\nu^{k},k_{0}\) be as above. Set \(\nu:=\nu^{k_{0}}\)._ 1. _For each_ \((i,j)\in D(\lambda/\nu)\)_, we have_ \[Q^{A\text{\rm II}}(T)(i,j)=\min\{k\in[1,k_{0}]\mid(i,j)\notin D(\nu^{k})\}.\] 2. _For each_ \(k\geq 0\)_, we have_ \[D(\nu^{k})=D(\nu)\sqcup\{(i,j)\in D(\lambda/\nu)\mid Q^{A\text{\rm II}}(T)(i, j)>k\}.\] **Definition 3.1.3**.: Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(\nu\in\operatorname{Par}_{\leq n}\) be such that \(\nu\subseteq\lambda\). A tableau \(Q\) of shape \(\lambda/\nu\) is said to be a _recording tableau_ if there exists \(T\in\operatorname{SST}_{2n}(\lambda)\) such that \(Q^{A\text{\rm III}}(T)=Q\). Let \(\operatorname{Rec}_{2n}(\lambda/\nu)\) denote the set of recording tableaux of shape \(\lambda/\nu\). Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(\nu\in\operatorname{Par}_{\leq n}\) be such that \(\nu\subseteq\lambda\). Let \(\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\) denote the set of tableaux \(Q\) of shape \(\lambda/\nu\) satisfying the following: 1. The entries of \(Q\) strictly decrease along the rows from left to right. 2. The entries of \(Q\) weakly decrease along the columns from top to bottom. 3. For each \(k>0\), the number \(Q[k]\) (see (2.4) for the definition) is even. 4. For each \(k>0\), it holds that \[Q[k]\geq 2(\ell(\nu^{k-1})-n),\] where \(\nu^{k-1}\) is the partition such that \[D(\nu^{k-1})=D(\nu)\sqcup\{(i,j)\in D(\lambda/\nu)\mid Q(i,j)\geq k\}.\] 5. For each \(r,k>0\), let \(Q_{\leq r}[k]\) denote the number of occurrences of \(k\) in \(Q\) in the \(r\)-th row or above. Then, the following inequality holds: \[Q_{\leq r}[k+1]\leq Q_{\leq r}[k].\] Now, we are ready to state the main result in this paper. **Theorem 3.1.4**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\)._ 1. _The assignment_ \(T\mapsto(P^{A\text{\rm II}}(T),Q^{A\text{\rm II}}(T))\) _gives rise to a bijection_ \[\operatorname{LR}^{A\text{\rm III}}:\operatorname{SST}_{2n}(\lambda)\to \bigsqcup_{\begin{subarray}{c}\nu\in\operatorname{Par}_{\leq n}\\ \nu\subseteq\lambda\end{subarray}}(Sp\mathrm{T}_{2n}(\nu)\times \operatorname{Rec}_{2n}(\lambda/\nu)).\] _._ 2. _For each_ \(\nu\in\mathrm{Par}_{\leq n}\) _such that_ \(\nu\subseteq\lambda\)_, we have_ \[\mathrm{Rec}_{2n}(\lambda/\nu)=\widetilde{\mathrm{Rec}}_{2n}(\lambda/\nu).\] **Definition 3.1.5**.: We call the map \(\mathrm{LR}^{\mathrm{AII}}\) in Theorem 3.1.4 the _Littlewood-Richardson map_. The rest of this paper is devoted to proving the theorem. Since the proof is involved, we give an outline in the next subsection. 
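Before turning to the proof, we note that conditions (1)-(5) above are purely combinatorial and easy to test mechanically. The following Python sketch (illustrative code of ours, not part of the results) checks whether a filling \(Q\) of \(D(\lambda/\nu)\) satisfies them; applied to the recording tableau \(Q^{A\mathrm{II}}(T)\) of Example 1.2.1 it returns `True`.

```python
def is_recording_candidate(Q, lam, nu, n):
    """Check conditions (1)-(5) of Section 3.1 for a filling Q of D(lam/nu).
    Q is a dict {(i, j): entry} with rows i and columns j indexed from 1;
    lam and nu are partitions given as lists of parts."""
    nu_part = lambda i: nu[i - 1] if i <= len(nu) else 0
    boxes = {(i, j) for i in range(1, len(lam) + 1)
             for j in range(nu_part(i) + 1, lam[i - 1] + 1)}
    if set(Q) != boxes:
        return False
    kmax = max(Q.values(), default=0)
    for (i, j), q in Q.items():
        if (i, j + 1) in Q and not q > Q[(i, j + 1)]:     # (1) rows strictly decrease
            return False
        if (i + 1, j) in Q and not q >= Q[(i + 1, j)]:    # (2) columns weakly decrease
            return False
    for k in range(1, kmax + 1):
        count_k = sum(1 for q in Q.values() if q == k)
        if count_k % 2 != 0:                              # (3) Q[k] is even
            return False
        # (4) Q[k] >= 2(l(nu^{k-1}) - n), where nu^{k-1} adds to nu the boxes of
        #     lam/nu whose entry is >= k; we count the nonempty rows of its diagram
        rows = {i for i in range(1, len(nu) + 1) if nu[i - 1] > 0}
        rows |= {i for (i, j), q in Q.items() if q >= k}
        if count_k < 2 * (len(rows) - n):
            return False
    for r in range(1, len(lam) + 1):                      # (5) Q_{<=r}[k+1] <= Q_{<=r}[k]
        upto = [q for (i, j), q in Q.items() if i <= r]
        for k in range(1, kmax):
            if upto.count(k + 1) > upto.count(k):
                return False
    return True

# The recording tableau from Example 1.2.1:
Q = {(1, 4): 1, (2, 2): 2, (2, 3): 1, (3, 1): 3, (3, 2): 2,
     (4, 1): 3, (4, 2): 1, (5, 1): 1}
print(is_recording_candidate(Q, [4, 3, 2, 2, 1], [3, 1], 3))   # True
```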
### Outline of the proof First, we reformulate the reduction map as a composite of certain maps (Corollary 4.4.2 (1)). As a result, we see that the reduction map on \(\mathrm{SST}_{2n}(\varpi_{l})\) is injective (Corollary 4.4.2 (2)). Next, by studying some properties of the reduction map, we deduce that the tableau \(P^{\mathrm{AII}}(T)\) is symplectic (Proposition 7.1.1). Also, we prove the injectivity of the Littlewood-Richardson map by using the successor map (Proposition 7.1.3). To prove the surjectivity of the Littlewood-Richardson map, we use representation theory of a quantum symmetric pair \((\mathbf{U},\mathbf{U}^{\mathrm{t}})\) of type \(A\mathrm{II}_{2n-1}\). It is known that for each \(\lambda\in\mathrm{Par}_{\leq 2n}\), there exists an irreducible \(\mathbf{U}\)-module \(V(\lambda)\) with basis of the form \(\{b_{T}\mid T\in\mathrm{SST}_{2n}(\lambda)\}\). Similarly, for each \(\nu\in\mathrm{Par}_{\leq n}\), there exists an irreducible \(\mathbf{U}^{\mathrm{t}}\)-module \(V^{\mathrm{t}}(\nu)\). We show that it admits a distinguished basis of the form \(\{b_{T}^{\mathrm{t}}\mid T\in Sp\mathbb{T}_{2n}(\nu)\}\) (Proposition 6.4.1). Then, we lift the Littlewood-Richardson map to a \(\mathbf{U}^{\mathrm{t}}\)-module homomorphism \[V(\lambda)\to\bigoplus_{\begin{subarray}{c}\nu\in\mathrm{Par}_{\leq n}\\ \nu\subseteq\lambda\end{subarray}}(V^{\mathrm{t}}(\nu)\otimes\mathbb{Q}(q) \mathrm{Rec}_{2n}(\lambda/\nu))\] which maps \(b_{T}\) to \(b_{P^{\mathrm{AII}}(T)}\otimes Q^{A\mathrm{II}}(T)\) modulo \(q^{-1}\) for all \(T\in\mathrm{SST}_{2n}(\lambda)\) (Proposition 6.5.2). Here, each \(x\in\mathbf{U}^{\mathrm{t}}\) acts on each summand of the right-hand side as \(x\otimes\mathrm{id}\). The complete reducibility of the right-hand side implies that this homomorphism is surjective (Theorem 7.2.1). Hence, we conclude that the Littlewood-Richardson map is surjective (Corollary 7.2.2). Finally, by studying the successor map in detail, we verify that \(\mathrm{Rec}_{2n}(\lambda/\nu)\subseteq\widetilde{\mathrm{Rec}}_{2n}(\lambda/\nu)\) (Lemma 8.3.1). On the other hand, we show that there is an injective map from \(\widetilde{\mathrm{Rec}}_{2n}(\lambda/\nu)\) to the set \(\mathrm{LRT}_{2n}^{Sp}(\lambda/\nu)\) of symplectic Littlewood-Richardson tableaux (Lemma 8.3.2). Then, we conclude that \(\mathrm{Rec}_{2n}(\lambda/\nu)=\widetilde{\mathrm{Rec}}_{2n}(\lambda/\nu)\) (Theorem 8.3.3) by proving that both \(|\mathrm{Rec}_{2n}(\lambda/\nu)|\) and \(|\mathrm{LRT}_{2n}^{Sp}(\lambda/\nu)|\) are equal to the multiplicity \(m_{\lambda,\nu}\) by Sundaram's branching rule (Theorem 8.2.3) and the bijectivity of the Littlewood-Richardson map. ## 4. Factorization of the reduction map The aim of this section is to prove Corollary 4.4.2, which describes the reduction map as a composite of several maps. The other part is devoted to preparing the proof. ### Combinatorial \(R\)-matrices Recall from (2.1) the partitions \(\varpi_{l}\) for \(l\geq 0\). **Definition 4.1.1** (_cf_. [10, Rule 3.10]).: Let \(k,l\in[0,m]\). The _combinatorial \(R\)-matrix_ is the map \[R=R_{k,l}:\mathrm{SST}_{m}(\varpi_{k})\times\mathrm{SST}_{m}(\varpi_{l}) \to\mathrm{SST}_{m}(\varpi_{l})\times\mathrm{SST}_{m}(\varpi_{k})\] defined as follows. Let \(\mathbf{a}=(a_{1},\dots,a_{k})\in\mathrm{SST}_{m}(\varpi_{k})\) and \(\mathbf{b}=(b_{1},\dots,b_{l})\in\mathrm{SST}_{m}(\varpi_{l})\). 1. When \(k\leq l\). For each \(r\in[1,k]\), define \(i_{r}\in[1,l]\) inductively as follows. 
Set \(i_{1}\) to be the minimum \(i\in[1,l]\) such that \(b_{i}\geq a_{1}\); when such \(i\) does not exist, we set \(i_{1}:=1\). Suppose that \(r\geq 2\) and we have determined \(i_{1},\ldots,i_{r-1}\). Set \(i_{r}\) to be the minimum \(i\in[1,l]\setminus\{i_{1},\ldots,i_{r-1}\}\) such that \(b_{i}\geq a_{r}\); when such \(i\) does not exist, we set \(i_{r}:=\min([1,l]\setminus\{i_{1},\ldots,i_{r-1}\})\). Then, we set \[R(\mathbf{a},\mathbf{b}):=(\mathbf{a}\sqcup\mathbf{b}^{\prime\prime},\mathbf{b}^{\prime}),\] where \[\mathbf{b}^{\prime}:=(b_{i_{1}},\ldots,b_{i_{k}}),\quad\mathbf{b}^{\prime\prime}:=\mathbf{b}\setminus\mathbf{b}^{\prime}.\] 2. When \(k\geq l\). For each \(r\in[1,l]\), define \(i_{r}\in[1,k]\) inductively as follows. Set \(i_{1}\) to be the maximum \(i\in[1,k]\) such that \(a_{i}\leq b_{1}\); when such \(i\) does not exist, we set \(i_{1}:=k\). Suppose that \(r\geq 2\) and we have determined \(i_{1},\ldots,i_{r-1}\). Set \(i_{r}\) to be the maximum \(i\in[1,k]\setminus\{i_{1},\ldots,i_{r-1}\}\) such that \(a_{i}\leq b_{r}\); when such \(i\) does not exist, we set \(i_{r}:=\max([1,k]\setminus\{i_{1},\ldots,i_{r-1}\})\). Then, we set \[R(\mathbf{a},\mathbf{b}):=(\mathbf{a}^{\prime},\mathbf{b}\sqcup\mathbf{a}^{\prime\prime}),\] where \[\mathbf{a}^{\prime}:=(a_{i_{1}},\ldots,a_{i_{l}}),\quad\mathbf{a}^{\prime\prime}:=\mathbf{a}\setminus\mathbf{a}^{\prime}.\] **Proposition 4.1.2** ([11, Proposition 3.21]).: _The map \(R_{k,l}\) is a bijection with inverse \(R_{l,k}\)._ Given an integer \(a\in[1,m]\), set \[a^{\vee}:=[1,m]\setminus\{a\}\in\mathrm{SST}_{m}(\varpi_{m-1}). \tag{4.1}\] **Lemma 4.1.3**.: _We have_ \[R(m,m^{\vee})=(1^{\vee},1).\] Proof.: With the same notation as Definition 4.1.1 (1), we see that \[i_{1}=1.\] Hence, the assertion follows. **Lemma 4.1.4**.: _Let \(t\in[1,m-1]\). Then, we have_ \[R(t^{\vee},(1,\ldots,t))=((1,\ldots,t-1,m),m^{\vee}).\] Proof.: With the same notation as Definition 4.1.1 (2), we see inductively that \[i_{1}=1,i_{2}=2,\ldots,i_{t-1}=t-1,\text{ and }i_{t}=m-1.\] Hence, the assertion follows. **Lemma 4.1.5**.: _Let \(\mathbf{a}=(a_{1},\ldots,a_{k})\in\mathrm{SST}_{m}(\varpi_{k})\) and \(\mathbf{b}=(b_{1},\ldots,b_{l})\in\mathrm{SST}_{m}(\varpi_{l})\) with \(k\leq l\). Suppose that for each \(r\in[1,k]\), there exists \(j_{r}\in[1,l]\) such that \(a_{r}=b_{j_{r}}\). Then, we have_ \[R(\mathbf{a},\mathbf{b})=(\mathbf{b},\mathbf{a})\] _and_ \[R(\mathbf{b},\mathbf{a})=(\mathbf{a},\mathbf{b}).\] Proof.: The first assertion is clear from Definition 4.1.1 (1). The second assertion follows from the first one and Proposition 4.1.2. **Lemma 4.1.6**.: _Let \(\mathbf{a}=(a_{1},\ldots,a_{k})\in\operatorname{SST}_{m}(\varpi_{k})\), and \(r,s\in[1,m]\) be such that \(a_{k}+1<r\leq s\). Then, we have_ \[R(s^{\vee},\mathbf{a}\sqcup[r,s])=(\mathbf{a}\sqcup[r-1,s-1],(r-1)^{\vee})\] _and_ \[R(\mathbf{a}\sqcup[r-1,s-1],(r-1)^{\vee})=(s^{\vee},\mathbf{a}\sqcup[r,s]).\] Proof.: With the same notation as Definition 4.1.1 (2), we see inductively that \[i_{1}=a_{1},\ldots,i_{k}=a_{k},i_{k+1}=r,i_{k+2}=r+1,\ldots,i_{k+s-r}=s-1,\text{ and }i_{k+s-r+1}=r-1.\] Hence, the first assertion follows. Now, the second assertion follows from Proposition 4.1.2. ### Some properties of removable entries In this subsection, we fix \(l\in[0,2n]\) and \(\mathbf{a}=(a_{1},\ldots,a_{l})\in\operatorname{SST}_{2n}(\varpi_{l})\). **Lemma 4.2.1**.: _Let \(i\in[1,l]\) be such that \(a_{j}\notin\operatorname{rem}(\mathbf{a})\) for all \(j\in[i,l]\).
Then, we have_ \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{i-1}).\] Proof.: The assertion is deduced by iterative applications of Proposition 2.5.3 (2). **Proposition 4.2.2**.: _For each \(i\in[1,l]\), we have \(a_{i}\in\operatorname{rem}(\mathbf{a})\) if and only if one of the following hold:_ 1. \(a_{i}\notin 2\mathbb{Z}\)_,_ \(i<l\)_,_ \(a_{i+1}=a_{i}+1\)_, and_ \(a_{i}<2i-|\operatorname{rem}(a_{1},\ldots,a_{i-1})|\)_._ 2. \(a_{i}\in 2\mathbb{Z}\)_,_ \(i>1\)_,_ \(a_{i-1}=a_{i}-1\)_, and_ \(a_{i}<2i-|\operatorname{rem}(a_{1},\ldots,a_{i-2})|-1\)_._ Proof.: We prove the assertion by induction on \(l\). If \(l\leq 1\), then the assertion is clear since we have \(\operatorname{rem}(\mathbf{a})=\emptyset\) and neither \(i<l\) nor \(i>1\) for all \(i\in[1,l]\). Hence, assume that \(l>1\) and the assertion holds for \(0,1,\ldots,l-1\). First, suppose that \[a_{l}\in 2\mathbb{Z},\ a_{l-1}=a_{l}-1,\text{ and }a_{l}<2l-|\operatorname{rem }(a_{1},\ldots,a_{l-2})|-1. \tag{4.2}\] By Definition 2.5.1, we have \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{l-2})\sqcup \{a_{l-1},a_{l}\}. \tag{4.3}\] When \(i\leq l-2\), equation (4.3) implies that \(a_{i}\in\operatorname{rem}(\mathbf{a})\) if and only if \(a_{i}\in\operatorname{rem}(a_{1},\ldots,a_{l-2})\). Hence, the assertion follows from our induction hypothesis. When \(i\geq l-1\), equation (4.3) implies that \(a_{i}\in\operatorname{rem}(\mathbf{a})\). On the other hand, condition (4.2) implies the first (resp., second) condition in the statement when \(i=l-1\) (resp., \(i=l\)). Therefore, the assertion follows in this case. Next, suppose that condition (4.2) fails. By Definition 2.5.1, it holds that \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{l-1}).\] Then, the assertion follows from our induction hypothesis. For each \(a\in\mathbb{Z}\), set \[s(a):=\begin{cases}a+1&\text{ if }a\notin 2\mathbb{Z},\\ a-1&\text{ if }a\in 2\mathbb{Z}\end{cases} \tag{4.4}\] **Lemma 4.2.3**.: _Let \(a\in[1,2n]\). Then, we have \(a\in\operatorname{rem}(\mathbf{a})\) if and only if \(s(a)\in\operatorname{rem}(\mathbf{a})\). Consequently, we have \(|\operatorname{rem}(\mathbf{a})|\in 2\mathbb{Z}\)._ Proof.: The assertion follows from Proposition 4.2.2. **Lemma 4.2.4**.: _Suppose that \(a_{l}\in 2\mathbb{Z}\) and that there exists \(r\in[0,l-1]\) such that \(a_{l-r}=a_{l}-r\in\operatorname{rem}(\mathbf{a})\). Then, we have \(a_{l-t}\in\operatorname{rem}(\mathbf{a})\) for all \(t\in[0,r]\). Moreover, if \(r\notin 2\mathbb{Z}\), then it holds that_ \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{l-r-1})\sqcup [a_{l}-r,a_{l}].\] Proof.: Since \(a_{l-r}=a_{l}-r\) and \(a_{l-r}<a_{l-r+1}<\dots<a_{l}\), we have \(a_{l-t}=a_{l}-t\) for all \(t\in[0,r]\). Let us prove for each \(t\in[1,r]\) that if \(a_{l-t}\in\operatorname{rem}(\mathbf{a})\), then \(a_{l-t+1}\in\operatorname{rem}(\mathbf{a})\). It is clear that this claim implies the first assertion. First, suppose that \(t\notin 2\mathbb{Z}\). Then we have \[a_{l-t+1}=a_{l-t}+1=s(a_{l-t})\in\operatorname{rem}(\mathbf{a})\] by Lemma 4.2.3. Next, suppose that \(t\in 2\mathbb{Z}\). By Proposition 4.2.2, we have \(a_{l-t-1}=a_{l-t}-1\) and \[a_{l-t}<2(l-t)-|\operatorname{rem}(a_{1},\dots,a_{l-t-2})|-1.\] Then, by Definition 2.5.1, we obtain \[|\operatorname{rem}(a_{1},\dots,a_{l-t})|=|\operatorname{rem}(a_{1},\dots,a_{ l-t-2})|+2. \tag{4.5}\] On the other hand, since \(t\in[1,r]\cap 2\mathbb{Z}\), we see that \(l-t+1<l\). 
Hence, we have \(a_{l-t+2}=a_{l-t+1}+1\) and \[a_{l-t+1}=a_{l-t}+1<2(l-t)-|\operatorname{rem}(a_{1},\dots,a_{l-t-2})|=2(l-t+1 )-|\operatorname{rem}(a_{1},\dots,a_{l-t})|,\] by equation (4.5). Then, Proposition 4.2.2 implies \[a_{l-t+1}\in\operatorname{rem}(\mathbf{a}),\] as desired. So far, we have proved the first assertion. The second assertion follows from the following equality, which is obtained from Definition 2.5.1 under our hypothesis that \(r\notin 2\mathbb{Z}\): \[\operatorname{rem}(a_{1},\dots,a_{l-r+2k+1})=\operatorname{rem}(a_{1},\dots,a _{l-r+2k-1})\sqcup\{a_{l-r+2k},a_{l-r+2k+1}\}\] for all \(k\in[0,(r-1)/2]\). Thus, we complete the proof. **Lemma 4.2.5**.: _Suppose that \(a_{l}\in 2\mathbb{Z}\) and that there exists \(r\in[1,l-1]\) such that \(a_{l-r}=a_{l}-r\notin 2\mathbb{Z}\). Then, we have_ \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{l-r-1})\sqcup [a_{l}-t,a_{l}],\] _where \(t\in[0,r]\) is the maximal odd integer such that_ \[a_{l}-t<2(l-t)-|\operatorname{rem}(a_{1},\dots,a_{l-r-1})|;\] _when such \(t\) does not exist, we set \(t:=-1\)._ Proof.: Let \(t^{\prime}\in[0,r]\) denote the maximal odd integer such that \(a_{l-t^{\prime}}\in\operatorname{rem}(\mathbf{a})\); when such \(t^{\prime}\) does not exist, we set \(t^{\prime}:=-1\). Then, by Lemma 4.2.4, we have \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{l-t^{\prime} -1})\sqcup[a_{l}-t^{\prime},a_{l}].\] Moreover, since \(a_{l-r},a_{l-r+1},\dots,a_{l-t^{\prime}-1}\notin\operatorname{rem}(\mathbf{a})\) by the definition of \(t^{\prime}\), Lemma 4.2.1 implies that \[\operatorname{rem}(a_{1},\dots,a_{l-t^{\prime}-1})=\operatorname{rem}(a_{1}, \dots,a_{l-r-1}). \tag{4.6}\] Hence, in order to complete the proof, we only need to show that \(t^{\prime}=t\). First, suppose that \(t^{\prime}\neq-1\). Then, by Proposition 4.2.2 and equation (4.6), we have \[a_{l}-t^{\prime}=a_{l-t^{\prime}}<2(l-t^{\prime})-|\operatorname{rem}(a_{1}, \ldots,a_{l-t^{\prime}-1})|=2(l-t^{\prime})-|\operatorname{rem}(a_{1},\ldots,a_ {l-r-1})|.\] This implies that \(t^{\prime}\leq t\). In particular, \(t\neq-1\). Then, the definition of \(t\) and equation (4.6) imply that \[a_{l-t}=a_{l}-t<2(l-t)-|\operatorname{rem}(a_{1},\ldots,a_{l-r-1})|=2(l-t)-| \operatorname{rem}(a_{1},\ldots,a_{l-t-1})|.\] Hence, we obtain \(t\leq t^{\prime}\). Therefore, we conclude that \(t^{\prime}=t\), as desired in this case (when \(t^{\prime}\neq-1\)). Next, suppose that \(t^{\prime}=-1\). We only need to show that there exists no odd integer \(t\in[0,r]\) such that \[a_{l}-t<2(l-t)-|\operatorname{rem}(a_{1},\ldots,a_{l-r-1})|.\] Assume contrary. Then, by Proposition 4.2.2, we see that \(a_{l-t}\in\operatorname{rem}(\mathbf{a})\). This contradicts that \(t^{\prime}=-1\). Therefore, we obtain \(t^{\prime}=t\), as desired. **Lemma 4.2.6**.: _Let \(i\in[1,l]\) be such that \(a_{i}\notin\operatorname{rem}(\mathbf{a})\). Then, we have_ \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{i-1})\sqcup ((a_{i+1},\ldots,a_{l})\cap\operatorname{rem}(\mathbf{a})).\] Proof.: By Proposition 2.5.3 (4), the right-hand side is contained in the left-hand side. Hence, we only need to show the opposite containment. Let \(j\in[1,l]\) be such that \(a_{j}\in\operatorname{rem}(\mathbf{a})\). By our assumption, it must hold that \(j\neq i\). First, suppose that \(j<i-1\). In this case, we have \(a_{j}\in\operatorname{rem}(a_{1},\ldots,a_{i-1})\) by Proposition 4.2.2. Next, suppose that \(j=i-1\). 
In this case, we have \(a_{j}\in 2\mathbb{Z}\); otherwise, it holds that \(a_{i}=a_{j+1}=a_{j}+1\) by Proposition 4.2.2 and \(a_{j}+1=s(a_{j})\in\operatorname{rem}(\mathbf{a})\) by Lemma 4.2.3, which contradicts that \(a_{i}\notin\operatorname{rem}(\mathbf{a})\). Then, we obtain \(a_{j}\in\operatorname{rem}(a_{1},\ldots,a_{i-1})\) by Proposition 4.2.2. Finally, suppose that \(j>i\). In this case, it is clear that \[a_{j}\in(a_{i+1},\ldots,a_{l})\cap\operatorname{rem}(\mathbf{a}).\] Thus, we complete the proof. **Proposition 4.2.7**.: _We have_ \[0\leq l-|\operatorname{rem}(\mathbf{a})|\leq\min(l,2n-l).\] _In particular, it holds that_ \[|\operatorname{rem}(\mathbf{a})|\geq 2(l-n).\] Proof.: Set \(r:=|\operatorname{rem}(\mathbf{a})|\). Since \(\operatorname{rem}(\mathbf{a})\) is a subset of \(\{a_{1},\ldots,a_{l}\}\), we have \[0\leq l-r\leq l.\] Hence, we only need to show that \(l-r\leq 2n-l\), or equivalently, \[r\geq 2(l-n).\] We prove this inequality by induction on \(l\). When \(l\leq 1\), the claim is trivial since we have \(\operatorname{rem}(\mathbf{a})=\emptyset\) and \(n\geq 1\). Assume that \(l>1\) and our claim is true for \(0,1,\ldots,l-1\). Set \[\mathbf{a}^{\prime}:=(a_{1},\ldots,a_{l-1})\text{ and }r^{\prime}:=| \operatorname{rem}(\mathbf{a}^{\prime})|.\] Then, we have \(r\geq r^{\prime}\). Note that \(a_{l-1}\leq 2n-1\). Suppose first that \(a_{l-1}<2n-1\). Then, we have \[\mathbf{a}^{\prime}\in\operatorname{SST}_{2(n-1)}(\varpi_{l-1}).\] By our induction hypothesis, we obtain \[r\geq r^{\prime}\geq 2((l-1)-(n-1))=2(l-n),\] as desired. Next, suppose that \(a_{l-1}=2n-1\) and \(a_{l-1}\in\operatorname{rem}(\mathbf{a})\). In this case, Lemma 4.2.3 and Definition 2.5.1 imply that \(a_{l}=2n\in\operatorname{rem}(\mathbf{a})\) and \[\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{l-2})\sqcup \{a_{l-1},a_{l}\},\] respectively. Set \[r^{\prime\prime}:=|\operatorname{rem}(a_{1},\dots,a_{l-2})|.\] Then, by the same argument as in the previous case, we obtain \[r=r^{\prime\prime}+2\geq 2((l-2)-(n-1))+2=2(l-n),\] as desired. Finally, suppose that \(a_{l-1}=2n-1\) and \(a_{l-1}\notin\operatorname{rem}(\mathbf{a})\). In this case, Proposition 4.2.2 implies \[2n-1=a_{l-1}\geq 2(l-1)-r^{\prime\prime}.\] Furthermore, since \(r^{\prime\prime}\in 2\mathbb{Z}\) by Lemma 4.2.3, it follows that \[2n-2\geq 2(l-1)-r^{\prime\prime},\] equivalently, \[r^{\prime\prime}\geq 2(l-n).\] Now, the assertion follows from the fact that \(r\geq r^{\prime\prime}\). **Corollary 4.2.8**.: _Let \(i\in[1,l]\)._ 1. _If_ \(a_{i}\) _is odd, then_ \(|\operatorname{rem}(a_{1},\dots,a_{i-1})|\geq 2i-a_{i}-1\)_._ 2. _If_ \(a_{i}\) _is even, then_ \(|\operatorname{rem}(a_{1},\dots,a_{i})|\geq 2i-a_{i}\)_._ Proof.: Observe that \((a_{1},\dots,a_{i-1})\in\operatorname{SST}_{a_{i}-1}(\varpi_{i-1})\) in the first case, while \((a_{1},\dots,a_{i})\in\operatorname{SST}_{a_{i}}(\varpi_{i})\) in the second case. Then, the assertion follows from Proposition 4.2.7. ### Some properties of the reduction map In this subsection, we fix \(l\in[0,2n]\) and \(\mathbf{a}=(a_{1},\dots,a_{l})\in\operatorname{SST}_{2n}(\varpi_{l})\). **Lemma 4.3.1**.: _Let \(l\in[1,2n]\) be an even integer. Then, we have_ \[\operatorname{red}(1,2,\dots,l)=().\] Proof.: By Corollary 4.2.8 (2), we have \[|\operatorname{rem}(1,2,\dots,l)|\geq l.\] This implies that \[\operatorname{rem}(1,2,\dots,l)=[1,l].\] Therefore, the assertion follows. 
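To fix ideas, here is a small computational sketch of \(\operatorname{rem}\) and \(\operatorname{red}\). It assumes that Definition 2.5.1 (which is not reproduced in this section) takes the recursive form invoked in the proof of Proposition 4.2.2, namely that \(\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{l-2})\sqcup\{a_{l-1},a_{l}\}\) when condition (4.2) holds and \(\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\ldots,a_{l-1})\) otherwise, and that \(\operatorname{red}(\mathbf{a})\) is the subsequence of \(\mathbf{a}\) consisting of the entries not in \(\operatorname{rem}(\mathbf{a})\) (cf. Lemma 4.3.5). The function names below merely mirror this notation; the code is an inferred illustration, not the paper's construction.

```python
def rem(a):
    """Removable entries of a strictly increasing tuple a, following the
    recursion for Definition 2.5.1 that is invoked in the proof of
    Proposition 4.2.2 (an inferred sketch)."""
    a = tuple(a)
    l = len(a)
    if l <= 1:
        return frozenset()
    prev = rem(a[:-2])
    # condition (4.2): last entry even, preceded by its odd partner, and small enough
    if a[-1] % 2 == 0 and a[-2] == a[-1] - 1 and a[-1] < 2 * l - len(prev) - 1:
        return prev | {a[-2], a[-1]}
    return rem(a[:-1])

def red(a):
    """red(a): the subsequence of a obtained by deleting the entries of rem(a)."""
    removable = rem(a)
    return tuple(x for x in a if x not in removable)

# Consistency checks against statements in this section:
assert red((1, 2, 3, 4)) == ()        # Lemma 4.3.1 with l = 4
assert red((2, 3)) == (2, 3)          # (2,3) is symplectic, cf. Proposition 4.3.7 below
assert red((1, 2, 4, 6)) == (4, 6)    # the pair {1,2} is removable; the output is symplectic
```

For instance, \(\operatorname{red}(1,2,4,6)=(4,6)\): the pair \(\{1,2\}\) is removable, and the resulting column is symplectic, in accordance with Proposition 4.3.6 below.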
**Lemma 4.3.2**.: _Let \(i\in[1,l]\) be such that \(a_{j}\notin\operatorname{rem}(\mathbf{a})\) for all \(j\in[i,l]\). Then, we have_ \[\operatorname{red}(\mathbf{a})=\operatorname{red}(a_{1},\dots,a_{i-1})\sqcup(a_{i},\dots,a_{l}).\] Proof.: The assertion follows from Lemma 4.2.1. **Lemma 4.3.3**.: _Suppose that \(a_{l}\in 2\mathbb{Z}\) and that there exists \(r\in[0,l-1]\) such that \(r\notin 2\mathbb{Z}\) and \(a_{l-r}=a_{l}-r\in\operatorname{rem}(\mathbf{a})\). Then, we have_ \[\operatorname{red}(\mathbf{a})=\operatorname{red}(a_{1},\dots,a_{l-r-1}).\] Proof.: The assertion follows from Lemma 4.2.4. **Lemma 4.3.4**.: _Suppose that \(a_{l}\in 2\mathbb{Z}\) and that there exists \(r\in[1,l-1]\) such that \(a_{l-r}=a_{l}-r\notin 2\mathbb{Z}\). Then, we have_ \[\operatorname{red}(\mathbf{a})=\operatorname{red}(a_{1},\dots,a_{l-r-1})\sqcup[a_{l}-r,a_{l}-t-1],\] _where \(t\in[0,r]\) is the maximal odd integer such that_ \[a_{l}-t<2(l-t)-|\operatorname{rem}(a_{1},\dots,a_{l-r-1})|;\] _when such \(t\) does not exist, we set \(t:=-1\)._ Proof.: The assertion follows from Lemma 4.2.5. **Lemma 4.3.5**.: _Let us write \(\operatorname{red}(\mathbf{a})=(a_{i_{1}},a_{i_{2}},\dots,a_{i_{k}})\) for some \(k\in[0,l]\) and \(1\leq i_{1}<i_{2}<\dots<i_{k}\leq l\). Then, for each \(t\in[1,k]\), we have_ \[i_{t}=t+|\operatorname{rem}(a_{1},a_{2},\dots,a_{i_{t}-1})|.\] Proof.: Clearly, we have \[i_{t}=t+|(a_{1},\dots,a_{i_{t}})\cap\operatorname{rem}(\mathbf{a})|.\] Since \(a_{i_{t}}\notin\operatorname{rem}(\mathbf{a})\), Lemma 4.2.6 implies \[(a_{1},\dots,a_{i_{t}})\cap\operatorname{rem}(\mathbf{a})=\operatorname{rem}(a_{1},\dots,a_{i_{t}-1}).\] Hence, the assertion follows. **Proposition 4.3.6**.: _The tableau \(\operatorname{red}(\mathbf{a})\) is symplectic. Consequently, the assignment \(\mathbf{a}\mapsto\operatorname{red}(\mathbf{a})\) gives rise to a map_ \[\operatorname{red}:\operatorname{SST}_{2n}(\varpi_{l})\to\bigsqcup_{\begin{subarray}{c}0\leq k\leq\min(l,2n-l)\\ l-k\in 2\mathbb{Z}\end{subarray}}Sp\mathrm{T}_{2n}(\varpi_{k})\] _for each \(l\in[0,2n]\)._ Proof.: Let us write \(\operatorname{red}(\mathbf{a})=(a_{i_{1}},\dots,a_{i_{k}})\) for some \(k\leq l\) and \(1\leq i_{1}<\dots<i_{k}\leq l\). Assume to the contrary that \(\operatorname{red}(\mathbf{a})\) is not symplectic. Then, by Lemma 2.4.3, there exists \(r\in[2,k]\) such that \[a_{i_{r}}=2r-2,\ a_{i_{r-1}}=2r-3,\] and \[a_{i_{s}}\geq 2s-1\ \ \text{for all $s\in[1,r-2]$}.\] In particular, we have \[i_{r-1}=i_{r}-1.\] By Lemma 4.3.5, it holds that \[i_{r-1}=(r-1)+|\operatorname{rem}(a_{1},\dots,a_{i_{r}-2})|.\] Therefore, we have \[2i_{r-1}-|\operatorname{rem}(a_{1},\dots,a_{i_{r}-2})|=2(r-1)+|\operatorname{rem}(a_{1},\dots,a_{i_{r}-2})|\geq 2r-2.\] So far, we have shown that \(a_{i_{r}}\) is even, \(a_{i_{r}-1}=a_{i_{r}}-1\), and \[a_{i_{r}}=2r-2<2i_{r}-|\operatorname{rem}(a_{1},\dots,a_{i_{r}-2})|-1.\] Then, Proposition 4.2.2 implies that \(a_{i_{r}}\in\operatorname{rem}(\mathbf{a})\). However, this contradicts that \(a_{i_{r}}\) is an entry of \(\operatorname{red}(\mathbf{a})\). Thus, we complete the proof. **Proposition 4.3.7**.: _We have \(\operatorname{red}(\mathbf{a})=\mathbf{a}\) if and only if \(\mathbf{a}\) is symplectic._ Proof.: First, suppose that \(\operatorname{red}(\mathbf{a})=\mathbf{a}\). Then, Proposition 4.3.6 implies that the tableau \(\mathbf{a}\) is symplectic. Next, suppose that \(\mathbf{a}\) is symplectic. Assume to the contrary that \(\operatorname{red}(\mathbf{a})\neq\mathbf{a}\).
Then, by Proposition 4.2.2, there exists \(i\in[2,l]\) such that \(a_{i}<2i-1\). This contradicts that \(\mathbf{a}\) is symplectic. Thus, the assertion follows. **Corollary 4.3.8**.: _Let \(\nu\in\operatorname{Par}_{\leq n}\) and \(T\in\operatorname{SST}_{2n}(\nu)\). Then, we have \(\operatorname{suc}(T)=T\) if and only if \(T\) is symplectic._ Proof.: Let us write \(d(T)=(\mathbf{a},T^{\prime})\). By Lemma 2.6.4 and its proof, we have \(\operatorname{suc}(T)=T\) if and only if \(\operatorname{red}(\mathbf{a})=\mathbf{a}\). The latter is equivalent to that \(\mathbf{a}\) is symplectic, which is then equivalent to that \(T\) is symplectic. Thus, the assertion follows. ### Factorization of the reduction map Recall the combinatorial \(R\)-matrices, the map \(s\), and the map \({}^{\vee}\) from Definition 4.1.1, (4.4), and (4.1), respectively. **Proposition 4.4.1**.: _Let \(l\in[2,2n]\), \(\mathbf{a}=(a_{1},\ldots,a_{l})\in\operatorname{SST}_{2n}(\varpi_{l})\). Let us write_ \[R(s(a_{l})^{\vee},(a_{1},\ldots,a_{l-1}))=(\mathbf{c},\mathbf{d})\] _for some \((\mathbf{c},\mathbf{d})\in\operatorname{SST}_{2n}(\varpi_{l-1})\times \operatorname{SST}_{2n}(\varpi_{2n-1})\), and_ \[R(\operatorname{red}(\mathbf{c}),\mathbf{d})=(s(b_{k})^{\vee},\mathbf{b}^{ \prime})\] _for some \(b_{k}\in\mathbb{Z}\) and \(\mathbf{b}^{\prime}=(b_{1},\ldots,b_{k-1})\in\operatorname{SST}_{2n}(\varpi_{ k-1})\), where \(k:=|\operatorname{red}(\mathbf{c})|+1\). Then, we have \(b_{k}>b_{k-1}\) and_ \[\mathbf{b}:=(b_{1},\ldots,b_{k})=\begin{cases}(1,2)&\text{ if }a_{l}=l\in 2 \mathbb{Z},\\ \operatorname{red}(\mathbf{a})&\text{ otherwise}.\end{cases}\] Proof.: Set \(\mathbf{a}^{\prime}:=(a_{1},\ldots,a_{l-1})\). First, suppose that \(a_{l}=l\in 2\mathbb{Z}\). Then, we have \[(s(a_{l})^{\vee},(1,\ldots,l-1)) =((l-1)^{\vee},\mathbf{a}^{\prime})\] \[\overset{R}{\mapsto}((1,\ldots,l-2,2n),(2n)^{\vee})=(\mathbf{c},\mathbf{d})\] \[\overset{(\operatorname{red},\operatorname{id})}{\mapsto}((2n),( 2n)^{\vee})=(\operatorname{red}(\mathbf{c}),\mathbf{d})\] \[\overset{R}{\mapsto}(1^{\vee},(1))=(s(b_{k})^{\vee},\mathbf{b}^{ \prime}).\] Here, we used Lemmas 4.1.4, 4.3.2, 4.3.1, and 4.1.3. As a result, we obtain \(\mathbf{b}=(1,2)\), as desired. Next, suppose that \(a_{l}\notin 2\mathbb{Z}\). Then, we have \[(s(a_{l})^{\vee},\mathbf{a}^{\prime}) =((a_{l}+1)^{\vee},\mathbf{a}^{\prime})\] \[\overset{R}{\mapsto}(\mathbf{a}^{\prime},(a_{l}+1)^{\vee})=( \mathbf{c},\mathbf{d})\] \[\overset{(\operatorname{red},\operatorname{id})}{\mapsto}( \operatorname{red}(\mathbf{a}^{\prime}),(a_{l}+1)^{\vee})=(\operatorname{red}( \mathbf{c}),\mathbf{d})\] \[\overset{R}{\mapsto}((a_{l}+1)^{\vee},\operatorname{red}( \mathbf{a}^{\prime}))=(s(b_{k})^{\vee},\mathbf{b}^{\prime}).\] Here, we used Lemma 4.1.5. This implies that \(\mathbf{b}^{\prime}=\operatorname{red}(\mathbf{a}^{\prime})\) and \(b_{k}=s(a_{l}+1)=a_{l}\). Hence, we obtain \[b_{k}=a_{l}>a_{l-1}\geq b_{k-1}\] since \(\mathbf{b}^{\prime}=\operatorname{red}(\mathbf{a}^{\prime})\) is a subsequence of \(\mathbf{a}^{\prime}=(a_{1},\dots,a_{l-1})\). On the other hand, since \(a_{l}+1\notin\operatorname{rem}(\mathbf{a})\), we have \(a_{l}=s(a_{l}+1)\notin\operatorname{rem}(\mathbf{a})\) by Lemma 4.2.3. Then, Lemma 4.3.2 implies \[\operatorname{red}(\mathbf{a})=\operatorname{red}(\mathbf{a}^{\prime})\sqcup \{a_{l}\}=\mathbf{b},\] as desired. Finally, suppose that \(a_{l}\in 2\mathbb{Z}\) and \(a_{l}\neq l\). 
Set \[r:=\max\{i\in[0,l-1]\mid a_{l-i}=a_{l}-i\},\quad\mathbf{a}^{\prime\prime}:=(a_ {1},\dots,a_{l-r-1}).\] Note that \(r<l-1\). Then, we have \(\mathbf{a}=\mathbf{a}^{\prime\prime}\sqcup[a_{l}-r,a_{l}]\), \(\mathbf{a}^{\prime}=\mathbf{a}^{\prime\prime}\sqcup[a_{l}-r,a_{l}-1]\), \(s(a_{l})=a_{l}-1\), and \[(\mathbf{c},\mathbf{d})=R((a_{l}-1)^{\vee},\mathbf{a}^{\prime})=((\mathbf{a} ^{\prime\prime}\sqcup[a_{l}-r-1,a_{l}-2]),(a_{l}-r-1)^{\vee}).\] The last equality follows from Lemma 4.1.6. 1. When \(r\in 2\mathbb{Z}\). Since \(a_{l}-r-1\notin 2\mathbb{Z}\) and \(a_{l}-2\in 2\mathbb{Z}\), we can apply Lemma 4.3.4 to obtain \[\operatorname{red}(\mathbf{c})=\operatorname{red}(\mathbf{a}^{\prime\prime}) \sqcup[a_{l}-r-1,a_{l}-t-3],\] where \(t\) is the maximal odd integer in \([0,r-1]\) such that \[(a_{l}-2)-t<2((l-1)-t)-|\operatorname{rem}(\mathbf{a}^{\prime\prime})|;\] when such \(t\) does not exist, we set \(t:=-1\). Then, by Lemma 4.1.6 again, we have \[R(\operatorname{red}(\mathbf{c}),\mathbf{d})=((a_{l}-t-2)^{\vee}, \operatorname{red}(\mathbf{a}^{\prime\prime})\sqcup[a_{l}-r,a_{l}-t-2]).\] This implies that \(\mathbf{b}^{\prime}=\operatorname{red}(\mathbf{a}^{\prime\prime})\sqcup[a_{l }-r,a_{l}-t-2]\), \(b_{k}=s(a_{l}-t-2)=a_{l}-t-1\), and \[\mathbf{b}=\operatorname{red}(\mathbf{a}^{\prime\prime})\sqcup[a_{l}-r,a_{l}-t -1].\] On the other hand, we have \[\mathbf{a}=\mathbf{a}^{\prime\prime}\sqcup[a_{l}-r,a_{l}]=(\mathbf{a}^{\prime \prime}\sqcup(a_{l}-r))\sqcup[a_{l}-r+1,a_{l}].\] By Lemma 4.3.4, we have \[\operatorname{red}(\mathbf{a})=\operatorname{red}(\mathbf{a}^{\prime\prime} \sqcup(a_{l}-r))\sqcup[a_{l}-r+1,a_{l}-t^{\prime}-1],\] where \(t^{\prime}\) is the maximal odd integer in \([0,r-1]\) such that \[a_{l}-t^{\prime}<2(l-t^{\prime})-|\operatorname{rem}(\mathbf{a}^{\prime\prime} \sqcup(a_{l}-r))|;\] when such \(t^{\prime}\) does not exist, we set \(t^{\prime}:=-1\). Since \(s(a_{l}-r)=a_{l}-r-1\notin\mathbf{a}^{\prime\prime}\), we have \[\operatorname{rem}(\mathbf{a}^{\prime\prime}\sqcup(a_{l}-r))=\operatorname{ rem}(\mathbf{a}^{\prime\prime})\text{ and }\operatorname{red}(\mathbf{a}^{\prime\prime}\sqcup(a_{l}-r))= \operatorname{red}(\mathbf{a}^{\prime\prime})\sqcup(a_{l}-r).\] Therefore, we obtain \[\operatorname{red}(\mathbf{a})=\operatorname{red}(\mathbf{a}^{\prime\prime} )\sqcup[a_{l}-r,a_{l}-t^{\prime}-1].\] Now, one can straightforwardly verify that \(t^{\prime}=t\). Hence, we conclude that \[\operatorname{red}(\mathbf{a})=\operatorname{red}(\mathbf{a}^{\prime\prime}) \sqcup[a_{l}-r,a_{l}-t-1]=\mathbf{b},\] as desired. 2. When \(r\notin 2\mathbb{Z}\) and \(a_{l}-r-1\notin\operatorname{rem}(\mathbf{c})\). This case can be proved as in the same way as the previous case. 3. When \(r\notin 2\mathbb{Z}\) and \(a_{l}-r-1\in\operatorname{rem}(\mathbf{c})\). Since \(a_{l}-r-2=s(a_{l}-r-1)\in\operatorname{rem}(\mathbf{c})\), we see that \[a_{l-r-1}=a_{l}-r-2.\] Set \[\mathbf{a}^{\prime\prime\prime}:=(a_{1},\dots,a_{l-r-2}).\] Then, we have \[\mathbf{c}=\mathbf{a}^{\prime\prime\prime}\sqcup[a_{l}-r-2,a_{l}-2]\] with \(a_{l}-r-2\in\operatorname{rem}(\mathbf{c})\). 
By Lemma 4.3.3, we obtain \[\operatorname{red}(\mathbf{c})=\operatorname{red}(\mathbf{a}^{\prime\prime\prime}).\] Then, Lemma 4.1.5 implies \[R(\operatorname{red}(\mathbf{c}),\mathbf{d})=(\mathbf{d},\operatorname{red}(\mathbf{c})),\] and consequently, \[\mathbf{b}=\operatorname{red}(\mathbf{a}^{\prime\prime\prime})\sqcup(a_{l}-r-2).\] On the other hand, we have \[\mathbf{a}=(\mathbf{a}^{\prime\prime\prime}\sqcup(a_{l}-r-2))\sqcup[a_{l}-r,a_{l}].\] Since \(a_{l}-r-2\notin 2\mathbb{Z}\), Proposition 2.5.3 (2) implies that \[\operatorname{rem}(\mathbf{a}^{\prime\prime\prime}\sqcup(a_{l}-r-2))=\operatorname{rem}(\mathbf{a}^{\prime\prime\prime}).\] On the other hand, the fact that \(a_{l}-r-2\in\operatorname{rem}(\mathbf{c})\), together with Proposition 4.2.2, implies \[a_{l}-r-2<2(l-r-1)-|\operatorname{rem}(\mathbf{a}^{\prime\prime\prime})|.\] These two facts and Proposition 4.2.2 ensure that \[a_{l}-r\in\operatorname{rem}(\mathbf{a}).\] Then, Lemma 4.3.3 implies \[\operatorname{red}(\mathbf{a})=\operatorname{red}(\mathbf{a}^{\prime\prime\prime})\sqcup(a_{l}-r-2)=\mathbf{b},\] as desired. In order to state the main result in this section, let us introduce four maps: * \(\bigvee:\operatorname{SST}_{2n}(\varpi_{l})\to\operatorname{SST}_{2n}(\varpi_{1})\times\operatorname{SST}_{2n}(\varpi_{l-1});\ (a_{1},\dots,a_{l})\mapsto((a_{l}),(a_{1},\dots,a_{l-1}))\). * \(K:\operatorname{SST}_{2n}(\varpi_{1})\to\operatorname{SST}_{2n}(\varpi_{2n-1});\ (a)\mapsto s(a)^{\vee}\). * \[\bigwedge:\operatorname{SST}_{2n}(\varpi_{1})\times\operatorname{SST}_{2n}(\varpi_{k})\to\operatorname{SST}_{2n}(\varpi_{k+1})\sqcup\{0\};\] \[((a),(a_{1},\dots,a_{k}))\mapsto\begin{cases}(a_{1},\dots,a_{k},a)&\text{ if }a>a_{k},\\ 0&\text{ if }a\leq a_{k},\end{cases}\] where \(0\) is a formal symbol. * \(\pi:\operatorname{SST}_{2n}(\varpi_{k^{\prime}})\to\bigsqcup_{k^{\prime\prime}=0}^{2n}\operatorname{SST}_{2n}(\varpi_{k^{\prime\prime}});\ \mathbf{a}\mapsto\begin{cases}()&\text{ if }\mathbf{a}=(1,2),\\ \mathbf{a}&\text{ if }\mathbf{a}\neq(1,2).\end{cases}\) **Corollary 4.4.2**.: _Let \(l\in[0,2n]\)._ 1. _If_ \(l\geq 2\)_, then the composite_ \[\pi\circ\bigwedge\circ(K^{-1},\operatorname{id})\circ R\circ(\operatorname{red},\operatorname{id})\circ R\circ(K,\operatorname{id})\circ\bigvee\] _is well defined on_ \(\operatorname{SST}_{2n}(\varpi_{l})\) _and coincides with the reduction map._ 2. _The reduction map on_ \(\operatorname{SST}_{2n}(\varpi_{l})\) _is injective._ Proof.: The first assertion follows from Proposition 4.4.1 and Lemma 4.3.1. The second assertion for \(l\leq 1\) is trivial, and for \(l\geq 2\) follows from the first one since each factor in the composite is injective (on a suitable domain). **Corollary 4.4.3**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\). Then, the successor map is injective on \(\operatorname{SST}_{2n}(\lambda)\)._ Proof.: The assertion follows from the injectivity of the map \(d\) (see (2.11) for the definition), the injectivity of the reduction map (Corollary 4.4.2 (2)), and Pieri's formula (Proposition 2.3.4). ## 5. Preliminaries from Lie algebras In this section, we briefly review representation theory of \(\mathfrak{gl}_{2n}(\mathbb{C})\) and \(\mathfrak{sp}_{2n}(\mathbb{C})\). ### General linear algebras Let \(\mathfrak{g}\) denote the general linear algebra \(\mathfrak{gl}_{2n}(\mathbb{C})\), that is, the complex Lie algebra consisting of all \(2n\times 2n\) complex matrices.
For each \(i,j\in[1,2n]\), let \(E_{i,j}\) denote the matrix unit with entry \(1\) at \((i,j)\) position. For each \(i\in[1,2n]\), set \[d_{i}:=E_{i,i}.\] Also, for each \(i\in[1,2n-1]\), set \[h_{i}:=d_{i}-d_{i+1}.\] Let \(M\) be a finite-dimensional \(\mathfrak{g}\)-module. Then, it decomposes into its weight spaces: \[M=\bigoplus_{\mathbf{w}=(w_{1},\dots,w_{2n})\in\mathbb{Z}^{2n}}M_{\mathbf{w}},\] where \[M_{\mathbf{w}}:=\{m\in M\mid d_{i}m=w_{i}m\ \ \text{for all $i\in[1,2n]$}\}.\] The character of \(M\) is the Laurent polynomial given by \[\operatorname{ch}_{\mathfrak{g}}M=\sum_{\mathbf{w}\in\mathbb{Z}^{2n}}(\dim M _{\mathbf{w}})x_{1}^{w_{1}}\cdots x_{2n}^{w_{2n}}\in\mathbb{Z}[x_{1}^{\pm 1}, \dots,x_{2n}^{\pm 1}].\] For each \(\lambda\in\operatorname{Par}_{\leq 2n}\), there exists a unique, up to isomorphism, finite-dimensional irreducible \(\mathfrak{g}\)-module \(V^{\mathfrak{g}}(\lambda)\) such that \[\operatorname{ch}_{\mathfrak{g}}V^{\mathfrak{g}}(\lambda)=s_{\lambda}(x_{1}, \dots,x_{2n}),\] where the right-hand side denotes the Schur function (see (2.6) for the definition). ### Symplectic Lie algebras Consider the \(2n\)-dimensional complex vector space \(\mathbb{C}^{2n}\) with a standard basis \(\{e_{1},\dots,e_{2n}\}\). Let \(\langle,\rangle\) denote the skew-symmetric bilinear form on \(\mathbb{C}^{2n}\) given by \[\langle e_{i},e_{j}\rangle=\begin{cases}1&\text{if $j-i=n$},\\ -1&\text{if $i-j=n$},\\ 0&\text{otherwise}.\end{cases}\] Let \(\mathfrak{s}\) denote the symplectic Lie algebra \(\mathfrak{sp}_{2n}(\mathbb{C})\), that is, the subalgebra of \(\mathfrak{g}\) consisting of \(X\in\mathfrak{g}\) such that \[{}^{t}XJ_{n}+J_{n}X=O_{2n},\] where \(O_{2n}\) denote the zero matrix of size \(2n\), \[J_{n}:=\begin{pmatrix}O_{n}&I_{n}\\ -I_{n}&O_{n}\end{pmatrix}\] the matrix representation of the skew-symmetric bilinear form \(\langle,\rangle\) with respect to the standard basis with \(I_{n}\) the identity matrix of size \(n\). For each \(i\in[1,n]\), set \[h_{i}^{\prime}:=\begin{cases}h_{i}-h_{n+i}&\text{ if }i<n,\\ d_{n}-d_{2n}&\text{ if }i=n.\end{cases}\] Then, we have \(h_{i}^{\prime}\in\mathfrak{s}\). Let \(M\) be a finite-dimensional \(\mathfrak{s}\)-module. Then, it decomposes into its weight spaces: \[M=\bigoplus_{\mathbf{z}=(z_{1},\dots,z_{n})\in\mathbb{Z}^{n}}M_{\mathbf{z}},\] where \[M_{\mathbf{z}}:=\{m\in M\mid h_{i}^{\prime}m=(z_{i}-z_{i+1})m\ \text{ for all }i\in[1,n-1]\text{ and }h_{n}^{\prime}m=z_{n}m\}.\] The character of \(M\) is the Laurent polynomial given by \[\operatorname{ch}_{\mathfrak{s}}M=\sum_{\mathbf{z}\in\mathbb{Z}^{n}}(\dim M_ {\mathbf{z}})y_{1}^{z_{1}}\cdots y_{n}^{z_{n}}\in\mathbb{Z}[y_{1}^{\pm 1}, \dots,y_{n}^{\pm 1}].\] For each \(\nu\in\operatorname{Par}_{\leq n}\), there exists a unique finite-dimensional irreducible \(\mathfrak{s}\)-module \(V^{\mathfrak{s}}(\nu)\) such that \[\operatorname{ch}_{\mathfrak{s}}V^{\mathfrak{s}}(\nu)=s_{\nu}^{Sp}(y_{1}, \dots,y_{n}),\] where the right-hand side denotes the symplectic Schur function (see (2.9) for the definition). Let \(M=\bigoplus_{\mathbf{w}\in\mathbb{Z}^{2n}}M_{\mathbf{w}}\) be a finite-dimensional \(\mathfrak{g}\)-module. 
For each \(m\in M_{\mathbf{w}}\), we have \[h_{i}^{\prime}m=\begin{cases}(w_{i}-w_{i+1}-w_{n+i}+w_{n+i+1})m&\text{ if }i<n,\\ (w_{n}-w_{2n})m&\text{ if }i=n,\end{cases}\ \text{ for all }i\in[1,n].\] Therefore, as an \(\mathfrak{s}\)-module, the \(M\) decomposes as \[M=\bigoplus_{\mathbf{z}\in\mathbb{Z}^{n}}M_{\mathbf{z}},\quad M_{\mathbf{z}}= \bigoplus_{\mathbf{w}\in(\operatorname{res}^{\mathfrak{s}})^{-1}(\mathbf{z}) }M_{\mathbf{w}},\] where \[\operatorname{res}^{\mathfrak{s}}:\mathbb{Z}^{2n}\to\mathbb{Z}^{n};\ (w_{1}, \dots,w_{2n})\mapsto(w_{1}-w_{n+1},w_{2}-w_{n+2},\dots,w_{n}-w_{2n}).\] This observation implies that \[\operatorname{ch}_{\mathfrak{s}}M=\operatorname{res}^{\mathfrak{s}}( \operatorname{ch}_{\mathfrak{g}}M),\] where \[\operatorname{res}^{\mathfrak{s}}:\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{2n}^{\pm 1}] \to\mathbb{Z}[y_{1}^{\pm 1},\dots,y_{n}^{\pm 1}]\] denotes the ring homomorphism such that \[\operatorname{res}^{\mathfrak{s}}(x_{i})=\begin{cases}y_{i}&\text{ if }i\leq n,\\ y_{i-n}^{-1}&\text{ if }i>n.\end{cases}\] In particular, for each \(\lambda\in\operatorname{Par}_{\leq 2n}\), we obtain \[\operatorname{ch}_{\mathfrak{s}}V^{\mathfrak{g}}(\lambda)=s_{\lambda}(y_{1}, \dots,y_{n},y_{1}^{-1},\dots,y_{n}^{-1}). \tag{5.1}\] On the other hand, as an \(\mathfrak{s}\)-module, \(V^{\mathfrak{g}}(\lambda)\) decomposes into a direct sum of several copies of \(V^{\mathfrak{s}}(\nu)\) for various \(\nu\in\operatorname{Par}_{\leq n}\): \[V^{\mathfrak{g}}(\lambda)\simeq\bigoplus_{\nu\in\operatorname{Par}_{\leq n}}V^ {\mathfrak{s}}(\nu)^{m_{\lambda,\nu}}\ \text{ for some }m_{\lambda,\nu}\geq 0. \tag{5.2}\] An explicit descriptions of the multiplicities \(m_{\lambda,\nu}\) is called a _branching rule_. ### A Non-standard realization of the symplectic algebra Let \(\mathfrak{k}\) denote the Lie subalgebra of \(\mathfrak{g}\) generated by \[\{E_{2i-1,2i},\ E_{2i,2i-1}\mid i\in[1,n]\}\sqcup\{E_{2i+1,2i}+E_{2i-1,2i+2} \mid i\in[1,n-1]\}.\] Then, there exists an isomorphism \[f:\mathfrak{s}\to\mathfrak{k}\] of Lie algebras such that \[f(h^{\prime}_{i})=\begin{cases}(-1)^{i-1}(h_{2i-1}+h_{2i+1})&\text{ if }i<n,\\ (-1)^{n-1}h_{2n-1}&\text{ if }i=n,\end{cases}\] (_cf._[21, Section 4.3]). The notions of weight \(\mathfrak{k}\)-modules and their characters are defined in the same way as those for \(\mathfrak{s}\) by replacing \(h^{\prime}_{i}\) with \(f(h^{\prime}_{i})\). For each \(\nu\in\operatorname{Par}_{\leq n}\), the irreducible \(\mathfrak{s}\)-module \(V^{\mathfrak{s}}(\nu)\) is equipped with a \(\mathfrak{k}\)-module structure via the isomorphism \(f\). Let \(V^{\mathfrak{t}}(\nu)\) denote this \(\mathfrak{k}\)-module. Clearly, we have \[\operatorname{ch}_{\mathfrak{k}}V^{\mathfrak{t}}(\nu)=\operatorname{ch}_{ \mathfrak{s}}V^{\mathfrak{s}}(\nu)=s_{\nu}^{Sp}(y_{1},\dots,y_{n}).\] Let \(M\) be a \(\mathfrak{g}\)-module, \(\mathbf{w}=(w_{1},\dots,w_{2n})\in\mathbb{Z}^{2n}\), and \(m\in M_{\mathbf{w}}\). 
Then, for each \(i\in[1,n]\), we have \[f(h^{\prime}_{i})m=\begin{cases}(-1)^{i-1}(w_{2i-1}-w_{2i}+w_{2i+1}-w_{2i+2})m&\text{ if }i<n,\\ (-1)^{n-1}(w_{2n-1}-w_{2n})m&\text{ if }i=n.\end{cases}\] Therefore, defining a map \[\operatorname{res}^{\mathfrak{k}}:\mathbb{Z}^{2n}\to\mathbb{Z}^{n}\] by \[\operatorname{res}^{\mathfrak{k}}(w_{1},\dots,w_{2n})=(w_{1}-w_{2},-w_{3}+w_{4},w_{5}-w_{6},\dots,(-1)^{n-1}(w_{2n-1}-w_{2n})),\] we obtain the following weight space decomposition of \(M\) as a \(\mathfrak{k}\)-module: \[M=\bigoplus_{\mathbf{z}\in\mathbb{Z}^{n}}M_{\mathbf{z}},\quad M_{\mathbf{z}}=\bigoplus_{\mathbf{w}\in(\operatorname{res}^{\mathfrak{k}})^{-1}(\mathbf{z})}M_{\mathbf{w}}.\] Consequently, we have \[\operatorname{ch}_{\mathfrak{k}}M=\operatorname{res}^{\mathfrak{k}}(\operatorname{ch}_{\mathfrak{g}}M), \tag{5.3}\] where \[\operatorname{res}^{\mathfrak{k}}:\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{2n}^{\pm 1}]\to\mathbb{Z}[y_{1}^{\pm 1},\dots,y_{n}^{\pm 1}]\] denotes the ring homomorphism given by \[\operatorname{res}^{\mathfrak{k}}(x_{2i-1})=\begin{cases}y_{i}&\text{ if }i\text{ is odd},\\ y_{i}^{-1}&\text{ if }i\text{ is even},\end{cases}\quad\operatorname{res}^{\mathfrak{k}}(x_{2i})=\begin{cases}y_{i}^{-1}&\text{ if }i\text{ is odd},\\ y_{i}&\text{ if }i\text{ is even},\end{cases}\quad\text{ for all }i\in[1,n].\] **Proposition 5.3.1**.: _We have_ \[\operatorname{ch}_{\mathfrak{k}}V^{\mathfrak{g}}(\lambda)=\sum_{\nu\in\operatorname{Par}_{\leq n}}m_{\lambda,\nu}s_{\nu}^{Sp}(y_{1},\dots,y_{n}).\] Proof.: Using the fact that the Schur functions are symmetric, we compute as follows: \[\operatorname{ch}_{\mathfrak{k}}V^{\mathfrak{g}}(\lambda)\stackrel{{(5.3)}}{{=}}\operatorname{res}^{\mathfrak{k}}(\operatorname{ch}_{\mathfrak{g}}V^{\mathfrak{g}}(\lambda))\] \[=s_{\lambda}(y_{1},y_{1}^{-1},y_{2}^{-1},y_{2},\dots,y_{n}^{(-1)^{n-1}},y_{n}^{(-1)^{n}})\] \[=s_{\lambda}(y_{1},\dots,y_{n},y_{1}^{-1},\dots,y_{n}^{-1})\] \[\stackrel{{(5.1)}}{{=}}\operatorname{ch}_{\mathfrak{s}}V^{\mathfrak{g}}(\lambda)\] \[\stackrel{{(5.2)}}{{=}}\sum_{\nu\in\operatorname{Par}_{\leq n}}m_{\lambda,\nu}s_{\nu}^{Sp}(y_{1},\dots,y_{n}).\] Thus, the assertion follows. ## 6. Preliminaries from quantum symmetric pairs In this section, we briefly review representation theory of the quantum group \(\mathbf{U}\) of \(\mathfrak{gl}_{2n}(\mathbb{C})\) and an \(\imath\)quantum group \(\mathbf{U}^{\imath}\) of \(\mathfrak{sp}_{2n}(\mathbb{C})\).
### Quantum group of type \(A\) Let \(\mathbf{U}\) denote the quantum group \(U_{q}(\mathfrak{gl}_{2n})\), that is, the unital associative algebra over \(\mathbb{Q}(q)\) with generators \[E_{i},F_{i},D_{k}^{\pm 1}\ \ \text{for $i\in[1,2n-1]$ and $k\in[1,2n]$}\] subject to the following relations: \[D_{k}D_{k}^{-1}=D_{k}^{-1}D_{k}=1,\] \[D_{k}D_{l}=D_{l}D_{k},\] \[D_{k}E_{i}=q^{\delta_{k,i}-\delta_{k,i+1}}E_{i}D_{k},\quad D_{k} F_{i}=q^{-\delta_{k,i}+\delta_{k,i+1}}F_{i}D_{k},\] \[E_{i}F_{j}-F_{j}E_{i}=\delta_{i,j}\frac{K_{i}-K_{i}^{-1}}{q-q^{- 1}},\] \[E_{i}E_{j}=E_{j}E_{i},\quad F_{i}F_{j}=F_{j}F_{i},\] \[E_{i}^{2}E_{j}-(q+q^{-1})E_{i}E_{j}E_{i}+E_{j}E_{i}^{2}=0,\quad F _{i}^{2}F_{j}-(q+q^{-1})F_{i}F_{j}F_{i}+F_{j}F_{i}^{2}=0,\] where \[K_{i}:=D_{i}D_{i+1}^{-1}.\] A \(\mathbf{U}\)-module \(M\) is said to be a _weight module_ if it admits a decomposition \[M=\bigoplus_{\mathbf{w}\in\mathbb{Z}^{2n}}M_{\mathbf{w}}\] as a vector space such that \[M_{\mathbf{w}}=\{m\in M\mid D_{i}m=q^{w_{i}}m\ \ \text{for all $i\in[1,2n]$}\}.\] The character of a finite-dimensional weight \(\mathbf{U}\)-module \(M\) is the Laurent polynomial \(\operatorname{ch}M\in\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{2n}^{\pm 1}]\) given by \[\operatorname{ch}M=\sum_{\mathbf{w}\in\mathbb{Z}^{2n}}(\dim M_{\mathbf{w}}) \mathbf{x}^{\mathbf{w}}.\] By [11, 19.1.1], there exists a unique anti-algebra automorphism \(\rho\) on \(\mathbf{U}\) such that \[\rho(E_{i})=qK_{i}F_{i},\quad\rho(F_{i})=qK_{i}^{-1}E_{i},\quad\rho(K_{i})=K_{i} \ \ \text{for all}\ i\in I.\] A symmetric bilinear form \((,)\) on a \(\mathbf{U}\)-module \(M\) is said to be contragredient if \[(xm_{1},m_{2})=(m_{1},\rho(x)m_{2})\ \ \text{for all}\ x\in\mathbf{U},\ m_{1},m_{2} \in M.\] A basis \(B\) of such \(M\) is said to be _almost orthonormal_ if \[(b_{1},b_{2})\in\delta_{b_{1},b_{2}}+q^{-1}\mathbb{Q}[\![q^{-1}]\!]\ \ \text{for all}\ b_{1},b_{2}\in B.\] For each \(m_{1},m_{2}\in M\), we write \(m_{1}\equiv m_{2}\) to indicate that \[m_{1}-m_{2}\in q^{-1}\mathbb{Q}[\![q^{-1}]\!]B.\] Given two \(\mathbf{U}\)-modules \(M\) and \(N\) with almost orthonormal bases \(B_{M}\) and \(B_{N}\) respectively, we see that the tensor product \(B_{M}\otimes B_{N}:=\{b_{1}\otimes b_{2}\mid(b_{1},b_{2})\in B_{M}\times B_{N}\}\) forms an almost orthonormal basis of \(M\otimes N\) with respect to the natural bilinear form. For each \(\lambda\in\operatorname{Par}_{\leq 2n}\), there exists a unique, up to isomorphism, finite-dimensional irreducible \(\mathbf{U}\)-module \(V(\lambda)\) such that \[\operatorname{ch}V(\lambda)=s_{\lambda}(x_{1},\dots,x_{2n}).\] The weight space \(V(\lambda)_{\lambda}\) is one-dimensional. Fix a nonzero vector \(v_{\lambda}\in V(\lambda)_{\lambda}\). Then, there exists a unique contragredient form \((,)\) on \(V(\lambda)\) such that \((v_{\lambda},v_{\lambda})=1\). There exists a distinguished basis \(\mathbf{B}(\lambda)\), called the _canonical basis_, of \(V(\lambda)\). It is of the form \[\mathbf{B}(\lambda)=\{b_{T}\mid T\in\operatorname{SST}_{2n}(\lambda)\},\] and the vector \(b_{T}\) is a weight vector of weight \(\operatorname{wt}(T)\) (see (2.7) for the definition): \[b_{T}\in V(\lambda)_{\operatorname{wt}(T)}.\] The canonical basis forms an almost orthonormal basis. **Example 6.1.1**.: 1. The \(\mathbf{U}\)-module structure of \(V(\varpi_{1})\) is as follows: \[E_{i}b_{(a)}=\delta_{a,i+1}b_{(i)},\] \[F_{i}b_{(a)}=\delta_{a,i}b_{(i+1)},\] \[D_{k}b_{(a)}=q^{\delta_{a,k}}b_{(a)}.\] 2. 
The \(\mathbf{U}\)-module structure of \(V(\varpi_{2n-1})\) is as follows (recall the map \(\cdot^{\vee}\) from (4.1)): \[E_{i}b_{a^{\vee}}=\delta_{a,i}b_{(i+1)^{\vee}},\] \[F_{i}b_{a^{\vee}}=\delta_{a,i+1}b_{i^{\vee}},\] \[D_{k}b_{a^{\vee}}=q^{1-\delta_{a,k}}b_{a^{\vee}}.\] **Proposition 6.1.2** (_cf. [12]_).: 1. _Let_ \(l\in[1,2n]\)_. Then, there exists an injective_ \(\mathbf{U}\)_-module homomorphism_ \[\bigvee:V(\varpi_{l})\to V(\varpi_{1})\otimes V(\varpi_{l-1})\] _such that_ \[\bigvee(b_{(a_{1},\dots,a_{l})})\equiv b_{(a_{l})}\otimes b_{(a_{1},\dots,a_{ l-1})}\] _for all_ \((a_{1},\dots,a_{l})\in\operatorname{SST}_{2n}(\varpi_{l})\) 2. _Let_ \(k^{\prime}\in[0,2n-1]\)_. Then, there exists a_ \(\mathbf{U}\)_-module homomorphism_ \[\bigwedge:V(\varpi_{1})\otimes V(\varpi_{k^{\prime}})\to V(\varpi_{k^{\prime}+1})\] _such that_ \[\bigwedge(b_{(a)}\otimes b_{(a_{1},\ldots,a_{k^{\prime}})})\equiv\begin{cases}b_ {(a_{1},\ldots,a_{k^{\prime}},a)}&\text{ if }a>a_{k^{\prime}},\\ 0&\text{ if }a\leq a_{k^{\prime}}.\end{cases}\] _for all_ \(a\in[1,2n]\) _and_ \((a_{1},\ldots,a_{k^{\prime}})\in\mathrm{SST}_{2n}(\varpi_{k^{\prime}})\)_._ 3. _Let_ \(k,l\in[0,2n]\)_. Then, there exists a_ \(\mathbf{U}\)_-module isomorphism_ \[R=R_{k,l}:V(\varpi_{k})\otimes V(\varpi_{l})\to V(\varpi_{l})\otimes V(\varpi_ {k})\] _such that_ \[R(b_{\mathbf{a}}\otimes b_{\mathbf{b}})\equiv b_{\mathbf{c}}\otimes b_{ \mathbf{d}}\] _for all_ \(\mathbf{a}\in\mathrm{SST}_{2n}(\varpi_{k})\) _and_ \(\mathbf{b}\in\mathrm{SST}_{2n}(\varpi_{l})\)_, where_ \[(\mathbf{c},\mathbf{d}):=R(\mathbf{a},\mathbf{b}).\] 4. _Let_ \(l\in[0,2n]\) _and_ \(\mu\in\mathrm{Par}_{\leq 2n}\)_. Then, there exists a_ \(\mathbf{U}\)_-module isomorphism_ \[*:V(\varpi_{l})\otimes V(\mu)\to\bigoplus_{\begin{subarray}{c}\lambda\in \mathrm{Par}_{\leq 2n}\\ \mu\;\subseteq\;\lambda\end{subarray}}V(\lambda)\] _such that_ \[*(b_{S}\otimes b_{T})\equiv b_{S*T}\] _for all_ \((S,T)\in\mathrm{SST}_{2n}(\varpi_{l})\times\mathrm{SST}_{2n}(\mu)\)_._ 5. _Let_ \(\lambda\in\mathrm{Par}_{\leq 2n}\)_, and set_ \(l:=\ell(\lambda)\)_. Let_ \(\lambda^{\prime}\) _denote the partition_ \((\lambda_{1}-1,\ldots,\lambda_{l}-1)\)_. Then, there exists a_ \(\mathbf{U}\)_-module homomorphism_ \[d:V(\lambda)\to V(\varpi_{l})\otimes V(\lambda^{\prime})\] _such that_ \[d(b_{T})\equiv b_{\mathbf{a}}\otimes b_{T^{\prime}}\] _for all_ \(T\in\mathrm{SST}_{2n}(\lambda)\)_, where_ \[(\mathbf{a},T^{\prime}):=d(T).\] ### Quantum symmetric pair of type \(A\mathrm{II}\) For each \(i\in[1,n]\), set \[B_{2i}:=F_{2i}-q[E_{2i-1},[E_{2i+1},E_{2i}]_{q^{-1}}]_{q^{-1}}K_{2i}^{-1}\in \mathbf{U}.\] Here, \([,]_{q^{-1}}\) denotes the \(q\)-commutator given by \[[x,y]_{q^{-1}}:=xy-q^{-1}yx.\] Let \(\mathbf{U}^{\mathrm{\imath}}\) denote the subalgebra of \(\mathbf{U}\) generated by \[\{E_{2i-1},F_{2i-1},K_{2i-1}^{\pm 1}\mid i\in[1,n]\}\sqcup\{B_{2i}\mid i\in[1,n- 1]\}.\] The pair \((\mathbf{U},\mathbf{U}^{\mathrm{\imath}})\) forms a quantum symmetric pair of type \(A\mathrm{II}_{2n-1}\). 
A \(\mathbf{U}^{\mathrm{\imath}}\)-module \(M\) is said to be a _weight module_ if it admits a decomposition \[M=\bigoplus_{\mathbf{z}\in\mathbb{Z}^{n}}M_{\mathbf{z}}\] as a vector space such that \[M_{\mathbf{z}}=\{m\in M\mid K_{2i-1}m=q^{z_{2i-1}-z_{2i}}m\ \text{ for all }i\in[1,n]\}.\] The character of a finite-dimensional weight \(\mathbf{U}^{\mathrm{i}}\)-module \(M\) is the Laurent polynomial \(\operatorname{ch}_{\mathfrak{t}}M\in\mathbb{Z}[y_{1}^{\pm 1},\dots,y_{n}^{\pm 1}]\) given by \[\operatorname{ch}_{\mathfrak{t}}M=\sum_{\mathbf{z}\in\mathbb{Z}^{n}}(\dim M_{ \mathbf{z}})\mathbf{y}^{\mathbf{z}}.\] **Proposition 6.2.1**.: _Let \(M\) be a finite-dimensional weight \(\mathbf{U}\)-module. Then, we have_ \[\operatorname{ch}_{\mathfrak{t}}M=\operatorname{res}^{\mathfrak{t}}( \operatorname{ch}M).\] For each \(\nu\in\operatorname{Par}_{\leq n}\), there exists, up to isomorphism, a unique finite-dimensional irreducible \(\mathbf{U}^{\mathrm{i}}\)-module \(V^{\mathrm{i}}(\nu)\) such that \[\operatorname{ch}_{\mathfrak{t}}V^{\mathrm{i}}(\nu)=s_{\nu}^{Sp}(y_{1},\dots, y_{n})\] (_cf._[21, Proposition 3.3.9 and Corollary 4.3.2]). The anti-automorphism \(\rho\) on \(\mathbf{U}\) restricts to an anti-automorphism on \(\mathbf{U}^{\mathrm{i}}\). The notions of contragredient inner product, almost orthogonal basis, and binary relation \(\equiv\) are defined for \(\mathbf{U}^{\mathrm{i}}\)-modules as in the same way as \(\mathbf{U}\)-modules. Let \(M\) be a \(\mathbf{U}\)-module equipped with a contragredient inner product. When we regard the \(\mathbf{U}\)-module \(M\) as a \(\mathbf{U}^{\mathrm{i}}\)-module by restriction, the inner product is still contragredient. In particular, the irreducible \(\mathbf{U}\)-module \(V(\lambda)\), regarded as a \(\mathbf{U}^{\mathrm{i}}\)-module, admits a contragredient inner product. Hence, it is completely reducible. **Proposition 6.2.2**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\). Then, the multiplicity of \(V^{\mathrm{i}}(\nu)\) in \(V(\lambda)\) coincides with \(m_{\lambda,\nu}\)._ Proof.: Let us write \[V(\lambda)\simeq\bigoplus_{\nu\in\operatorname{Par}_{\leq n}}V^{\mathrm{i}}( \nu)^{m^{\prime}_{\lambda,\nu}}\] for some \(m^{\prime}_{\lambda,\nu}\geq 0\). Then, we have \[\sum_{\nu\in\operatorname{Par}_{\leq n}}m^{\prime}_{\lambda,\nu}s_{\nu}^{Sp}( y_{1},\dots,y_{n})=\operatorname{ch}_{\mathfrak{t}}V(\lambda)=\operatorname{ res}^{\mathfrak{t}}(s_{\lambda}(x_{1},\dots,x_{2n}))=\operatorname{ch}_{ \mathfrak{t}}V^{\mathfrak{g}}(\lambda),\] where the second equality follows from Proposition 6.2.1. Now, the assertion follows from Proposition 5.3.1 and the linearly independence of the symplectic Schur functions. **Lemma 6.2.3**.: _There exists a \(\mathbf{U}^{\mathrm{i}}\)-module isomorphism_ \[K:V(\varpi_{1})\to V(\varpi_{2n-1})\] _such that_ \[K(b_{(a)})=b_{s(a)^{\vee}}\ \text{ for all }a\in[1,2n].\] Proof.: Clearly, the linear map \(K\) defined as above is an isomorphism. Using Example 6.1.1, one can straightforwardly verify that \[K(xb_{(a)})=xb_{s(a)^{\vee}}\ \text{ for all }x\in\mathbf{U}^{\mathrm{i}} \text{ and }a\in[1,2n].\] Hence, the map \(K\) is a \(\mathbf{U}^{\mathrm{i}}\)-module isomorphism. 
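For instance, when \(2n=4\) we have \(s(1)=2\), \(s(2)=1\), \(s(3)=4\), and \(s(4)=3\) by (4.4), so the isomorphism of Lemma 6.2.3 sends \[b_{(1)}\mapsto b_{(1,3,4)},\quad b_{(2)}\mapsto b_{(2,3,4)},\quad b_{(3)}\mapsto b_{(1,2,3)},\quad b_{(4)}\mapsto b_{(1,2,4)},\] where we identify each \(a^{\vee}\) of (4.1) with the corresponding single-column tableau.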
**Lemma 6.2.4**.: _There exists a \(\mathbf{U}^{\mathrm{i}}\)-module homomorphism_ \[\pi:V(\varpi_{2})\to V(\varpi_{2})\oplus V(\varpi_{0})\] _such that_ \[\pi(b_{(a_{1},a_{2})})\equiv\begin{cases}b_{()}&\text{ if }(a_{1},a_{2})=(1,2), \\ b_{(a_{1},a_{2})}&\text{ if }(a_{1},a_{2})\neq(1,2).\end{cases}\] _for all \(1\leq a_{1}<a_{2}\leq 2n\)._ Proof.: The assertion follows from [23, Theorem 4.3.1]. Alternatively, one can verify that the vector \(w_{0}:=b_{(1,2)}-q^{-2}b_{(3,4)}\in V(\varpi_{2})\) spans the \(\mathbf{U}^{\imath}\)-submodule isomorphic to \(V(\varpi_{0})\) (_cf._[23, Lemma 4.1.4]). ### Reduction map and successor map **Definition 6.3.1**.: The _reduction map_ is the \(\mathbf{U}^{\imath}\)-module homomorphism \[\operatorname{red}=\operatorname{red}_{l}:V(\varpi_{l})\to\bigoplus_{ \begin{subarray}{c}0\leq k\leq\min(l,2n-l)\\ l-k\in 2\mathbb{Z}\end{subarray}}V(\varpi_{k})\] defined inductively as follows. 1. If \(l\leq 1\), then the reduction map is the identity map. 2. If \(l>1\), then the reduction map is the composite \[\pi\circ\bigwedge\circ(K^{-1}\otimes\operatorname{id})\circ R\circ( \operatorname{red}_{l-1}\otimes\operatorname{id})\circ R\circ(K\otimes \operatorname{id})\circ\bigvee\] **Proposition 6.3.2**.: _For each \(\mathbf{a}\in\operatorname{SST}_{2n}(\varpi_{l})\), we have_ \[\operatorname{red}(b_{\mathbf{a}})\equiv b_{\operatorname{red}(\mathbf{a})}.\] Proof.: The assertion follows from Corollary 4.4.2 (1), Proposition 6.1.2, and Lemmas 6.2.3 and 6.2.4. **Definition 6.3.3**.: Let \(\lambda=(\lambda_{1},\dots,\lambda_{l})\in\operatorname{Par}_{\leq 2n}\). Set \(\lambda^{\prime}:=(\lambda_{1}-1,\dots,\lambda_{l}-1)\). The _successor map_ is the \(\mathbf{U}^{\imath}\)-module homomorphism \[\operatorname{suc}:V(\lambda)\to\bigoplus_{\begin{subarray}{c}\lambda^{ \prime}\subseteq\mu\\ \text{\tiny vert}\end{subarray}}V(\mu)\] defined to be the composite \[\operatorname{suc}:=*\circ(\operatorname{red}\otimes\operatorname{id})\circ d.\] **Proposition 6.3.4**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(T\in\operatorname{SST}_{2n}(\lambda)\). Then, we have_ \[\operatorname{suc}(b_{T})\equiv b_{\operatorname{suc}(T)}.\] Proof.: The assertion follows from Propositions 6.1.2 and 6.3.2. ### An orthonormal basis of \(V^{\imath}(\nu)\) **Proposition 6.4.1**.: _Let \(\nu\in\operatorname{Par}_{\leq n}\). Then, there exists a basis \(\{b_{T}^{\imath}\mid T\in Sp\mathrm{T}_{2n}(\nu)\}\) of \(V^{\imath}(\nu)\) and a \(\mathbf{U}^{\imath}\)-module homomorphism_ \[p_{\nu}:V(\nu)\to V^{\imath}(\nu)\] _such that_ \[p_{\nu}(b_{T})\equiv\begin{cases}b_{T}^{\imath}&\text{ if }T\in Sp\mathrm{T}_{2 n}(\nu),\\ 0&\text{ if }T\notin Sp\mathrm{T}_{2n}(\nu),\end{cases}\] _for all \(T\in\operatorname{SST}_{2n}(\nu)\)._ Proof.: Let \(p_{0}:V(\nu)\to V(\nu)\) and \(p_{1}:V(\nu)\to\bigoplus_{\xi\subseteq\nu}V(\xi)\) denote the composite of the successor map \(\operatorname{suc}:V(\nu)\to\bigoplus_{\xi}V(\xi)\) followed by the projections, respectively. 
Then, by Corollary 4.3.8, we have \[p_{0}(b_{T})\equiv\begin{cases}b_{T}&\text{ if }T\in Sp\mathrm{T}_{2n}(\nu),\\ 0&\text{ if }T\notin Sp\mathrm{T}_{2n}(\nu),\end{cases}\quad p_{1}(b_{T})\equiv\begin{cases}0&\text{ if }T\in Sp\mathrm{T}_{2n}(\nu),\\ b_{\operatorname{suc}(T)}&\text{ if }T\notin Sp\mathrm{T}_{2n}(\nu).\end{cases}\] This implies that \(p_{0}(V(\nu))\) and \(p_{1}(V(\nu))\) contain linearly independent subsets \(\{p_{0}(b_{T})\mid T\in Sp\mathrm{T}_{2n}(\nu)\}\) and \(\{p_{1}(b_{T})\mid T\notin Sp\mathrm{T}_{2n}(\nu)\}\), respectively. Since the successor map on \(\operatorname{SST}_{2n}(\nu)\) is injective by Corollary 4.4.3, we see that they are bases of the two spaces. Now, we compute \[\operatorname{ch}_{\imath}p_{0}(V(\nu))=\sum_{T\in Sp\mathrm{T}_{2n}(\nu)}\mathbf{y}^{\operatorname{wt}^{Sp}(T)}=\operatorname{ch}_{\imath}V^{\imath}(\nu).\] This implies \(p_{0}(V(\nu))\simeq V^{\imath}(\nu)\). Thus, we complete the proof. ### Quantum Littlewood-Richardson map **Definition 6.5.1**.: The quantum Littlewood-Richardson map is the \(\mathbf{U}^{\imath}\)-module homomorphism \[\operatorname{LR}^{A\mathrm{II}}:V(\lambda)\to\bigoplus_{\begin{subarray}{c}\nu\in\operatorname{Par}_{\leq n}\\ \nu\subseteq\lambda\end{subarray}}V^{\imath}(\nu)\otimes\mathbb{Q}(q)\mathrm{Rec}_{2n}(\lambda/\nu)\] defined to be the composite of the \(\mathbf{U}^{\imath}\)-module homomorphism \[V(\lambda)\to\bigoplus_{\begin{subarray}{c}\nu\in\operatorname{Par}_{\leq n}\\ \nu\subseteq\lambda\end{subarray}}V(\nu)\otimes\mathbb{Q}(q)\mathrm{SST}_{2n}(\lambda)\] which sends \(b_{T}\) to \(\operatorname{suc}^{k}(b_{T})\otimes T\) for each \(T\in\mathrm{SST}_{2n}(\lambda)\), where \(k>0\) is such that \(\operatorname{suc}^{k}(S)=P^{A\mathrm{II}}(S)\) for all \(S\in\mathrm{SST}_{2n}(\lambda)\), and the sum of \(\mathbf{U}^{\imath}\)-homomorphisms of the form \[V(\nu)\otimes\mathbb{Q}(q)\mathrm{SST}_{2n}(\lambda)\to V^{\imath}(\nu)\otimes\mathbb{Q}(q)\mathrm{Rec}_{2n}(\lambda/\nu)\] which sends \(b_{S}\otimes T\) to \(\delta_{S,P^{A\mathrm{II}}(T)}p_{\nu}(b_{P^{A\mathrm{II}}(T)})\otimes Q^{A\mathrm{II}}(T)\) for each \(S\in\mathrm{SST}_{2n}(\nu)\) and \(T\in\mathrm{SST}_{2n}(\lambda)\). The following is immediate from the definition. **Proposition 6.5.2**.: _Let \(T\in\mathrm{SST}_{2n}(\lambda)\). Then, we have_ \[\operatorname{LR}^{A\mathrm{II}}(b_{T})\equiv b^{\imath}_{P^{A\mathrm{II}}(T)}\otimes Q^{A\mathrm{II}}(T).\] ## 7. Surjectivity and codomain of the Littlewood-Richardson map In this section, we complete the proof of the first assertion of Theorem 3.1.4. ### Injectivity and codomain of the Littlewood-Richardson map **Proposition 7.1.1**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(T\in\mathrm{SST}_{2n}(\lambda)\). Then, the tableau \(P^{A\mathrm{II}}(T)\) is symplectic. Consequently, we have_ \[\operatorname{LR}^{A\mathrm{II}}(T)\in Sp\mathrm{T}_{2n}(\nu)\times\mathrm{Rec}_{2n}(\lambda/\nu)\] _for some \(\nu\in\operatorname{Par}_{\leq n}\) such that \(\nu\subseteq\lambda\)._ Proof.: Since \(\operatorname{suc}(P^{A\mathrm{II}}(T))=P^{A\mathrm{II}}(T)\), it is symplectic by Corollary 4.3.8. Let \(\lambda,\mu\in\operatorname{Par}_{\leq 2n}\) be such that \(\mu\subseteq\lambda\).
Define a map \[\widetilde{\operatorname{suc}}:\operatorname{SST}_{2n}(\mu)\times\operatorname{Tab}(\lambda/\mu)\to\bigsqcup_{\begin{subarray}{c}\mu^{\prime}\in\operatorname{Par}_{\leq 2n}\\ \mu^{\prime}\underset{\text{\rm vert}}{\subseteq}\mu\end{subarray}}\operatorname{SST}_{2n}(\mu^{\prime})\times\operatorname{Tab}(\lambda/\mu^{\prime})\] as follows. Let \((S,R)\in\operatorname{SST}_{2n}(\mu)\times\operatorname{Tab}(\lambda/\mu)\). Set \[k:=\max\{R(i,j)\mid(i,j)\in D(\lambda/\mu)\},\] and \(\mu^{\prime}:=\operatorname{sh}(\operatorname{suc}(S))\). Then, \[\widetilde{\operatorname{suc}}(S,R)=(\operatorname{suc}(S),R^{\prime}),\] where \(R^{\prime}\in\operatorname{Tab}(\lambda/\mu^{\prime})\) is such that \[R^{\prime}(i,j)=\begin{cases}R(i,j)&\text{if }(i,j)\notin D(\mu),\\ k+1&\text{if }(i,j)\in D(\mu),\end{cases}\text{ for all }(i,j)\in D(\lambda/\mu^{\prime}).\] **Lemma 7.1.2**.: _Let \(\lambda,\mu\in\operatorname{Par}_{\leq 2n}\) be such that \(\mu\subseteq\lambda\). Then, the map \(\widetilde{\operatorname{suc}}\) on \(\operatorname{SST}_{2n}(\mu)\times\operatorname{Tab}(\lambda/\mu)\) is injective._ Proof.: Let \((S_{1},R_{1}),(S_{2},R_{2})\in\operatorname{SST}_{2n}(\mu)\times\operatorname{Tab}(\lambda/\mu)\) be such that \[(\operatorname{suc}(S_{1}),R^{\prime}_{1}):=\widetilde{\operatorname{suc}}(S_{1},R_{1})=\widetilde{\operatorname{suc}}(S_{2},R_{2})=(\operatorname{suc}(S_{2}),R^{\prime}_{2}).\] Then, for all \((i,j)\in D(\lambda/\mu)\), we have \[R_{1}(i,j)=R^{\prime}_{1}(i,j)=R^{\prime}_{2}(i,j)=R_{2}(i,j).\] This implies that \[R_{1}=R_{2}.\] Next, we show that \(S_{1}=S_{2}\). Let us write \[d(S_{i})=(\mathbf{a}_{i},S^{\prime}_{i})\ \text{ for each }i=1,2.\] Then, we have \[\operatorname{red}(\mathbf{a}_{1})*S^{\prime}_{1}=\operatorname{suc}(S_{1})=\operatorname{suc}(S_{2})=\operatorname{red}(\mathbf{a}_{2})*S^{\prime}_{2}.\] Since \(|\operatorname{sh}(S^{\prime}_{1})|=|\operatorname{sh}(S^{\prime}_{2})|\), we must have \[|\operatorname{red}(\mathbf{a}_{1})|=|\operatorname{red}(\mathbf{a}_{2})|.\] Then, by Proposition 2.3.4, we obtain \[(\operatorname{red}(\mathbf{a}_{1}),S^{\prime}_{1})=(\operatorname{red}(\mathbf{a}_{2}),S^{\prime}_{2}).\] Now, Corollary 4.4.2 (2) implies \[\mathbf{a}_{1}=\mathbf{a}_{2}.\] Hence, we deduce \[S_{1}=\mathbf{a}_{1}*S^{\prime}_{1}=\mathbf{a}_{2}*S^{\prime}_{2}=S_{2},\] as desired. **Proposition 7.1.3**.: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\). Then, the Littlewood-Richardson map on \(\operatorname{SST}_{2n}(\lambda)\) is injective._ Proof.: Let \(T_{1},T_{2}\in\operatorname{SST}_{2n}(\lambda)\) be such that \(\operatorname{LR}^{A\mathrm{II}}(T_{1})=\operatorname{LR}^{A\mathrm{II}}(T_{2})\). Let us write \[\operatorname{LR}^{A\mathrm{II}}(T_{1})=(P,Q),\] and set \(\nu:=\operatorname{sh}(P)\). First, note that \[\operatorname{LR}^{A\mathrm{II}}(T_{1})=\widetilde{\operatorname{suc}}^{k_{0}}(T_{1},Q^{0})\] for a sufficiently large \(k_{0}>0\), where \(Q^{0}\) denotes the unique tableau of shape \(\lambda/\lambda\).
For each \(k\in[0,k_{0}]\), let \(\nu^{k}\) denote the partition whose Young diagram is \[D(\nu^{k})=D(\nu)\sqcup\{(i,j)\in D(\lambda/\nu)\mid Q(i,j)>k\}.\] Then, we see that \[\operatorname{sh}(\operatorname{suc}^{k}(T_{1}))=\nu^{k}\ \text{ for all }k\in[0,k_{0}].\] Since \(\operatorname{LR}^{A\mathrm{II}}(T_{2})=\operatorname{LR}^{A\mathrm{II}}(T_{1})=(P,Q)\), we also have \[\operatorname{sh}(\operatorname{suc}^{k}(T_{2}))=\nu^{k}\ \text{ for all }k\in[0,k_{0}].\] Applying Lemma 7.1.2, we deduce that \[\operatorname{suc}^{k}(T_{1})=\operatorname{suc}^{k}(T_{2})\ \text{ for all }k\in[0,k_{0}]\] by descending induction on \(k\). In particular, we obtain \[T_{1}=\operatorname{suc}^{0}(T_{1})=\operatorname{suc}^{0}(T_{2})=T_{2},\] as desired. ### Surjectivity of the Littlewood-Richardson map **Theorem 7.2.1**.: _The quantum Littlewood-Richardson map on \(V(\lambda)\) is a \(\mathbf{U}^{\imath}\)-module isomorphism._ Proof.: Let \(Q\in\operatorname{Rec}_{2n}(\lambda/\nu)\). Then, there exists \(T\in\operatorname{SST}_{2n}(\lambda)\) such that \(Q^{A\mathrm{II}}(T)=Q\). By Proposition 6.5.2, we have \[\operatorname{LR}^{A\mathrm{II}}(b_{T})\equiv b_{P^{A\mathrm{II}}(T)}\otimes Q\neq 0.\] Since the summand \(V^{\imath}(\nu)\otimes\mathbb{Q}(q)Q\) is an irreducible \(\mathbf{U}^{\imath}\)-module, we see that it is contained in the image of \(\operatorname{LR}^{A\mathrm{II}}\). Since the recording tableau \(Q\) is arbitrary, the map \(\operatorname{LR}^{A\mathrm{II}}\) on \(V(\lambda)\) is surjective. The injectivity follows from that of the Littlewood-Richardson map on \(\operatorname{SST}_{2n}(\lambda)\) and Proposition 6.5.2. **Corollary 7.2.2**.: _The Littlewood-Richardson map on \(\operatorname{SST}_{2n}(\lambda)\) is bijective._ ## 8. Characterization of the recording tableaux In this section, we prove the second assertion of Theorem 3.1.4. ### Some properties of the successor map Let \(\lambda=(\lambda_{1},\ldots,\lambda_{l})\in\operatorname{Par}_{\leq 2n}\) and \(T\in\operatorname{SST}_{2n}(\lambda)\). Let us write \(d(T)=(\mathbf{a},S)\), \(\mathbf{a}=(a_{1},\ldots,a_{l})\), and \(\operatorname{red}(\mathbf{a})=\mathbf{b}=(b_{1},\ldots,b_{k})\). For each \(t\in[0,k]\), set \[S_{t}:=\begin{cases}S&\text{ if }t=0,\\ b_{t}\to S_{t-1}&\text{ if }t>0,\end{cases}\] and \(\lambda^{t}:=\operatorname{sh}(S_{t})\). Let us write \[\operatorname{br}(b_{t},S_{t-1})=(r_{t,1},r_{t,2},\ldots,r_{t,s_{t}}).\] Also, let \(r_{t,0}\in[1,l]\) be such that \[a_{r_{t,0}}=b_{t}.\] Set \[\operatorname{br}_{\leq t}:=\{(r_{u,j},j)\mid u\in[1,t],\ j\in[1,s_{u}]\}.\] Set \(T^{\prime}:=\operatorname{suc}(T)\), \(\mu:=\operatorname{sh}(T^{\prime})\), and \(l^{\prime}:=\ell(\mu)\). Define \(\mathbf{a}^{\prime}=(a^{\prime}_{1},\ldots,a^{\prime}_{l^{\prime}})\), \(S^{\prime}\), \(\mathbf{b}^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{k^{\prime}})\), \(S^{\prime}_{t^{\prime}}\), \(\mu^{\prime}\), \((r^{\prime}_{t^{\prime},1},\ldots,r^{\prime}_{t^{\prime},s^{\prime}_{t^{\prime}}})\) for \(t^{\prime}\in[1,k^{\prime}]\) in the same way as before. **Lemma 8.1.1**.: _Let \(t\in[1,k]\) be such that \(r_{t,0}=r_{t,1}\) and \(a^{\prime}_{r_{t,1}+1}=a^{\prime}_{r_{t,1}}+1\). Then, we have \(a_{r_{t,0}+1}=a_{r_{t,0}}+1\)._ Proof.: By Lemma 2.6.3 (2), we have \[a^{\prime}_{r_{t,1}}=T^{\prime}(r_{t,1},1)=T(r_{t,0},1)=a_{r_{t,0}}.\] First, suppose that \((r_{t,1}+1,1)\notin\operatorname{br}_{\leq k}\).
Then, we have \[T^{\prime}(r_{t,1}+1,1)=T(r_{t,1}+1,2).\] Hence, we deduce that \[a^{\prime}_{r_{t,1}+1}=T^{\prime}(r_{t,1}+1,1)=T(r_{t,1}+1,2)\geq T(r_{t,1}+1,1)=a_{r_{t,1}+1}.\] This implies that \[a_{r_{t,0}+1}=a_{r_{t,1}+1}\leq a^{\prime}_{r_{t,1}+1}=a^{\prime}_{r_{t,1}}+1 =a_{r_{t,0}}+1.\] Therefore, the assertion follows. Next, suppose that \((r_{t,1}+1,1)\in\operatorname{br}_{\leq k}\). Let us write \(r_{t,1}+1=r_{u,1}\) for some \(u\in[1,k]\). Then, we have \[T^{\prime}(r_{t,1}+1,1)=T^{\prime}(r_{u,1},1)=T(r_{u,0},1).\] This implies that \[a^{\prime}_{r_{t,1}+1}=T^{\prime}(r_{t,1}+1,1)=T(r_{u,0},1)=a_{r_{u,0}}.\] Since we have \[r_{1,1}<r_{2,1}<\cdots<r_{k,1}\] by Lemma 2.6.3 (4), it must hold that \(u>t\). Noting that \(a_{r_{u,0}}>a_{r_{t,0}}\), we obtain as before that \[a_{r_{u,0}}=a_{r_{t,0}}+1.\] This implies that \(r_{u,0}=r_{t,0}+1\) since \((a_{1},\ldots,a_{k})\) is a strictly increasing sequence. Hence, the assertion follows. **Lemma 8.1.2**.: _Let \(t\in[0,k]\). Then, we have_ \[\sharp\{t^{\prime}\in[1,k^{\prime}]\mid r^{\prime}_{t^{\prime},0}\leq r_{t,1} \}\geq t,\] _where we set \(r_{0,1}:=0\). In particular, it holds that \(k^{\prime}\geq k\)._ Proof.: For each \(t\in[0,k]\), set \[N(t):=\sharp\{t^{\prime}\in[1,k^{\prime}]\mid r^{\prime}_{t^{\prime},0}\leq r_{t,1 }\}.\] Note that we have \[N(t)=\sharp\{i\in[1,r_{t,1}]\mid a^{\prime}_{i}\notin\operatorname{rem}( \mathbf{a}^{\prime})\}=r_{t,1}-|(a^{\prime}_{1},\ldots,a^{\prime}_{r_{t,1}}) \cap\operatorname{rem}(\mathbf{a}^{\prime})|.\] Let us prove the assertion by induction on \(t\). When \(t=0\), the assertion is trivial. Assume that \(t>0\) and the assertion holds for \(0,1,\ldots,t-1\). First, suppose that \(a^{\prime}_{r_{t,1}}\notin\operatorname{rem}(\mathbf{a}^{\prime})\). Then, we have \[N(t)\geq N(t-1)+1.\] Hence, we deduce from our induction hypothesis that \[N(t)\geq N(t-1)+1\geq(t-1)+1=t,\] as desired. Next, suppose that \(a^{\prime}_{r_{t,1}}\in\operatorname{rem}(\mathbf{a}^{\prime})\) and \(a^{\prime}_{r_{t,1}}\notin 2\mathbb{Z}\). Then, by Proposition 4.2.2, we must have \(r_{t,1}<l\), \(a^{\prime}_{r_{t,1}+1}=a^{\prime}_{r_{t,1}}+1\), and \[a^{\prime}_{r_{t,1}}<2r_{t,1}-|\operatorname{rem}(a^{\prime}_{1},\ldots,a^{ \prime}_{r_{t,1}-1})|.\] Also, by Corollary 4.2.8, we have \[|\operatorname{rem}(a^{\prime}_{1},\ldots,a^{\prime}_{r_{t,1}-1})|\geq 2r_{t,1}- a^{\prime}_{r_{t,1}}-1.\] Hence, we obtain \[|\operatorname{rem}(a^{\prime}_{1},\ldots,a^{\prime}_{r_{t,1}-1})|=2r_{t,1}- a^{\prime}_{r_{t,1}}-1.\] Therefore, \[N(t)=r_{t,1}-(|\operatorname{rem}(a^{\prime}_{1},\ldots,a^{\prime}_{r_{t,1}-1 })|+1)=a^{\prime}_{r_{t,1}}-r_{t,1}.\] On the other hand, by Corollary 4.2.8 again, we have \[|\operatorname{rem}(a_{1},\ldots,a_{r_{t,0}-1})|\geq 2r_{t,0}-a_{r_{t,0}}-1.\] Also, since \(a_{r_{t,0}}\notin\operatorname{rem}(\mathbf{a})\), it holds that \[r_{t,0}-|\operatorname{rem}(a_{1},\ldots,a_{r_{t,0}-1})|=t.\] Hence, \[a_{r_{t,0}}-r_{t,0}\geq t-1.\] Combining above, we obtain \[N(t)=a^{\prime}_{r_{t,1}}-r_{t,0}+(r_{t,0}-r_{t,1})\geq t+(r_{t,0}-r_{t,1})-1.\] Therefore, the assertion follows when \(r_{t,0}>r_{t,1}\). Hence, we only need to consider the case when \(r_{t,0}=r_{t,1}\). In this case, Lemma 8.1.1 implies \[a_{r_{t,0}+1}=a_{r_{t,0}}+1.\] Since \(a_{r_{t,0}}\notin\operatorname{rem}(\mathbf{a})\), we have \[a_{r_{t,0}}\geq 2r_{t,0}-|\operatorname{rem}(a_{1},\ldots,a_{r_{t,0}-1})|=r_{t,0 }+t.\] Now, we deduce \[N(t)=a^{\prime}_{r_{t,1}}-r_{t,1}=a_{r_{t,0}}-r_{t,0}\geq t,\] as desired. 
Finally, suppose that \(a^{\prime}_{r_{t,1}}\in\operatorname{rem}(\mathbf{a}^{\prime})\) and \(a^{\prime}_{r_{t,1}}\in 2\mathbb{Z}\). In this case, one can deduce the assertion in a similar way to the previous case. Thus, we complete the proof. **Lemma 8.1.3**.: _Let \(t\in[0,k]\). Then, we have \(s^{\prime}_{t}\geq s_{t}\) and \(r^{\prime}_{t,j-1}\leq r_{t,j}\) for all \(j\in[1,s_{t}]\), where we set \(s_{0}=s^{\prime}_{0}=0\)._ Proof.: We proceed by induction on \(t\). The case when \(t=0\) is trivial. Hence, assume that \(t>0\) and the assertion holds for \(0,1,\ldots,t-1\). We prove the assertion by induction on \(j\). By lemma 8.1.2 and the fact that \(r^{\prime}_{1,0}<r^{\prime}_{2,0}<\cdots<r^{\prime}_{k^{\prime},0}\), we obtain \[r^{\prime}_{t,0}\leq r_{t,1}.\] Now, assume that \(j>1\) and we have \(s^{\prime}_{t},s_{t}\geq j-1\) and \(r^{\prime}_{t,j-2}\leq r_{t,j-1}\). When \(s_{t}=j-1\), there is nothing to prove. Hence, assume that \(s_{t}\geq j\). By Lemma 2.6.3, we have \[r^{\prime}_{t,j-1}=\min\{r^{\prime}\in[1,\operatorname{col}_{j-1}(\mu^{t-1})+ 1]\mid T^{\prime}(r^{\prime},j)\geq T^{\prime}(r^{\prime}_{t,j-2},j-1)\}.\] Note that our induction hypothesis implies that \[r^{\prime}_{t-1,j-1}\leq r_{t-1,j}.\] The right-hand side is less than \(r_{t,j}\) by Lemma 2.6.3 (4). Also we have \[r_{t,j}\leq\operatorname{col}_{j}(\lambda^{t-1})+1,\] and \[\operatorname{col}_{j}(\lambda^{t-1})\leq\operatorname{col}_{j}(\lambda^{k}) =\operatorname{col}_{j}(\mu^{0})\leq\operatorname{col}_{j-1}(\mu^{0})\leq \operatorname{col}_{j-1}(\mu^{t-1}).\] Observe that \[T^{\prime}(r_{t,j},j)=T(r_{t,j-1},j),\] \[T^{\prime}(r^{\prime}_{t,j-2},j-1)=\begin{cases}T(r_{u,j-2},j-1)&\text{ if }r^{\prime}_{t,j-2}=r_{u,j-1}\text{ for some }u\in[1,k],\\ T(r^{\prime}_{t,j-2},j)&\text{ otherwise,}\end{cases}\] and \[T(r_{u,j-2},j-1)\leq T(r_{u,j-1},j)\leq T(r_{t,j-1},j),\] \[T(r^{\prime}_{t,j-2},j)\leq T(r_{t,j-1},j).\] By above, we obtain \(r^{\prime}_{t,j-1}\leq r_{t,j}\) and \(s^{\prime}_{t}\geq j\), as desired. **Proposition 8.1.4**.: _Let \(r\in[0,l]\). Then, we have_ \[\sharp\{t\in[1,k]\mid r_{t,s_{t}}\leq r\}\leq\sharp\{t^{\prime}\in[1,k^{\prime }]\mid r^{\prime}_{t^{\prime},s_{t^{\prime}}}\leq r\}.\] Proof.: For each \(t\in[1,k]\), we have \(s^{\prime}_{t}\geq s_{t}\) and \(r^{\prime}_{t,s_{t}-1}\leq r_{t,s_{t}}\). Hence, if \(r_{t,s_{t}}\leq r\), then \[r^{\prime}_{t,s^{\prime}_{t}}\leq r^{\prime}_{t,s_{t}-1}\leq r_{t,s_{t}}\leq r.\] Thus, the assertion follows. ### Symplectic Littlewood-Richardson tableaux **Definition 8.2.1**.: Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(\nu\in\operatorname{Par}_{\leq n}\). A tableau \(T\in\operatorname{Tab}(\lambda/\nu)\) is said to be a _symplectic Littlewood-Richardson tableau_ if it satisfies the following. 1. \(T\in\operatorname{SST}_{2n}(\lambda/\nu)\). 2. Let \((w_{1},\ldots,w_{N})\) denote the column-word \(w^{\operatorname{col}}(T)\) of \(T\). Then, the reversed word \((w_{N},\ldots,w_{1})\) is a lattice permutation: For each \(r\in[1,N]\) and \(k\in[1,2n-1]\), the number of occurrences of \(k\) in the subsequence \((w_{N},\ldots,w_{r})\) is greater than or equal to that of \(k+1\). 3. The sequence \(\operatorname{wt}(T)=(T[1],T[2],\ldots,T[2n])\) is a partition which has even columns (see Definition 2.1.1). 4. If \(T(i,j)=2k+1\) for some \((i,j)\in D(\lambda/\nu)\) and \(k\in\mathbb{Z}_{\geq 0}\), then we have \(i\leq n+k\). 
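The purely combinatorial conditions (2)-(4) of Definition 8.2.1 can be checked mechanically. The following Python sketch is ours and is only illustrative: the tableau is passed as a dictionary from cells \((i,j)\) to entries, the column word is supplied by the caller (so no assumption is made here about the reading convention for \(w^{\operatorname{col}}\)), \(T[a]\) is taken to be the number of occurrences of the letter \(a\) in \(T\), and "even columns" is read as "every column of the Young diagram has even length" (cf. Definition 2.1.1, which is not reproduced here).

```python
from collections import Counter

def is_lattice_word(word, max_letter):
    """In every prefix, the letter k occurs at least as often as k + 1."""
    counts = Counter()
    for w in word:
        counts[w] += 1
        if any(counts[k] < counts[k + 1] for k in range(1, max_letter)):
            return False
    return True

def has_even_columns(partition):
    """Every column of the Young diagram of `partition` has even length."""
    parts = [p for p in partition if p > 0]
    ncols = parts[0] if parts else 0
    return all(sum(1 for p in parts if p >= j) % 2 == 0 for j in range(1, ncols + 1))

def is_symplectic_LR(entries, reversed_column_word, n):
    """Conditions (2)-(4) of Definition 8.2.1 for a tableau given as
    entries = {(i, j): value}; semistandardness (condition (1)) is assumed."""
    # (2) the reversed column word is a lattice permutation
    if not is_lattice_word(reversed_column_word, 2 * n):
        return False
    # (3) wt(T) is a partition with even columns
    wt = [sum(1 for v in entries.values() if v == a) for a in range(1, 2 * n + 1)]
    if any(wt[a] < wt[a + 1] for a in range(2 * n - 1)):
        return False  # not weakly decreasing, hence not a partition
    if not has_even_columns(wt):
        return False
    # (4) an odd entry 2k + 1 may only occur in rows i <= n + k
    return all(i <= n + (v - 1) // 2 for (i, _), v in entries.items() if v % 2 == 1)
```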
Let \(\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)\) denote the set of all symplectic Littlewood-Richardson tableaux of shape \(\lambda/\nu\). **Remark 8.2.2**.: Let \(T\in\operatorname{SST}_{2n}(\lambda/\nu)\) be such that the sequence \(\operatorname{wt}(T)\) is a partition which has even columns. Then, we have \(T\in\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)\) if and only if the reversed column-word _fits \(\lambda/\nu\)\(n\)-symplectically_ in the sense of [15, Definition 3.9]. The symplectic Littlewood-Richardson tableaux provide us a branching rule: **Theorem 8.2.3** (_cf._[15, Corollary 3.12]).: _Let \(\lambda\in\operatorname{Par}_{\leq 2n}\) and \(\nu\in\operatorname{Par}_{\leq n}\) be such that \(\nu\subseteq\lambda\). Then, we have_ \[m_{\lambda,\nu}=|\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)|.\] ### Characterization of the recording tableaux Let \(\lambda\in\operatorname{Par}_{\leq 2n}\), \(\nu\in\operatorname{Par}_{\leq n}\) be such that \(\nu\subseteq\lambda\). **Lemma 8.3.1**.: _We have_ \[\operatorname{Rec}_{2n}(\lambda/\nu)\subseteq\widetilde{\operatorname{Rec}}_{2 n}(\lambda/\nu).\] Proof.: Let \(Q\in\operatorname{Rec}_{2n}(\lambda/\nu)\). We only need to show that \(Q\in\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\), that is, to verify that the tableau \(Q\) satisfies conditions (R1)-(R5) in Section 3. By the definition of recording tableaux, there exists \(T\in\operatorname{SST}_{2n}(\lambda)\) such that \(Q^{\operatorname{All}}(T)=Q\). For each \(k\in\mathbb{Z}_{\geq 0}\), set \[P^{k}:=\begin{cases}T&\text{ if }k=0,\\ \operatorname{suc}(P^{k-1})&\text{ if }k>0,\end{cases}\quad\nu^{k}:= \operatorname{sh}(P^{k}).\] Let us write \[d(P^{k})=(\mathbf{a}^{k},S^{k}),\ \mathbf{a}^{k}=(a_{1}^{k},\ldots,a_{l_{k}}^{ k}),\ \operatorname{red}(\mathbf{a}^{k})=\mathbf{b}^{k}=(b_{1}^{k},\ldots,b_{l_{k}}^{ k}),\] and \[\operatorname{br}(b_{t}^{k},b_{t-1}^{k}\to(\cdots\to(b_{1}^{k}\to S^{k-1})))= (r_{t,1}^{k},r_{t,2}^{k},\ldots,r_{t,s_{t}^{k}}^{k}).\] First, let us verify conditions (R1) and (R2). Let \((i,j)\in D(\lambda/\nu)\) and write \(Q(i,j)=k\). Suppose that \((i,j+1)\in D(\lambda/\nu)\) (resp., \((i+1,j)\in D(\lambda/\nu)\)). Then, since \(\nu^{k}\underset{\text{vert}}{\subseteq}\nu^{k-1}\), we must have \((i,j+1)\notin D(\nu^{k-1})\) (resp., \((i+1,j)\notin D(\nu^{k})\)). This, together with Lemma 3.1.2, implies that \(Q(i,j+1)\leq k-1\) (resp., \(Q(i+1,j)\leq k\)), as desired. Next, let us verify conditions (R3) and (R4). Let \(k>0\). Then, we have \[Q[k]=|\nu^{k-1}/\nu^{k}|\] by Lemma 3.1.2. Since \(P^{k}=\operatorname{suc}(P^{k-1})=\operatorname{red}(\mathbf{a}^{k-1})*S^{k-1}\), it holds that \[|\nu^{k-1}/\nu^{k}|=|\operatorname{rem}(\mathbf{a}^{k-1})|.\] The right-hand side is even by Lemma 4.2.3, and is greater than or equal to \(2(\ell(\nu^{k-1})-n)\) by Proposition 4.2.7. Finally, let us verify condition (R5). Let \(r,k>0\). Then, we have \[Q_{\leq r}[k] =\sharp\{i\in[1,r]\mid Q(i,j)=k\ \ \text{for some }j\}\] \[=|[1,r]\setminus\{r^{k-1}_{t,s^{k-1}_{t}}\mid t\in[1,l^{\prime}_{ k-1}]\}|\] \[=r-\sharp\{t\in[1,l^{\prime}_{k-1}]\mid r^{k-1}_{t,s^{k-1}_{t}} \leq r\}.\] Hence, we deduce that \[Q_{\leq r}[k]-Q_{\leq r}[k+1]=\{t^{\prime}\in[1,l^{\prime}_{k}]\mid r^{k}_{t^{ \prime},s^{k}_{t^{\prime}}}\leq r\}-\{t\in[1,l^{\prime}_{k-1}]\mid r^{k-1}_{t, s^{k-1}_{t}}\leq r\}.\] The right-hand side is nonnegative by Proposition 8.1.4. Thus, we complete the proof. 
**Lemma 8.3.2**.: _Consider the map_ \[\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\to\operatorname{Tab}( \lambda/\nu)\] _which sends each \(R\in\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\) to the tableaux of shape \(\lambda/\nu\) whose \((i,j)\) entry is equal to \(R_{\leq i}[R(i,j)]\); the number of occurrences of \(R(i,j)\) in \(R\) in the \(i\)-th row or above. Then, it is injective and its image is contained in \(\operatorname{LRT}^{Sp}_{2n}(\lambda/\nu)\)._ Proof.: Let \(R\in\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\), and \(S\in\operatorname{Tab}(\lambda/\nu)\) be the image of \(R\) under the map above. We only need to show that the tableau \(S\) satisfies the conditions (1)-(4) in Definition 8.2.1. First, let us verify condition (1). By condition (R1), each \(k^{\prime}>0\) appears as an entry of \(R\) at most once in each row. In particular, we have \[R[k^{\prime}]\leq\ell(\lambda)\leq 2n\ \ \text{for all }k^{\prime}>0.\] This implies that the entries of \(S\) are in \([1,2n]\). Let \((i,j)\in D(\lambda/\nu)\) and set \(k:=R(i,j)\). Suppose that \((i,j+1)\in D(\lambda/\nu)\) (resp., \((i+1,j)\in D(\lambda/\nu)\)), and set \(k^{\prime}:=R(i,j+1)<k\) (resp., \(k^{\prime\prime}:=R(i+1,j)\leq k\)). Here, we used condition (R1) (resp., (R2)). Then, we have \[S(i,j+1)=R_{\leq i}[k^{\prime}]\geq R_{\leq i}[k]=S(i,j),\] (resp., \[S(i+1,j)=R_{\leq i+1}[k^{\prime\prime}]=R_{\leq i}[r^{\prime\prime}]+1\geq R_ {\leq i}[r]+1=S(i,j)+1,\] where the inequality follows from condition (R5). This implies that \(S\) is semistandard. Second, let us verify condition (2). Let us write \[w_{\operatorname{col}}(S)=(w_{1},\ldots,w_{N})\text{ and }w_{\operatorname{col}}(R) =(w^{\prime}_{1},\ldots,w^{\prime}_{N}).\] Let \(r\in[1,N]\) and \(k\in[1,2n]\). Then, we have \[\sharp\{t\in[r,N]\mid w_{t}=k\}=\sharp\{k^{\prime}>0\mid\sharp\{t^{\prime}\in [r,N]\mid w^{\prime}_{t^{\prime}}=k^{\prime}\}\geq k\}.\] The right-hand side decreases as \(k\) increases. Hence, the assertion follows. Next, let us verify condition (3). By condition (R5), we have \[R[1]\geq R[2]\geq\cdots.\] Then, we can consider the tableau whose \(j\)-th column consists of exactly \(R[j]\)\(j\)'s. Clearly, its shape is \(\operatorname{wt}(S)\). Noting that \(R[j]\in 2\mathbb{Z}\) by condition (R3), we see that the partition \(\operatorname{wt}(S)\) has even columns. Finally, let us verify condition (4). Let \((i,j)\in D(\lambda/\nu)\) and suppose that \(S(i,j)=2k+1\) for some \(k\geq 0\). When \(i\leq n\), there is nothing to prove. Hence, assume that \(i>n\). Set \(r:=R(i,j)\). Then, we have \(R_{\leq i}[r]=2k+1\). Let \(\mu\) be the partition whose Young diagram is given by \[D(\mu)=D(\nu)\sqcup\{(i,j)\in D(\lambda/\nu)\mid R(i,j)\geq r\}.\] We have \(R_{\leq i}[r]\geq 2(i-n)\). Otherwise, we obtain \[R[r]=R_{\leq\ell(\mu)}[r]\leq R_{\leq i}[r]+(\ell(\mu)-i)<2(i-n)+(\ell(\mu)-i) \leq 2(\ell(\mu)-i),\] which contradicts condition (R4). Now, we have \[2k+1=R_{\leq i}[r]\geq 2(i-n).\] This implies that \(2k\geq 2(i-n)\), and then the assertion follows. Thus, we complete the proof. 
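The map in Lemma 8.3.2 is easy to state operationally: the entry of the image tableau at \((i,j)\) is the number of times the value \(R(i,j)\) occurs in rows \(1,\ldots,i\) of \(R\). A minimal Python sketch, with tableaux encoded as dictionaries \(\{(i,j):\text{value}\}\) and rows indexed from \(1\); the toy input is for illustration only and is not claimed to satisfy (R1)-(R5):

```python
def row_counted_image(R):
    """S(i, j) = number of occurrences of R(i, j) in rows 1..i of R."""
    return {(i, j): sum(1 for (i2, _), v2 in R.items() if i2 <= i and v2 == v)
            for (i, j), v in R.items()}

R = {(1, 1): 2, (1, 2): 1, (2, 1): 2, (2, 2): 1}   # toy input, illustration only
print(row_counted_image(R))  # {(1, 1): 1, (1, 2): 1, (2, 1): 2, (2, 2): 2}
```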
**Theorem 8.3.3**.: _We have \(\operatorname{Rec}_{2n}(\lambda/\nu)=\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\)._ Proof.: By Lemmas 8.3.1 and 8.3.2, we have \[\operatorname{Rec}_{2n}(\lambda/\nu)\subseteq\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)\] and \[|\operatorname{Rec}_{2n}(\lambda/\nu)|\leq|\widetilde{\operatorname{Rec}}_{2n}(\lambda/\nu)|\leq|\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)|.\] Hence, we only need to show that \[|\operatorname{Rec}_{2n}(\lambda/\nu)|=|\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)|. \tag{8.1}\] Recall from Theorem 8.2.3 that \[|\operatorname{LRT}_{2n}^{Sp}(\lambda/\nu)|=m_{\lambda,\nu}.\] On the other hand, by Theorem 7.2.1 and Proposition 6.2.2, we obtain \[|\operatorname{Rec}_{2n}(\lambda/\nu)|=m_{\lambda,\nu}.\] Therefore, equation (8.1) holds. Thus, we complete the proof.
2303.00774
Systematically Measuring Ultra-Diffuse Galaxies (SMUDGes). IV. Ultra-Diffuse Satellites of Milky Way Analogs
To better understand the formation of large, low surface brightness galaxies, we measure the correlation function between ultra-diffuse galaxy (UDG) candidates and Milky Way analogs (MWAs). We find that (1) the projected radial distribution of UDG satellites (projected surface density $\propto r^{-0.84\pm0.06}$) is consistent with that of normal satellite galaxies, (2) the number of UDG satellites per MWA ($S_{\rm UDG}$) is $\sim 0.5\pm0.1$ over projected radii from 20 to 250 kpc and $-17< M_r < -13.5$, (3) $S_{\rm UDG}$ is consistent with a linear extrapolation of the relationship between the number of UDGs per halo vs. halo mass obtained over galaxy group and cluster scales, (4) red UDG satellites dominate the population of UDG satellites ($\sim80$%), (5) over the range of satellite magnitudes studied, UDG satellites comprise $\sim$ 10% of the satellite galaxy population of MWAs, (6) a significant fraction of these ($\sim$13%) have estimated total masses $>$ 10$^{10.9}$ M$_\odot$ or, equivalently, at least half the halo mass of the LMC, and populate a large fraction ($\sim$ 18%) of the expected subhalos down to these masses. All of these results suggest a close association between the overall low mass galaxy population and UDGs, which we interpret as favoring models where UDG formation principally occurs within the general context of low mass galaxy formation over models invoking more exotic physical processes specifically invoked to form UDGs.
Hina Goto, Dennis Zaritsky, Ananthan Karunakaran, Richard Donnerstein, David J. Sand
2023-03-01T19:01:54Z
http://arxiv.org/abs/2303.00774v1
Systematically Measuring Ultra-Diffuse Galaxies (SMUDGes). IV. Ultra-Diffuse Satellites of Milky Way Analogs ###### Abstract To better understand the formation of large, low surface brightness galaxies, we measure the correlation function between ultra-diffuse galaxy (UDG) candidates and Milky Way analogs (MWAs). We find that (1) the projected radial distribution of UDG satellites (projected surface density \(\propto r^{-0.84\pm 0.06}\)) is consistent with that of normal satellite galaxies, (2) the number of UDG satellites per MWA (\(S_{\rm UDG}\)) is \(\sim 0.5\pm 0.1\) over projected radii from 20 to 250 kpc and \(-17<M_{r}<-13.5\), (3) \(S_{\rm UDG}\) is consistent with a linear extrapolation of the relationship between the number of UDGs per halo vs. halo mass obtained over galaxy group and cluster scales, (4) red UDG satellites dominate the population of UDG satellites (\(\sim 80\%\)), (5) over the range of satellite magnitudes studied, UDG satellites comprise \(\sim 10\%\) of the satellite galaxy population of MWAs, (6) a significant fraction of these (\(\sim\)13%) have estimated total masses \(>10^{10.9}\) M\({}_{\odot}\) or, equivalently, at least half the halo mass of the LMC, and populate a large fraction (\(\sim 18\%\)) of the expected subhalos down to these masses. All of these results suggest a close association between the overall low mass galaxy population and UDGs, which we interpret as favoring models where UDG formation principally occurs within the general context of low mass galaxy formation over models invoking more exotic physical processes specifically invoked to form UDGs. Low surface brightness galaxies (940), Galaxy properties (615) 0000-0002-4882-8865]Hina Goto 0000-0002-4880-7885X]Dennis Zaritsky 0000-0002-4880-7885]Ananthan Karunakaran 0000-0001-8882-7885]Richard Donnerstein 0000-0002-4880-7885]David J. Sand ## 1 Introduction The success of the \(\Lambda\)CDM paradigm as a predictive framework for structure formation is nearly complete, with the only unresolved issues remaining at small galaxy (\(\ll\) L\({}^{*}\)) scales (for an overview see Weinberg et al., 2015). The study of low mass galaxies is then expected to highlight important baryonic physical evolutionary processes that may be missing in the simulations and, perhaps even more excitingly, potential departures from the canonical CDM phenomenology. The desire for progress on either of these two fronts has motivated significant efforts to improve the empirical study of low mass galaxy populations (such as the deep galaxy searches undertaken within a variety of nearby environments; Crnojevic et al., 2016; Park et al., 2017; Venhola et al., 2017; Ferrarese et al., 2020; La Marca et al., 2022). A particular class of low mass galaxy is that of the satellite galaxy. Satellites lie within an even larger halo of a parent or host system. We focus here on satellite galaxies lying within the halos of \(\sim\) L\({}^{*}\) galaxies that we refer to as Milky Way analogs (MWAs). Due to gravitational and hydrodynamical interactions with these parent galaxies, simulations suggest that the numbers, internal structure, and star formation histories of such satellites may have been altered relative to what they would have been for the same galaxies in isolation (for recent examples of such work see; Martin et al., 2021; Samuel et al., 2022). 
As such, there are a variety of reasons to compare samples of satellite galaxies to samples of similar mass galaxies that do not consist exclusively of satellites (e.g., that of Blanton et al., 2005). Broadly, there are two approaches used to search for satellite galaxies. In the first, which we refer to as "photometric", imaging is used to identify potential satellites around a set of selected parent galaxies. Typically, the redshifts are known for the parent galaxies but not for the candidate satellites. One measures the bulk satellite properties by evaluating the excess population of candidates projected in the vicinity of the parent galaxies (Holmberg, 1969; Lorrimer et al., 1994; Sales & Lambas, 2005; Guo et al., 2012; Wang & White, 2012; Sales et al., 2013). Because foreground and background objects generally greatly outnumber satellites, large samples are needed to tease out results. In this approach one is able, for example, to reach conclusions regarding the radial density profile of satellites around parents, but not to determine which of the many candidates are true satellites. With the advent of large area photometric surveys, this approach now brings statistical power to the questions at hand and subsamples can be defined to explore properties of the satellite galaxy population. In the second approach, which we refer to as "spectroscopic", one measures redshifts of the satellite candidates and identifies those sharing the parent's redshift to within allowances for different peculiar velocities (Zaritsky et al., 1993; Prada et al., 2003; Geha et al., 2017). This approach does then allow for follow-up of the true satellites and for an examination of the satellite kinematics but comes at great observational expense because it requires spectroscopy of many faint targets, most of which are contaminants. As such, it currently provides lower precision on the determination of certain bulk properties of satellites, such as the radial density profile, and is, of course, limited to satellites that are within spectroscopic reach. Both of these approaches are limited by the initial selection of the candidate satellites, which is based on imaging and will always suffer from a surface brightness selection effect (spectroscopic surveys suffer an additional surface brightness bias because of the further difficulty in obtaining the spectra). The recent appreciation that there are many fairly large galaxies, large both in physical size (effective radius, \(r_{e}\), \(>1.5\) kpc) and total luminosity (some as bright as \(M_{g}\sim-17\)), that have evaded detection due to their exceedingly low central surface brightness, and that such galaxies survive in dense environments (van Dokkum et al., 2015), leads to a suspicion that previous satellite samples may be missing satellites that are as massive as the Large Magellanic Cloud (e.g., DF 44 has M\({}_{200}=10^{11.2\pm 0.6}\) M\({}_{\odot}\); van Dokkum et al., 2019). This suspicion prompted the recent examination of the deepest available samples of satellite galaxies outside the Local Group (Carlsten et al., 2021; Mao et al., 2021; Nashimoto et al., 2022) by Karunakaran & Zaritsky (2023) for what those samples have to say regarding the existence of low surface brightness, physically-large galaxies (commonly referred to as ultra-diffuse galaxies or UDGs) as satellites of L\({}^{*}\), or MWA, galaxies. Their conclusion was that MWAs host proportionally, by total mass, nearly the same number of UDGs as do more massive halos. 
On its surface, this result suggests that UDG formation is neither enhanced nor inhibited in the galactic environment. Nevertheless, the sample of UDG satellites in that study consisted of only 41 confirmed1 satellites split among 75 parent galaxies, making it difficult to divide the sample into categories and address additional questions. Encouragingly, consistent conclusions regarding the mean number of such satellites for MWAs were presented by Li et al. (2022), who use an enhanced photometric approach that incorporates size and color to help remove contamination from a sample without spectroscopic follow-up. It is important to note that for most UDGs a spectroscopic approach is not feasible given that exposure times of \(\sim 1\) hour are needed on large telescopes to obtain a redshift (van Dokkum et al., 2015; Chilingarian et al., 2019; Kadowaki et al., 2021). Footnote 1: Seventeen confirmed with spectroscopic redshifts and 24 with distances measured using surface brightness fluctuations. We return to the photometric approach with a large sample of UDG candidates and focus on bulk UDG satellite properties. We undertake this study because there now exists a catalog of UDG candidates that spans nearly 20,000 sq. degree of sky and contains nearly 7,000 candidates (the SMUDGes catalog; Zaritsky et al., 2019, 2022, 2022). We aim to establish the radial number density profile, the luminosity function, and the color distribution of UDG satellites of MWAs and compare those to the corresponding measurements of the more classical satellite population. By doing so, we will present conclusions regarding plausible UDG formation and evolution models. We present the technical aspects of the approach in SS2 and our results and interpretation in SS3. We use a standard WMAP9 cosmology (Hinshaw et al., 2013), although the results are insensitive to different choices of cosmological parameters at the level of current uncertainties, and magnitudes are from SDSS/DESI and are thus on the AB system (Oke, 1964; Oke & Gunn, 1983). ## 2 Methodology We use the SIMBAD database (Wenger et al., 2000) to identify Milky Way analogs (MWAs) projected in proximity to each UDG candidate in our catalog that meets a minimum 20% estimated completeness (for details of the completeness calculation see Zaritsky et al., 2022, but this criterion corresponds to an estimate that we have found \(>20\%\) of the UDG candidates with similar photometric properties across the full survey footprint). We impose this cut to avoid having to make large, highly uncertain completeness corrections. Overall the completeness is roughly 50%, due mainly to aggressive masking of the survey area around bright foreground objects and regions of Galactic Cirrus. An important related concern is that the completeness is expected to fall dramatically near each MWA because those regions were masked in the original UDG search (Zaritsky et al., 2022). That incompleteness factor is included in an average sense in the catalog because we account for the masked area but is not mapped specifically around each MWA. As such, the distribution of pair separations will be increasingly incorrect at ever smaller separations, but, for the most part, we work at projected radii where we do not expect this to have an impact. Nevertheless, we search for signs of this effect in the results discussed below. 
Using the absolute magnitude of the Milky Way (M\({}_{g}=-21.0\); Bland-Hawthorn and Gerhard, 2016), we define MWAs to have \(-22<\) M\({}_{g}<-20\) in three different recessional velocity slices (\(4500<cz/(\text{km s}^{-1})<5500\), \(5500<cz/(\text{km s}^{-1})<6500\), and \(6500<cz/(\text{km s}^{-1})<7500\)). The three slices provide us with independent checks of the results. Caution is warranted when comparing among results from different studies, as the definition of MWA varies among studies focusing on such objects (e.g., Mao et al., 2021; Carlsten et al., 2021), which often have additional environmental conditions or, perhaps less critically, slightly different magnitude criteria. We impose the lower \(cz\) limits on the parents to ensure that candidate UDGs, which are selected in SMUDGes to have \(r_{e}>5.3\) arcsec, match the physical size criterion of UDGs (\(r_{e}\geq 1.5\) kpc; van Dokkum et al., 2015). We limit the redshift range of each slice to minimize possible variations in satellite magnitude and size within each sample. We adopt an upper size limit (\(r_{e}<6\) kpc), which is not a standard UDG criterion, because UDG candidates with inferred sizes larger than this limit are likely contaminants (Kadowaki et al., 2021; Zaritsky et al., 2022; Karunakaran and Zaritsky, 2023). To include potential MWAs without a cataloged value of m\({}_{g}\) that do have a cataloged value of m\({}_{B}\), we calculate the average m\({}_{g}-\)m\({}_{B}\) color for those with photometry in both bands and apply that value as a correction to m\({}_{B}\) for those missing m\({}_{g}\). This is a crude correction but we only use the parent magnitude to place it relative to the two magnitude wide bin defining MWAs. We set the search radius for MWAs around each UDG candidate in the completed SMUDGes catalog (Zaritsky et al., 2022) to correspond to 10 Mpc at the near edge of each recessional velocity slice because that separation corresponds roughly to where the galaxy-galaxy correlation function drops below a value of 1 (Tucker et al., 1997; Zehavi et al., 2002). From each SIMBAD query, we retain the right ascension (\(\alpha\)), declination (\(\delta\)), m\({}_{g}\), m\({}_{B}\), and redshift of each MWA candidate. We assign the UDG candidate the redshift of the associated MWA, evaluate its absolute magnitude and size, reject candidates that do not match our physical size criteria for UDGs, and calculate the projected physical separation between the pair. In Figure 1 we present the inferred absolute magnitude distribution of the candidate UDGs. Although the majority of these values are drawn from unphysical pairs, and are therefore incorrect, the distribution does provide some intuition regarding the types of satellites to which this analysis is sensitive. Our search produces a list of 626,560, 411,507, and 538,726 accepted UDG-MWA pairs for the three different redshift slices from the catalog of 6332 UDG candidates and the recovered 2905 MWAs across the three redshift slices. We track the distribution of pair separations, inversely weighted by the completeness fraction of each UDG separately for (1) all pairs, (2) those containing a blue UDG, and (3) those containing a red UDG. The color dividing line is defined to be 0.1 mag bluer than the empirically-determined red sequence for UDGs (\(g-r=0.167-0.031M_{r}\); Zaritsky et al., 2022). We also set lower (\(g-r>0.2\)) and upper (\(g-r<0.75\)) color cutoffs to remove likely contaminants in the catalog (Zaritsky et al., 2022). 
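A minimal sketch of these per-pair calculations is given below (function and variable names are ours, not SMUDGes code; K-corrections and the completeness weighting are omitted). The UDG candidate is placed at the MWA redshift, its apparent size and magnitude are converted to physical units with the same WMAP9 cosmology, the 1.5-6 kpc size window is applied, the projected physical separation is recorded, and the candidate is classified as red or blue relative to the dividing line quoted above.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import WMAP9

C_KMS = 299792.458  # speed of light in km/s

def evaluate_pair(udg_ra, udg_dec, udg_m_r, udg_re_arcsec,
                  mwa_ra, mwa_dec, mwa_cz_kms):
    """Place the UDG candidate at the MWA redshift; return
    (projected separation [kpc], M_r, r_e [kpc]), or None if 1.5 <= r_e < 6 kpc fails."""
    z = mwa_cz_kms / C_KMS
    kpc_per_arcsec = WMAP9.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    re_kpc = (udg_re_arcsec * u.arcsec * kpc_per_arcsec).value
    if not (1.5 <= re_kpc < 6.0):
        return None
    M_r = udg_m_r - WMAP9.distmod(z).value
    sep = SkyCoord(udg_ra * u.deg, udg_dec * u.deg).separation(
        SkyCoord(mwa_ra * u.deg, mwa_dec * u.deg))
    r_proj_kpc = (sep.to(u.arcsec) * kpc_per_arcsec).value
    return r_proj_kpc, M_r, re_kpc

def classify_color(g_minus_r, M_r):
    """'red', 'blue', or 'rejected', using the cuts quoted in the text:
    0.2 < g-r < 0.75, dividing line 0.1 mag blueward of g-r = 0.167 - 0.031*M_r."""
    if not (0.2 < g_minus_r < 0.75):
        return "rejected"
    return "blue" if g_minus_r < (0.167 - 0.031 * M_r) - 0.1 else "red"
```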
The blue limiting criterion also matches the color of the bluest field UDGs examined in detail (Jones et al., 2023). Armed with the distribution of pair separations, we present the corresponding pair distribution as a function of separation out to 10 Mpc (Figure 2) using bins of 67 kpc width for the nearest redshift slice. We estimate uncertainties using Poisson errors in the number of pairs in each bin and then propagate those through the calculation of the surface density. This is likely to be an underestimate of the uncertainties and we discuss the alternative approach below. A simple power-law plus constant background fit to the surface density, \(\Sigma=ar^{k}+b\), is also shown in Figure 2 and appears to be a reasonable Figure 1: Distribution of inferred absolute magnitudes of UDG satellites when working within the \(4500<cz/(\text{km sec}^{-1})<5500\) slice. Although many of the galaxies are likely to be false pairs, this plot shows the type of UDG satellites to which we are sensitive. approximation to the data (numerical values for the fit parameters are given in Table 1 for each of the three redshift slices and color divided samples). There are however two potential problems with the model fits and their interpretation. First, as we mentioned above, we expect incompleteness to set in at small separations. The comparison of the data and the fitted power-law in Figure 2 suggests a possible decline in pair surface density in the innermost bin, but with our current binning scheme there is no resolution at radii within \(r<100\) kpc. Whether the decline is observational, statistical, or physical is unclear. To further examine the behavior of the pair separation distribution at radii within 100 kpc, we reduce the bin width to 20 kpc with the understood sacrifice of lower statistical precision. The comparison of the pair surface densities derived from the three redshift slices is presented in Figure 3. For this comparison, we scaled the three distributions to produce the same number density of background (uncorrelated) pairs. Although there is not an unambiguous systemic downturn among the distributions, a second potential problem becomes evident. The scatter among the measurements, even within one redshift slice, greatly exceeds the plotted statistical uncertainties, which are mostly smaller than the size of plotted symbols. We reevaluate the uncertainties using the scatter among the results from the three redshift slices and plot the mean and standard deviation of the mean the lower panel of Figure 3. This estimate of the uncertainties has the potential to be an overestimate as the surface density profiles among the slices may vary because the satellite samples in each slice are different in terms of luminosity and physical size. We expect the uncertainties to lie somewhere between those shown in Figure 2 and 3, but closer to the latter. In the lower panel of Figure 3 we find a decline at the smallest radii (perhaps the innermost two or three bins), but the uncertainties are clearly larger than our previous estimates (excluding the innermost point, which is likely to have an actual uncertainty that is comparable to the other data within 100 kpc but just happened to contain measurements that exhibited little scatter). At the radii where this turnover might be detectable (\(\sim 60\) kpc) some of our largest galaxy masks may be affecting the completeness. 
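The binned profile and the \(\Sigma=ar^{k}+b\) fit are straightforward to reproduce. The sketch below (our code; the per-UDG normalization and the full 10 Mpc extent are omitted for brevity) bins the completeness-weighted pair separations and fits the model in linear space with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def binned_surface_density(sep_kpc, completeness, bin_width=67.0, r_max=10000.0):
    """Completeness-weighted pair counts per annulus, divided by annulus area.
    Errors are approximate Poisson errors on the weighted counts."""
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    counts, _ = np.histogram(sep_kpc, bins=edges, weights=1.0 / completeness)
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / area, np.sqrt(counts) / area

def surface_density_model(r, a, k, b):
    """Sigma(r) = a * r**k + b, the power law plus constant background."""
    return a * r**k + b

def fit_profile(r, sigma, sigma_err):
    """Fit the power law plus background in linear space.
    Bins with zero counts should be masked before calling this."""
    p0 = (sigma[1] * r[1]**0.9, -0.9, np.median(sigma))  # data-driven starting guess
    popt, pcov = curve_fit(surface_density_model, r, sigma, p0=p0,
                           sigma=sigma_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```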
Examining the sample of confirmed UDG satellites discussed by Karunakaran & Zaritsky (2023), we find no sign of such a turnover, but the numbers are small. On the other hand, if this turnover is real then it could signal an interesting physical effect. This topic is clearly an avenue that requires further study with larger samples. We will discuss the possible effect of this uncertainty in our measurements below. As for why the Poisson statistics underestimate the true uncertainties, we suspect that it is related to the fact that pairs are not statistically independent (for example, a single UDG will be paired with each of the L\({}^{*}\) galaxies in a nearby galaxy group). To calculate the number of UDG satellites per MWA, \(S_{\rm UDG}\), we invert our measurements. The distribution of separations remains the same, but the normalization changes. The one aspect we do not know is the number of MWAs over the survey area (we only searched for MWAs projected near UDGs). We can however, estimate the surface density of MWAs in the survey footprint using the measured background values, \(b\), of our model fits. By multiplying this surface density and the survey area, \(A\), of the full survey (20,000 deg\({}^{2}\)), we calculate the number of MWAs over the survey footprint (\(\equiv A\cdot b\)). Specifically, the number of UDGs associated with each MWA, for projected separations ranging from Figure 2: UDG-MW pair surface density for the \(4500<cz/({\rm km~{}s^{-1}})<5500\) slice per UDG. The pairs are defined by their projected separation. The upper panel shows the distribution in linear units, while the lower one in logarithmic units. The power law + background model fit is performed in linear space but shown in both panels. Errorbars are plotted, but are mostly within the symbols themselves. We discuss the apparent underestimation of the uncertainties in the text. \(r_{min}\) to \(r_{max}\) is given by \[S_{\rm{UDG}}=\frac{\int_{r_{min}}^{r_{max}}2\pi ar^{k}rdr\cdot N_{\rm{UDG}}}{A \cdot b}, \tag{1}\] where \(N_{\rm{UDG}}\) is the number of UDG candidates in the sample being considered (as opposed to the number of UDG satellites, \(S_{\rm{UDG}}\)) and \(a\), \(b\), and \(k\) are the corresponding fit parameters for that sample. We calculate \(S_{\rm{UDG}}\) by integrating from 20 to 250 kpc. To estimate the uncertainties in \(S_{\rm{UDG}}\), we evaluate the integral 1000 times, choosing different values for \(a\), \(b\), and \(k\) from their respective error distributions, and use the resulting distribution of \(S_{\rm{UDG}}\) to define the 16th and 84th percentiles as the \(1\sigma\) uncertainty interval. The inner boundary of our integration represents a radius at which we are effectively within the MWA itself but in practice eliminating this cut does not increase the inferred \(S_{\rm{UDG}}\) beyond the quoted uncertainty. The outer boundary represents the extent of the MWA halo, or virial radius. While our MW may have a somewhat smaller virial radius (e.g., Shen et al., 2022), even decreasing the outer radius to 200 kpc reduces the inferred number of satellites only slightly below the quoted \(1\sigma\) lower bound quoted in Table 2. Finally, to address the possibility of a turnover in \(S_{\rm{UDG}}\) at small radii, if we integrate only from 70 kpc outward, where there is no hint of a turnover, \(S_{\rm{UDG}}\) drops to 0.43 for the nearest redshift slice, which is a value that is only slightly more than a \(1\sigma\) decrease from that quoted. 
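Equation (1) and the Monte Carlo error estimate translate directly into code. In this sketch (ours), the integral is evaluated analytically, and \(a\), \(k\), \(b\), the survey area \(A\), and the radial limits must all be expressed in mutually consistent units.

```python
import numpy as np

def s_udg(a, k, b, n_udg, area, r_min=20.0, r_max=250.0):
    """Equation (1): integral of 2*pi*a*r**k * r dr over [r_min, r_max],
    times the number of UDG candidates, divided by the number of MWAs (A*b)."""
    integral = 2.0 * np.pi * a * (r_max**(k + 2.0) - r_min**(k + 2.0)) / (k + 2.0)
    return integral * n_udg / (area * b)

def s_udg_with_errors(a, a_err, k, k_err, b, b_err, n_udg, area,
                      r_min=20.0, r_max=250.0, n_draws=1000, seed=0):
    """Redraw (a, k, b) from their error distributions 1000 times and quote
    the 16th and 84th percentiles as the 1-sigma interval, as in the text."""
    rng = np.random.default_rng(seed)
    draws = s_udg(rng.normal(a, a_err, n_draws),
                  rng.normal(k, k_err, n_draws),
                  rng.normal(b, b_err, n_draws),
                  n_udg, area, r_min, r_max)
    central = s_udg(a, k, b, n_udg, area, r_min, r_max)
    lo, hi = np.percentile(draws, [16, 84])
    return central, central - lo, hi - central
```

Rerunning `s_udg` with `r_min=70.0` reproduces the alternative estimate discussed just above.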
We expect this to be an overestimate of the potential effect because we assumed in this test that there are absolutely no UDG satellites interior to 70 kpc even in projection. In fact, \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ UDG Sample} & \(a\) & \(k\) & \(b\) \\ \hline All, \(4500<cz/\)km s\({}^{-1}<5500\) & 0.28\(\pm\)0.02 & \(-0.87\pm 0.07\) & \(0.26\pm 0.01\) \\ All, \(5500<cz/\)km s\({}^{-1}<6500\) & 0.21\(\pm\)0.01 & \(-0.96\pm 0.05\) & \(0.18\pm 0.04\) \\ All, \(6500<cz/\)km s\({}^{-1}<7500\) & 0.32\(\pm\)0.01 & \(-0.70\pm 0.04\) & \(0.21\pm 0.01\) \\ Red, \(4500<cz/\)km s\({}^{-1}<5500\) & 0.39\(\pm\)0.03 & \(-0.86\pm 0.07\) & \(0.23\pm 0.01\) \\ Red, \(5500<cz/\)km s\({}^{-1}<6500\) & 0.28\(\pm\)0.02 & \(-0.98\pm 0.05\) & \(0.17\pm 0.01\) \\ Red, \(6500<cz/\)km s\({}^{-1}<7500\) & 0.38\(\pm\)0.02 & \(-0.73\pm 0.05\) & \(0.20\pm 0.01\) \\ Blue, \(4500<cz/\)km s\({}^{-1}<5500\) & 0.07\(\pm\)0.02 & \(-1.23\pm 0.26\) & \(0.31\pm 0.01\) \\ Blue, \(5500<cz/\)km s\({}^{-1}<6500\) & 0.10\(\pm\)0.01 & \(-0.96\pm 0.10\) & \(0.18\pm 0.04\) \\ Blue, \(6500<cz/\)km s\({}^{-1}<7500\) & 0.25\(\pm\)0.02 & \(-0.56\pm 0.07\) & \(0.20\pm 0.01\) \\ \hline \end{tabular} \end{table} Table 1: UDG-MWA Pair Separation Distribution Power-law Fit Parameters \begin{table} \begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ UDG Sample} & \(S_{\rm{UDG}}\) \\ \hline All, \(4500<cz/\)km s\({}^{-1}<5500\) & \(0.53^{+0.10}_{-0.08}\) \\ All, \(5500<cz/\)km s\({}^{-1}<6500\) & \(0.46^{+0.05}_{-0.05}\) \\ All, \(6500<cz/\)km s\({}^{-1}<7500\) & \(0.25^{+0.02}_{-0.02}\) \\ Red, \(4500<cz/\)km s\({}^{-1}<5500\) & \(0.42^{+0.08}_{-0.07}\) \\ Red, \(5500<cz/\)km s\({}^{-1}<6500\) & \(0.36^{+0.05}_{-0.04}\) \\ Red, \(6500<cz/\)km s\({}^{-1}<7500\) & \(0.16^{+0.02}_{-0.02}\) \\ Blue, \(4500<cz/\)km s\({}^{-1}<6500\) & \(0.09^{+0.09}_{-0.04}\) \\ Blue, \(5500<cz/\)km s\({}^{-1}<6500\) & \(0.10^{+0.03}_{-0.02}\) \\ Blue, \(6500<cz/\)km s\({}^{-1}<7500\) & \(0.07^{+0.01}_{-0.01}\) \\ \hline \end{tabular} \end{table} Table 2: Number of UDG Satellites per MWA the region near the MWA may be a difficult one to interpret as there may be an additional contribution from UDG-like tidal dwarfs (Bennet et al., 2018). ## 3 Results ### The UDG-MWA Pair Separation Distribution In Figures 2 and 3, and Table 1, we present our measurements of the UDG-MWA pair separation distribution. The rise in the surface density toward smaller separations demonstrates that there does exist a significant population of UDGs that are physically correlated with MWAs. Additionally, the mean power law slope fit for three slices (\(-0.84\pm 0.06\)) is in agreement with that for 'normal' satellites of giant galaxies (\(\sim-0.9\); Lorrimer et al., 1994). UDGs do not appear to preferentially form or get destroyed in the environments near MWAs at different relative rates than do "normal" satellites. This conclusion comes with the caveat that we have insufficient data to explore trends at radii smaller than about 60 kpc. Environment, at least broadly within the virial radius of MWAs, does not appear to be a significant factor in the evolution of the number of UDGs. This result is in concordance with a lack of any strong environmental signature in the approximately linear relation between the number of UDGs per halo and host halo mass extending from the most massive galaxy clusters down to MWAs (Karunakaran and Zaritsky, 2023; Li et al., 2022), which we will further confirm in SS3.2. 
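As a check on the quoted numbers, the mean slope of \(-0.84\pm 0.06\) is consistent with simply averaging the three "All" slopes in Table 1 and taking the error in the mean (one plausible reading of the text, sketched below):

```python
import numpy as np

slopes = np.array([-0.87, -0.96, -0.70])           # 'All' rows of Table 1
mean = slopes.mean()                                # -0.843
err_of_mean = slopes.std() / np.sqrt(len(slopes))   # 0.062
print(f"{mean:.2f} +/- {err_of_mean:.2f}")          # -0.84 +/- 0.06
```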
Regarding UDG formation mechanisms, these results indicate that UDGs, at least the population of satellite UDGs, form primarily as part of the normal, hierarchical universal dark matter superstructure (e.g., Di Cintio et al., 2017; Chan et al., 2018; Jiang et al., 2019; Martin et al., 2021; Wright et al., 2021), rather than through more specific channels like UDG formation through tidal interactions (Bennet et al., 2018; Jones et al., 2021), direct satellite collisions (Silk, 2019; Shin et al., 2020), or interaction with extremely dense environments (Yozin and Bekki, 2015; Safarzadeh and Scannapieco, 2017) that may best explain interesting individual UDGs. Of course, even within the "standard" model, formation for such a heterogeneous class of objects as UDGs may follow several formation pathways (Liao et al., 2019; Sales et al., 2020) and our measurement is insensitive to UDG satellites found at small separations. ### The Number of Satellite UDGs per MWA Using the pair separation density profiles, we calculate and present the number of UDG satellites for the typical MWA within projected radii between 20 and 250 kpc in Table 2. Aside from the statistical errors that are quoted, these numbers are susceptible to various systematic uncertainties. First, the sample of UDGs is incomplete, as we will discuss in SS3.3. Second, as we have discussed already, the appropriate limits on the integration of the radial surface density profile are somewhat uncertain, which results in uncertainties that are comparable to the statistical ones. Lastly, we are working with projected rather than physical radii. Focusing for now only on the full samples (not selecting by color), we find that the typical MWA contains less than one UDG satellite, within the range of UDG properties that we are sensitive to. We place this result in one context in Figure 4 by comparing the number of UDG satellites per MWA, \(S_{\rm UDG}\), with the numbers of UDG satellites measured in host halos of larger masses. The comparison is somewhat fraught because the data come from a variety of surveys that have different selection criteria. Such differences tend to be of order unity and are obscured by the large parameter range covered in the Figure, but they need to be carefully addressed if one is interested in modest deviations from a linear slope in Figure 3: Comparison of pair separation distributions derived from the three redshift slices in the upper panel: \(4500<cz/{\rm km~{}s}^{-1}<5500\) (blue circles); \(5500<cz/{\rm km~{}s}^{-1}<6500\) (grey squares); \(6500<cz/{\rm km~{}s}^{-1}<7500\) (red triangles). We have decreased the bin size to 20 kpc and overplot the fitted model curves for each of the three redshift slices. In the lower panel we show the average of the three slices with the uncertainties reflecting the error in the mean using standard deviation of those values rather than the Poisson uncertainties shown in the upper panel. the overall relation. Nevertheless, we confirm the qualitative conclusion of previous studies (Karunakaran and Zaritsky, 2023; Li et al., 2022) that \(S_{\rm{UDG}}\) for MWAs is approximately consistent with a linear extrapolation of the relation established using halos of larger total mass. 
The near linearity of this relation over more than 3 orders of magnitude in mass might appear to challenge models where UDG satellites are a hybrid population, for example those where a significant fraction of UDG satellites are born as such and the remainder consists of galaxies transformed by the environment (Liao et al., 2019; Sales et al., 2020). However, at least in terms of the number of UDG satellites, the Liao et al. (2019) study is consistent with our finding -- predicting \(\sim\) 1 UDG satellite per MWA. Nevertheless, there is likely to be a fine tuning challenge if simulations showing that only a small fraction of UDGs that fall into clusters survive (\(\sim\)20%; Jiang et al., 2019) are correct. This challenge could become acute when a precise slope is empirically determined. A version of Figure 4 redone with homogeneous data and selection is critical to further confrontation to the models. ### UDG Satellite Colors We now examine the behavior of red and blue UDGs separately. In Figure 5 we present the pair separation distribution for red and blue UDGs in the redshift slice spanning \(4500<cz/(\rm{km~{}s^{-1}})<5500\). The results for the other slices are similar. While the surface density rise at small separations is present in both populations, confirming the existence of both red and blue UDG satellite populations, it is dominated by red UDGs, indicating that the majority of UDG satellites of MWAs have not recently been forming stars at a significant rate. Roughly 80% of the UDG satellites we find are red. As such, and in contrast to our conclusion regarding the number of UDGs, there is a strong environmental signature in the stellar populations of UDG satellites. Interestingly, however, there are some star-forming UDGs even at small (projected) separations, suggesting that whatever environmental quenching there may be is either not rapid or entirely efficient. This result follows on the suggestion from Karunakaran et al. (2021) that the overall satellite populations of galaxies indicate that quenching may be overestimated in current simulations. We close by noting that the divergence by color in the populations is evident even at large radii (\(>\) 1 Mpc), well outside the virial radii of MWAs, much like it is in the general galaxy population surrounding galaxy clusters (Lewis et al., 2002; Gomez et al., 2003). As such, it suggests that much like for more massive galaxies, a full understanding of the environmental effects will be challenging to reach and must involve pre-processing (Zabludoff and Mulchaey, 1998; McGee et al., 2009; De Lucia et al., 2012) that occurs prior to the galaxy's arrival in its current environment. ### UDG Satellite Luminosity Function Figure 4: \(S_{\rm{UDG}}\) vs. host halo mass. For our sample we adopt the MW mass estimate of Shen et al. (2022). The plot, the fitted relationship, and other measurements are adopted from Karunakaran and Zaritsky (2023) and the original measurements referenced in the legend (Román and Trujillo, 2017; Mancera Pina et al., 2019; van der Burg et al., 2016, 2017; Janssens et al., 2019; Yagi et al., 2016; Forbes et al., 2020; La Marca et al., 2022; Venhola et al., 2022). 
\begin{table} \begin{tabular}{c c c c} \hline \hline Luminosity Range & 4500\(<cz/\rm{km~{}s^{-1}}<5500\) & 5500\(<cz/\rm{km~{}s^{-1}}<6500\) & 6500\(<cz/\rm{km~{}s^{-1}}<7500\) \\ \hline \(-17.0<M_{r}<-16.0\) & \(0.02^{+0.16}_{-0.03}\) & \(0.02^{+0.01}_{-0.01}\) & \(0.07^{+0.02}_{-0.02}\) \\ \(-16.5<M_{r}<-15.5\) & \(0.06^{+0.06}_{-0.03}\) & \(0.08^{+0.02}_{-0.02}\) & \(0.11^{+0.01}_{-0.01}\) \\ \(-16.0<M_{r}<-15.0\) & \(0.14^{+0.03}_{-0.03}\) & \(0.24^{+0.04}_{-0.03}\) & \(0.16^{+0.02}_{-0.02}\) \\ \(-15.5<M_{r}<-14.5\) & \(0.23^{+0.05}_{-0.04}\) & \(0.28^{+0.04}_{-0.04}\) & \(0.12^{+0.02}_{-0.01}\) \\ \(-15.0<M_{r}<-14.0\) & \(0.26^{+0.06}_{-0.05}\) & \(0.18^{+0.03}_{-0.03}\) & \(0.06^{+0.01}_{-0.01}\) \\ \(-14.5<M_{r}<-13.5\) & \(0.21^{+0.07}_{-0.05}\) & \(0.07^{+0.02}_{-0.02}\) & \(0.00^{+0.00}_{-0.00}\) \\ \hline \end{tabular} \end{table} Table 3: Number of UDG Satellites per MWA in Luminosity Bins For each of the three redshift slices, we present the number of UDG satellites per magnitude over the range of magnitudes to which we are sensitive in Table 3 and Figure 6. Together these provide both a selection-uncorrected view of the UDG satellite luminosity function and the associated uncertainties. The results among the three redshift slices agree well down to \(\rm M_{r}\sim-15\) and then begin to diverge. That divergence is systematic in that the decline in numbers with fainter luminosity begins at brighter luminosity with the most distant redshift slice and continues in sequence up until the nearest slice. This behavior is as expected because we are less sensitive both to fainter and smaller UDG satellites at larger distances. Even in the intermediate redshift slice, the SMUDGes angular selection criterion already excludes UDGs with physical effective radii of less than about 2 kpc. The turnover in the luminosity function obtained from the nearest redshift slice suggests that it too is incomplete below \(\rm M_{r}\sim-15\). If so, then this means that the total satellite numbers we provide in SS3.2 are, to some degree, underestimates of the full number of UDG satellites. However, there are reasons to believe that the UDG population does not extend in large numbers to fainter surface brightnesses than those captured by SMUDGes (Zaritsky et al., 2022). To place these numbers in context, we compare our luminosity function for UDG satellites with the satellite luminosity functions of four nearby, extremely well-studied MWAs (M 84, M 91, M 101, and Cen A; Bennet et al., 2019, and references therein) in Figure 7. We show the cumulative satellite luminosity function for the combined set of nearby MWAs and that for our UDG satellites. We have corrected our values of \(\rm M_{r}\) to \(\rm M_{V}\) using a mean value of \(g-r\) of 0.6 for UDGs and a correction (Fukugita et al., 1995) from V to \(g\) of 0.2 mag for early type galaxies (80% of UDG satellites are non star forming; SS3.3). We expect that the published luminosity functions for the local MWAs include any UDG satellites that are there because those studies used deep, wide field observations intended to reach both faint and low luminosity systems. For \(-16<\rm M_{V}<-14\), we find that UDG satellites are \(\sim 10\%\) of the satellite population. We conclude that the UDG satellite population at a given luminosity, for \(\rm M_{V}<-14\), is well sub-dominant and there is no significant, lurking population of large, low surface brightness satellites at these luminosities. 
### UDG Satellite Mass Function A mass function measurement would be ideal for a direct comparison to models. Although cosmological simulations do produce UDGs (Tremmel et al., 2020; Wright et al., 2021), we are always at the mercy of assumptions in the baryonic sub-grid physics if we can only compare the luminous properties of galaxies. A check on those assumptions would be to have both the luminosities and masses of UDGs (or at least internal kinematics). As we mentioned previously, the total mass-to-light (\(M/L_{total}\)) ratios of UDGs are likely to be significantly larger than those of comparably massive galaxies. At the limit of our current understanding of UDGs, they appear to have \(M/L_{total}\) that is at least an order of magnitude larger (van Dokkum et al., 2019), with perhaps some unusual exceptions (van Dokkum et al., 2022). If indeed \(M/L_{total}\) for UDG satellites is a factor of 10 larger than for non-UDG satellites of similar luminosity, then we should slide the UDG LF in Figure 7 to the right by 2.5 magnitudes to appropriately compare the numbers of similarly massive satellites. At this point, the UDG satellites would still be subdominant in number, but now only by a factor of a few rather than an order of magnitude. As such, they could play a significant role in the satellite/subhalo accounting at LMC-like masses. To explore this topic a bit further, we estimate the total mass of these low mass galaxies using only photometry (Zaritsky and Behroozi, 2023). In this approach, a scaling relation is used to recover the velocity dispersion at the effective radius and, therefore, an estimate of the enclosed mass within this radius. By assuming an NFW density profile (Navarro et al., 1996), we then determine which model produces the measured enclosed mass at \(r_{e}\). The method was used by Zaritsky and Behroozi (2023) to explore the stellar mass-halo mass relation and by Zaritsky (2022) to study the relation between globular cluster populations and total mass. We use the relation to isolate UDGs with masses comparable to or larger than Figure 5: Pair surface density as a function of UDG color. UDG-MW pairs that include a UDG classified as red are shown in the red circles, while those classified as blue are shown as blue squares. Dashed lines are power law plus flat background fits to the data. that of the LMC (log \((M_{h}/M_{\odot})=11.14\); Erkal et al., 2019). We present results for pair separation involving UDGs inferred to have \(10.9<\log(M_{h}/M_{\odot})<12\). As a caution, we note that the scaling relation has not been fully vetted to apply to UDGs because of the paucity of spectroscopic data for UDGs. Where comparison is possible, the results are in acceptable agreement and provide masses within a factor of a few, which is comparable to the overall precision limit of the method and within the range of our order of magnitude mass selection bin. Further discussion of the use of this approach for UDGs will be presented in Zaritsky et al. (2022). We find results from the three slices that are consistent (\(0.08^{+0.08}_{-0.04}\), \(0.06^{+0.04}_{-0.03}\), and \(0.07\pm 0.1\), for the lowest to highest redshift slices respectively) and that on average suggest that the number of UDG satellites with LMC-like or larger masses per MWA is \(0.07^{+0.02}_{-0.01}\), or alternatively, corresponding to \(\sim 13\%\) of our deepest satellite sample. Roughly 1 in 14 MWAs have a UDG satellite that is of comparable mass to the LMC. 
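The NFW matching step can be sketched as follows. This is not the Zaritsky & Behroozi (2023) calibration itself, which supplies the velocity dispersion and ties the profile to a concentration-mass relation; here a fixed concentration is assumed purely for illustration. Given an enclosed-mass estimate at \(r_{e}\), we solve for the \(M_{200}\) whose NFW profile reproduces it.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import WMAP9
from scipy.optimize import brentq

def nfw_enclosed_mass(r_kpc, m200_msun, c):
    """Mass enclosed within r for an NFW halo of mass M200 and concentration c."""
    rho_crit = WMAP9.critical_density0.to(u.Msun / u.kpc**3).value
    r200 = (3.0 * m200_msun / (4.0 * np.pi * 200.0 * rho_crit))**(1.0 / 3.0)
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return m200_msun * mu(r_kpc * c / r200) / mu(c)

def m200_from_enclosed(m_enc_msun, r_e_kpc, c=15.0):
    """Solve for M200 given the enclosed mass at r_e; the fixed concentration c
    is an assumption of this sketch.  Bracket suits ~1e9-1e10 Msun inputs."""
    f = lambda lg: nfw_enclosed_mass(r_e_kpc, 10.0**lg, c) - m_enc_msun
    return 10.0**brentq(f, 8.0, 14.0)
```

Candidates whose inferred \(M_{200}\) exceeds \(10^{10.9}\) M\({}_{\odot}\) would then fall in the LMC-like bin discussed above.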
Comparing that result to the calculation based on standard \(\Lambda\)CDM that \(\sim 40\%\) of \(10^{12}\) M\({}_{200}\) halos should host something nearly as massive as the LMC (Wang et al., 2012) suggests that \(\sim 18\%\) of this population may fall in the ultra-diffuse class. ## 4 Conclusions We present a correlation analysis between UDG candidates from the SMUDGes catalog (Zaritsky et al., 2019, 2022, 2022) and Milky Way analogs (MWAs) drawn from the SIMBAD database (Wenger et al., 2000) from which we identify a population of UDGs that are physically associated with MWAs. We find the following: \(\bullet\) A population of UDG satellites exists that surrounds MWAs. The distribution of those satellites (projected surface density \(\propto r^{-0.84\pm 0.06}\)) is entirely consistent in character with that of normal satellite galaxies. We conclude that the processes by which most of these UDG satellites form are related to how low mass galaxies form in general. We exclude exotic formation mechanisms for UDG satellites as a primary formation channel. A consistent conclusion was reached in a recent study of an entirely different population of UDGs (Jones et al., 2023). \(\bullet\) On average, each MWA has \(\sim 0.5\pm 0.1\) UDG satellites at projected radii between 20 and 250 kpc and \(-17<\) M\({}_{r}<-13.5\). \(\bullet\) We compare our measurement of the number of UDG satellites per MWA to published measurements of the number of UDG satellites in hosts of different masses. We confirm previous findings that the number of UDG satellites of MWAs is consistent with a nearly linear trend between the number of UDG satellites and total halo mass (Karunakaran and Zaritsky, 2023; Li et al., 2022). We interpret this finding as providing further evidence against specific, UDG formation scenarios that are unconnected with the general formation path of low mass galaxies. \(\bullet\) We find that red UDGs are far more tightly clustered around MWAs than blue UDGs and that red UDGs comprise \(\sim 80\%\) of the UDG satellite population of MWAs out to 250 kpc (where blue is defined as being more than 0.1 mag bluer than the red sequence in \(g-r\) vs. M\({}_{r}\)). Although environmental quenching is likely involved, we note that, as with normal galaxies near galaxy clusters Figure 6: UDG satellite luminosity function. We present the number of UDG satellites within 1 magnitude bins as derived from our three redshift slices. The results for the nearest slice (\(4500<cz/(\)km s\({}^{-1})<5500\)) are represented by the black line, while those of the other two slices are represented by blue circles and gray squares, with the squares representing the farthest of the three slices. Horizontal bars represent the bin widths, while vertical error bars are statistical uncertainties. Units for \(S_{\rm UDG}\), are number per MWA per magnitude. Figure 7: Comparison of cumulative satellite luminosity function for local MWAs (data from Bennet et al. 2019 and references thererin, see text) in blue and our mean measurements for UDG satellites of MWA in red. At a given luminosity, over the range of our measurements, UDG satellites are typically about \(\sim\)10% of the total number of satellites. (Lewis et al., 2002; Gomez et al., 2003), the color changes happen well outside the virial radius and the trend likely results from a far more complex history (e.g., De Lucia et al., 2012) than that of simple quenching scenarios. 
\(\bullet\) We find that for \(-17<\rm{M_{r}}<-13.5\) UDG satellites are \(\sim 10\%\) of the total satellite population down to a similar magnitude limit. However, we note that UDGs have been shown, in the limited number of cases studied, to be strongly dark matter dominated and may therefore represent a larger fraction of satellites down to a correspondingly larger total mass limit. In support of this claim we estimate halo masses using the Zaritsky & Behroozi (2023) methodology and conclude that UDG satellites may comprise \(\sim\)18% of the satellites with halo masses of at least half the mass of the LMC. In summary, UDG satellites appear to be directly connected to the overall satellite population in a manner that suggests that there is not a distinct, separate formation channel. They are a minority, but still significant fraction of the satellite populations of Milky Way analogs and should be included in discussions involving satellite galaxy populations. The authors acknowledge financial support from NSF AST-1713841 and AST-2006785 for SMUDGes. An allocation of computer time from the UA Research Computing High Performance Computing (HPC) at the University of Arizona and the prompt assistance of the associated computer support group is also gratefully acknowledged. AK acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033 and from the grant POST-DOC_21_00845 funded by the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia. Astropy (Astropy Collaboration et al., 2013, 2018), astroquery (Ginsburg et al., 2019), galpy, (Bovy, 2015), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), pandas (McKinney, 2010), SciPy (Oliphant, 2007; Millman & Aivazis, 2011),
2307.12687
Identification of the melting line in the two-dimensional complex plasmas using an unsupervised machine learning method
Machine learning methods have been widely used in the investigations of the complex plasmas. In this paper, we demonstrate that the unsupervised convolutional neural network can be applied to obtain the melting line in the two-dimensional complex plasmas based on the Langevin dynamics simulation results. The training samples do not need to be labeled. The resulting melting line coincides with those obtained by the analysis of hexatic order parameter and supervised machine learning method.
Hu-Sheng Li, He Huang, Wei Yang, Cheng-Ran Du
2023-07-24T11:02:57Z
http://arxiv.org/abs/2307.12687v1
Identification of the melting line in the two-dimensional complex plasmas using an unsupervised machine learning method ###### Abstract Machine learning methods have been widely used in the investigations of the complex plasmas. In this paper, we demonstrate that the unsupervised convolutional neural network can be applied to obtain the melting line in the two-dimensional complex plasmas based on the Langevin dynamics simulation results. The training samples do not need to be labeled. The resulting melting line coincides with those obtained by the analysis of hexatic order parameter and supervised machine learning method. keywords: plasma crystals, melting, unsupervised machine learning + Footnote †: journal: Fundamental Plasma Physics ## 1 Introduction A complex plasma is composed of a weakly-ionized gas and micron-sized solid particles [1; 2; 3]. Due to the higher thermal velocity of electrons, the particles are negatively charged and interact with each other via a screened Coulomb (Yukawa) interaction [4; 5; 6; 7]. In the laboratory, monodisperse microparticles can be levitated in the sheath and confined in a single layer, where the force of gravity is balanced by the electrostatic force [8; 9]. Under certain conditions, particles can self-organize into a triangular lattice with hexagonal symmetry, forming a two-dimensional (2D) plasma crystal [10; 11]. Upon heating, a plasma crystal melts and the regular structure vanishes [12; 13; 14; 15]. In fact, the thermodynamic status of a complex plasma depends on the coupling parameter \(\Gamma=Q^{2}/4\pi\epsilon_{0}\Delta k_{b}T\) and the screening parameter \(\kappa=\Delta/\lambda_{D}\), where \(Q\) is the charge, \(T\) is the kinetic temperature, \(\Delta\) is the interparticle distance, and \(\lambda_{D}\) is the Debye length. The melting line in the phase diagram of the complex plasma has been extensively studied in the past years [16; 17]. Molecular dynamics simulations have been applied to study the phase diagram in the 2D [18] and three-dimensional (3D) complex plasmas [19], where the liquid-solid transition was identified by the measurement of the free energy and order parameter, respectively. Recently, machine learning methods have been widely applied in the investigations of the complex plasmas [20; 21; 22]. In particular, such methods were employed to obtain the phase diagram in the 2D complex plasmas based on simulation results, and they were also applied to study melting in experiments [23]. The convolutional neural network was applied to the synthesized images of the particle suspension, whose thermodynamic status was labeled as liquid at high temperatures and as crystal at low temperatures. Such a method is known as supervised learning and requires prior knowledge of the particle structure for the different statuses. In this paper, we applied an unsupervised machine learning method to obtain the melting line in the phase diagram of the 2D complex plasma. The evolution of the particle positions upon heating was obtained using Langevin dynamics simulations. The convolutional neural network was applied in an unconventional manner, resulting in the melting temperature at different \(\kappa\). ## 2 Methods A standard Langevin dynamics simulation was employed to simulate the melting process in the 2D complex plasma. The dynamics of individual particles in the suspension were governed by the equation of motion \[m\ddot{\mathbf{r}}_{i}+m\nu\dot{\mathbf{r}}_{i}=-\sum_{j\neq i}
\bigtriangledown\!\!\phi_{ij}+\mathbf{L}_{i}, \tag{1}\] where \(\mathbf{r}_{i}\) is the position of particle \(i\), \(m\) is the particle mass, and \(\nu\) is the damping rate, which results from the neutral gas. The Brownian motion was included in the equation and the corresponding Langevin force \(\mathbf{L}_{i}\) at a certain kinetic temperature \(T\) was defined by \(\langle\mathbf{L}_{i}\rangle=0\) and \(\langle\mathbf{L}_{i}(t)\mathbf{L}_{j}(t+\tau)\rangle=2\nu mk_{b}T\delta_{i,j}\delta( \tau)\mathbf{I}\), where \(\delta_{ij}\) is Kronecker delta, \(\delta(\tau)\) is the delta function, and \(\mathbf{I}\) is the unit matrix. For the monodisperse particles immersed in the plasma, the interaction between charged particles reads \[\phi_{ij}=\frac{Q^{2}}{4\pi\epsilon_{0}r_{ij}}\exp(-\frac{r_{ij}}{\lambda_{D} }), \tag{2}\] where \(r_{ij}\) is the distance between particle \(i\) and \(j\) and the constant charge is assumed as \(Q=8000\)e. In total 6400 particles were included in the simulation, where the periodic boundary conditions were used. In the melting process, the temperature rose from 100 K to 70000 K. The local structure around particle \(i\) could be quantified by the hexatic order parameter \[\Psi_{6,i}=\frac{1}{6}\sum_{k=1}^{6}e^{j6\theta_{k}}, \tag{3}\] where six nearest neighboring particles were considered and \(\theta_{k}\) is the angle between \(\mathbf{r}_{k}-\mathbf{r}_{i}\) and the \(x\) axis. If particle \(i\) is located in the center of a perfect hexagon cell, its \(\Psi_{6}\) is unity. As shown in Fig. 1, the averaged hexatic order parameter \(\overline{|\Psi_{6}|}\) gradually decreased with the temperature for \(T<30000\) K at \(\kappa=0.34\). The majority of the particles self-organized in a triangular lattice, where \(\Psi_{6}\approx 1\). A few defects were embedded in the plasma crystal. The drop of \(\overline{|\Psi_{6}|}\) became faster in the temperature range from 30000 to 35000 K, in which the melting transition should happen. As the temperature further increased, \(\overline{|\Psi_{6}|}\) decreased even more slowly than the initial stage of the heating, where the local ordered structure vanished completely, as shown in the inset of Fig. 1. A convolutional neural network (CNN) was applied to train on the simulation results. As CNN is mainly used in the image analysis such as image recognition, object detection, and segmentation [24], we converted particle positions with time into sequences of images, which were similar to the experiment recordings [25; 26]. The resulting images had a gray scale, where particles appeared as white spots and the background was black. An example is demonstrated in Fig. 2. The details of the image synthesis can be found in Ref. [23]. The CNN method used in this study contained three \(3\times 3\) kernel 2D convolutional layers, as shown in Fig.2. The rectified linear unit (RELU) was used as the activation function. Max pooling was applied to preserve the maximum value within local receptive fields and discard all other values. Gaussian dropout was introduced to prevent overfit and the feature maps were flattened to fully connected layers. Eventually, the softmax function was applied to achieve binary classification with a loss function of cross-entropy, leading to the identification of the images as either crystal or liquid. ## 3 Results An unsupervised machine learning method was applied to investigate the phase diagram of 2D complex plasmas. 
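Before turning to how the network is used without labels, a minimal Keras sketch of the classifier described in Section 2 might look as follows; the filter counts, input image size, dropout rate and optimizer are illustrative assumptions rather than the authors' actual settings.

```python
from tensorflow.keras import layers, models

def build_classifier(input_shape=(128, 128, 1)):
    """CNN sketched from the description in Section 2: three 3x3 convolutional
    layers with ReLU activations and max pooling, Gaussian dropout against
    overfitting, a flattened fully connected stage, and a softmax binary
    (crystal vs. liquid) output trained with cross-entropy."""
    model = models.Sequential([
        layers.Conv2D(16, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.GaussianDropout(0.2),        # rate is an illustrative guess
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

How this classifier is then used without labels, by scanning an assumed melting temperature and monitoring the validation accuracy, is described below.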
For a certain \(\kappa\), we can arbitrarily assume a melting temperatures and divide the image Figure 1: Dependence of the averaged hexatic order parameter \(\overline{|\Psi_{6}|}\) on the kinetic temperature at \(\kappa=0.34\). The structure of the particle suspension for the low (left) and high (right) temperature are demonstrated in the insets, where the hexatic order parameter for the individual particles are colored. In the plasma crystal, the majority of the particles self-organized in the triangular lattice with hexagonal symmetry, where a defect chain was embedded. In the liquid complex plasma, the regular structure was absent and the hexatic order parameters were significantly smaller than unity. sequences into samples of liquid and crystal accordingly. Obviously, if this assumed temperature approaches the lowest and highest temperature in the total samples, all the images are virtually regarded in one single thermodynamics status, resulting in a high accuracy in the validation. Except for these extreme scenarios, if the assumed melting temperature is different from the true melting temperature, the neural network is confused by the mixed samples and naturally results in a relatively low validation accuracy. However, if the assumed melting temperature coincides with the true melting temperature, the neural network can successfully recognize the different thermodynamics status of the 2D complex plasmas in the images based on the structures of the particle positions and thus results in a high validation accuracy [27]. The dependence of the validation accuracy on the assumed melting temperature is shown in Fig. 3 for \(\kappa=0.34\). The images were divided into the training and validation samples by a ratio of 80% to 20%. The evolution of the training and validation accuracy against epochs for three assumed melting temperature are shown in Fig. 3(a-c). In the panel (a), the assumed melting temperature was lower than the true melting temperature, the training accuracy rose relatively fast and reached 0.99 after training for 10 epochs, while the validation accuracy was always lower than the training accuracy. As the assumed melting temperature coincided with the true melting temperature, shown in the panel (b), the training accuracy rose even faster than in panel (a) and reached unity after 5 training epochs. More importantly, the validation accuracy was also close to unity for the whole training procedure. In the panel (c), the assumed temperature was much higher than the true melting temperature, the training accuracy rose relatively slowly and reached 0.98 after 10 epochs. However, the validation accuracy was significantly lower and fluctuated around 0.82, indicating a wrong classification. The overall dependence of the validation accuracy on the assumed melting temperature exhibited a W-like shape, as shown in Fig. 3(d). We zoomed the peak in the inset and refined the sampling step of the assumed melting temperature. The peak lay at the temperature of 33170 K, which was the true melting temperature of the system for \(\kappa=0.34\). Note that here we did not label the samples based on any features, which was usually required in the supervised machine learning methods. Finally, we repeated this procedure for different \(\kappa\) and obtained the melting line in the phase diagram of the 2D complex plasmas. The results are shown Figure 3: Training and validation accuracy for the unsupervised machine learning method at \(\kappa=0.34\). 
(a-c) The training and validation accuracy for the assumed melting temperature at \(22240,33170,57390\) K. (d) The dependence of the validation accuracy on the assumed melting temperature, exhibiting a W-like shape. The red lines highlight the shape of this dependence. The peak of the validation accuracy is zoomed in the inset and the corresponding temperature is essentially the true melting temperature for the selected \(\kappa\). Figure 2: Architecture of the convolutional neural network for the identification of the melting line. Three \(3\times 3\) kernel 2D convolutional layers (Conv.) were included, followed by max-pooling (MaxP.). Gaussian dropout (Drop) was applied to prevent overfit and the layers were flattened (Flatten). The last two layers were fully connected, leading to a binary classification. as red squares in Fig. 4. The solid line resulted from the analysis of the hexatic order parameter of the 2D complex plasma, where the melting temperature was defined such that \(\overline{|\Psi_{6}|}=0.45\)[18; 28]. The dashed line was the fitted curve to the melting temperature obtained by the supervised machine learning method, where the liquid and crystal were labeled before training at extreme temperatures [23]. Our results generally agree with those obtained by the above two methods. The thermodynamic status in the 2D complex plasma can be identified by the transient particle structure alone. ## 4 Conclusion and outlooks In this paper, we demonstrate that the unsupervised machine learning method can be applied to the identification of the melting line in the 2D complex plasmas. The resulting melting line coincides with those obtained from the analysis of hexatic order parameter and supervised machine learning method. It turns out that \(\overline{|\Psi_{6}|}=0.45\) is indeed an suitable critical value of the order parameter to identify the melting transition of 2D complex plasmas. As the samples do not need to be labeled based on any order parameter, such method can be potentially applied to study the phase diagram of the 3D complex plasmas, where the structures of the plasma crystal deviate from the standard bcc, fcc and hcp structure [29; 30]. Such deformed structures may be caused by the anisotropic effects such as ion drag or gravity, and thus can not be accurately constructed and labeled. The advantage of the unsupervised machine learning method shall overcome such obstacles. Besides, this method can also be applied to the experiment results as long as sufficiently large samples can be collected. We leave these for the future work. ## 5 Acknowledgments This work was supported by the National Natural Science Foundation of China (NSFC), Grant No. 11975073 & 21035003, and the Fundamental Research Funds for the Central Universities, Grant No. 2232023G-10.
2310.11999
IRAS4A1: Multi-wavelength continuum analysis of a very flared Class 0 disk
Understanding the formation of substructures in protoplanetary disks is vital for gaining insights into dust growth and the process of planet formation. Studying these substructures in highly embedded Class 0 objects using the Atacama Large Millimeter/submillimeter Array (ALMA), however, poses significant challenges. Nonetheless, it is imperative to do so to unravel the mechanisms and timing behind the formation of these substructures. In this study, we present high-resolution ALMA data at Bands 6 and 4 of the NGC1333 IRAS4A Class 0 protobinary system. This system consists of two components, A1 and A2, separated by 1.8" and located in the Perseus molecular cloud at $\sim$293 pc distance. To gain a comprehensive understanding of the dust properties and formation of substructures in the early stages, we conducted a multi-wavelength analysis of IRAS4A1. Additionally, we sought to address whether the lack of observed substructures in very young disks, could be attributed to factors such as high degrees of disk flaring and large scale heights. To explore this phenomenon, we employed radiative transfer models using RADMC-3D. Our multi-wavelength analysis of A1 discovered characteristics such as high dust surface density, substantial dust mass within the disk, and elevated dust temperatures. These findings suggest the presence of large dust grains compared to the ones in the interstellar medium (ISM), greater than 100 microns in size within the region. Furthermore, while there's no direct detection of any substructure, our models indicate that some, such as a small gap, must be present. In summary, this result implies that disk substructures may be masked or obscured by a large scale height in combination with a high degree of flaring in Class 0 disks. [Abridged]
O. M. Guerra-Alvarado, N. van der Marel, J. Di Francesco, L. W. Looney, J. J. Tobin, E. G. Cox, P. D. Sheehan, D. J. Wilner, E. Macías, C. Carrasco-González
2023-10-18T14:31:08Z
http://arxiv.org/abs/2310.11999v1
# IRAS4A1: Multi-wavelength continuum analysis of a very flared Class 0 disk ###### Abstract Context:Understanding the formation of substructures in protoplanetary disks is vital for gaining insights into dust growth and the process of planet formation. Studying these substructures in highly embedded Class 0 objects using the Atacama Large Millimeter/submillimeter Array (ALMA), however, poses significant challenges. Nonetheless, it is imperative to do so to unravel the mechanisms and timing behind the formation of these substructures. Aims:In this study, we present high-resolution ALMA data at Bands 6 and 4 of the NGC1333 IRAS4A Class 0 protobinary system. This system consists of two components, A1 and A2, separated by 1.8" and located in the Perseus molecular cloud at \(\sim\)293 pc distance. Methods:To gain a comprehensive understanding of the dust properties and formation of substructures in the early stages, we conducted a multi-wavelength analysis of IRAS4A1. Additionally, we sought to address whether the lack of observed substructures in very young disks, could be attributed to factors such as high degrees of disk flaring and large scale heights. To explore this phenomenon, we employed radiative transfer models using RADMC-3D. We employed different approaches and compared the model outcomes with our observational data. This comparison allowed us to gain insights into the challenges in detecting substructures in nascent disks and shed light on the potential influence of the dust scale height on observations of protoplanetary disks. Results:The continuum data revealed the presence of two disks/envelopes around A1 and A2, along with structure connecting the two sources. Furthermore, spectral index measurements indicate lower optical depth within the A2 disk compared to A1. Our multi-wavelength analysis of A1 discovered characteristics such as high dust surface density, substantial dust mass within the disk, and elevated dust temperatures. These findings suggest the presence of large dust grains compared to the ones in the interstellar medium (ISM), greater than 100 microns in size within the region. By employing RADMC-3D, we confirmed that increasing the scale height creates the appearance of an asymmetry in protoplanetary disks. Our findings indicate that a scale height of at least 0.3 (H/R) is necessary to produce this observed asymmetry. Furthermore, while there's no direct detection of any substructure, our models indicate that some, such as a small gap, must be present. However, reproducing the intensity profile along the major and minor axes necessitates considering other processes that may be occurring within the IRAS4A1 disk. Conclusions:In summary, this result implies that disk substructures may be masked or obscured by a large scale height in combination with a high degree of flaring in Class 0 disks. ## 1 Introduction Recent studies of young stellar objects (YSOs) have concluded that dust growth from small particles to planetesimals may occur very early in the lifetime of protoplanetary disks (Tychonice et al., 2020; Drazkowska et al., 2023). To stop the radial drift of dust particles and allow growth to happen, dust evolution models require dust particles to be trapped within disk substructures. This requirement implies that the formation of these substructures should be well underway during the Class 0/I phase of disk evolution. 
While such substructures have been detected in the disks of some Class I objects (Sheehan, 2020; Segura-Cox et al., 2020), these are still very limited in number. Moreover, they have not yet been observed in Class 0 protoplanetary disks. High-resolution studies of more evolved (Class II) protoplanetary disks, however, have revealed that substructures are common (ALMA Partnership et al., 2015; Andrews et al., 2018;Isella et al., 2016; Long et al., 2018). These substructures, such as gaps, rings, arcs in cavities, and spiral arms (e.g., Casassus et al., 2013; van der Marel et al., 2013; Perez et al., 2016; Huang et al., 2018), have proven crucial in reconciling the timescales for dust drift and planet formation, allowing larger dust particles to decelerate and grow further (Pinilla et al., 2012). Many groups propose that disk substructures form due to interactions between the disk and forming planets (Dong et al., 2015; Zhang et al., 2018). Other processes, however, may also contribute to their presence (Flock et al., 2015; Zhang et al., 2015; Okuzumi et al., 2016; Takahashi & Muto, 2018). The presence of substructures in evolved disks raises questions about when these substructures form and how are they linked to planet formation. The origin of these substructures, however, remains a subject of debate. In addition to radial drift, vertical settling is another crucial process that significantly influences the evolution of dust particles in protoplanetary disks allowing them to grow and move into the mid-plane. This settling refers to the vertical motion of particles within the protoplanetary disk, driven by the balance between the gravitational force from the central star and the gas drag experienced by the particles. This settling process is influenced by several factors, including the turbulence of the disk and the sizes of the dust grains (Dullemond & Dominik, 2004). In effect, larger dust grains are generally more efficient at settling due to their greater inertia, decoupling from the gas and moving into the mid-plane where they can grow further, while smaller grains experience stronger gas drag and tend to stay more mixed with the gas in higher layers of the disk (Barriere-Fouchet et al., 2005). Vertical settling is expected to occur faster than radial drift and can be particularly pronounced in the inner regions of disks (Laibe et al., 2014). In evolved Class II disks, some studies have observed that the larger dust particles are already primarily located at the mid-plane, indicating that settling has already occurred (Pinte et al., 2016). On the other hand, the larger dust particles in younger disks may not have had enough time to settle completely, and the settling process may be ongoing in Class I disks with larger vertical extend (Villenave et al., 2020, 2023). Furthermore, previous studies have provided enough evidence supporting the occurrence of grain growth in Class 0 young sources, as indicated by their low millimeter spectral indices. Notably, these studies have shown that the spectral index values are larger (ranging from 3.5 to 5) within the envelope at scales extending beyond 2000 au compared to the values (\(<3.5\)) at smaller scales (\(<\)200 au) (Kwon et al., 2009;Jorgensen et al., 2007). More recent findings suggest that young objects, particularly Class 0 YSOs, exhibit significant degrees of flaring and have considerable scale heights (Sheehan et al., 2022; Michel et al., 2022). 
This flaring and large scale height, irrespective of resolution or optical depth considerations, may conceal substructures in these systems or even prevent their formation. As a result, the settling of large particles is still ongoing, and much material remains in higher layers of the disk. More recently, a large study was performed in the eDisk survey (Ohashi et al., 2023) of Class 0/I objects with several new findings about their young disks. Understanding the dust properties, vertical structures, and evolution of substructures from the early disk stages is crucial for comprehending the onset and progression of planet formation, as dust evolution and grain growth play vital roles in that process. In this study, we investigate the protobinary system NGC1333-IRAS4A (IRAS4A), which is situated in the Perseus molecular cloud at a distance of 293 parsecs (pc) (Zucker et al., 2018). The system consists of two Class 0 protostars, namely IRAS4A1 and IRAS4A2, which are separated by an angular distance of 1.8" (Tobin et al., 2018). Both IRAS4A1 and IRAS4A2 are surrounded by an envelope with a total mass of approximately 8 \(M_{\odot}\) and a total luminosity of around 5 \(L_{\odot}\)(Maury et al., 2019). Both objects have very well-distinguished outflows (San \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline Project & ALMA & Repr. & ToS & Sensitivity & Rms & Min BL & Max BL & BW \\ Code & PJ & Band & Frequency (GHz) & (s) & Array & (mJy) & (mJy) & (mJy) & (m) & (m) & (GHz) \\ \hline \multicolumn{11}{c}{Long baseline observations} \\ \hline 2018.1.00510.S James Di Francesco & 6 & 265.88 & 7111 TMI(C43-8) & 0.0220 & 0.10 & 92.1 & 8547.6 & 248-268 \\ 2018.1.00510.S James Di Francesco & 4 & 140.84 & 8913 TM1(C43-9) & 0.017 & 0.013 & 83.1 & 16196.3 & 136-154 \\ \hline \multicolumn{11}{c}{Short baseline observations} \\ \hline 2018.1.00510.S James Di Francesco & 6 & 265.88 & 3267 TM2 (C43-5) & 0.04690 & 0.02050 & 15.1 & 2617.4 & 248-268 \\ 2018.1.00510.S James Di Francesco & 4 & 140.84 & 3125 TM2 (C43-6) & 0.037 & 0.045 & 15 & 2516.9 & 136-154 \\ \hline \end{tabular} \end{table} Table 1: Observations characteristics of IRAS4A \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & Central & Central & Synthesized beam & & & & & & \\ & Frequency & Wavelength & Major \(\times\) minor & Beam P.A. & rms & Peak flux A1 & Peak flux A2 & Robust & Peak SNR \\ Band & (GHz) & (mm) & (mas \(\times\) mas) & (deg) & (\(\mu\)Jy/beam) & (mJy/beam) & (mJy/beam) & & \\ \hline 6 & 256.994 & 1.2 & 78 \(\times\) 31 & 20.82 & 135.299 & 14.04 & 19.92 & 0.0 & 147.5 \\ 4 & 145.009 & 2.1 & 47\(\times\)29 & 5.79 & 26.27 & 3.13 & 2.63 & 0.0 & 120.4 \\ VLA Ka & 32.95 & 9.1 & 75\(\times\)54 & 79.94 & 9.773 & 0.719 & 0.306 & 0.5 & 80 \\ \hline \end{tabular} \end{table} Table 2: IRAS4A intrinsic continuum images characteristics \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{3}{c}{Synthesized beam} & & & \\ & Major \(\times\) minor & rms & Peak flux A1 & Peak flux A2 \\ Band & (mas \(\times\) mas) & (\(\mu\)Jy/beam) & (mJy/beam) & (mJy/beam) \\ \hline 6 & 78\(\times\)78 & 370.5 & 33.94 & 40.01 \\ 4 & 78\(\times\)78 & 62.53 & 13.09 & 8.17 \\ VLA Ka & 78\(\times\)78 & 9.89 & 0.943 & 0.355 \\ \hline \end{tabular} \end{table} Table 3: IRAS4A tapered continuum images characteristics tangelo et al. 2015). We present here the continuum emission of both A1 and A2. We aim to study the structure of IRAS4A1 and investigate the absence of substructures in this component using radiative models. 
Additionally, in a separate paper, we will discuss the line emission and the presence of complex organic molecules in the IRAS4A system, as well as the continuum analysis of IRAS4A2. ## 2 Observations The observations used in this paper were obtained using the Atacama Large Millimeter/submillimeter Array (ALMA). Band 4 (1.2 mm) and Band 6 (2.1 mm) data were taken as part of the project code 2018.1.00510.S (PI: James Di Francesco). The calibration of the data was performed by the ALMA staff and was restored by the allego team at Leiden University. For Band 4, the observations were carried out in five execution blocks spanning from October 16th, 2018 to September 12th, 2021. Band 6 data were acquired in four execution blocks from November 19th, 2018 to September 30th, 2019. The total observing time on source for Band 4 was 3.34 h, while Band 6 had a total observing time of 2.88 h. Table 1 provides additional information regarding the characteristics of the data utilized in this study. The data reduction process was carried out using the Common Astronomy Software Applications (CASA, (McMullin et al. 2007)) version 5.7.0. The continuum spectral windows were separated from the line spectral windows and then averaged into eight channels for both data sets, Band 6 has 12 spectral windows centered at 264 Ghz, 252 GHz, and 250 Ghz with a total bandwidth of 2 GHz each. Band 4 has 15 spectral windows centered at 138 GHz, 150 Ghz, and 152 GHz with a total bandwidth of 2 GHz each. The flux calibration errors are set to the nominal values of 5% at Bands 4, 6. Self-calibration techniques were employed for each spectral window individually using solution intervals of inf, 60s and 30s. Initially, we performed phase only self-calibration to the short baseline data which resulted in significant improvements for Band 4 data (signal-to-noise ratio, from 88 to 380). For Band 6 data, however, only an increase in the SNR from 61 to 69 was achieved. In any case, sufficient self-calibration solutions were found, leading to enhanced data quality. Amplitude self-calibration was also performed but we stopped after a single step for most of the spectral windows, as it did not yield substantial improvements in the signal-to-noise ratio. For the long baseline data, we also performed phase self-calibration, although the improvement observed was comparatively less significant than in the short baseline data (Band 4 SNR from 42 to 86 and Band 6 SNR from 16 to 19). The reason for the lesser improvement in long baselines compared to short baselines could be attributed to a higher frequency of returns to the phase calibrator source during the long-baseline observations. Additionally, since self-calibration was exclusively applied in the same configuration, the cleaning process for long-baseline data often struggles to model the largest angular scales, even though they are present. This limitation affects the visibility data, especially considering the substantial amount of large scale emission present in these data. Amplitude self-calibration was only applied to a few specific spectral windows due to minimal enhancements in the SNR. The final data sets were obtained after concatenating all the spectral windows together in which no alignment was needed for any of them. Moreover, Very Large Array (VLA) data of IRAS4A was obtained from the VLA Nascent Disk and Multiplicity (VANDAM) survey (Tobin et al. 2016) conducted in the Perseus molecular cloud. 
The observations took place on October 21, 2013, employing the B-array configuration. For the correlator setup, two basebands with a total bandwidth of 4 GHz were utilized. These basebands were centered at frequencies of 36.9 GHz and 28.5 GHz, respectively. The setup was then further divided into 32 spectral windows, each having a bandwidth of 128 MHz. The VLA Ka-band data in B-configuration have a shortest baseline length of 210 m and an estimated amplitude calibration uncertainty of \(\sim\)10%. The final continuum images were created using task _clean_ in CASA. In addition, we used the MTMFS deconvolver (Rau & Cornwell 2011) with nterms=2, together with scales of 0, 10, 30, and 50 times the pixel size (0.003" and 0.01" for ALMA and VLA images, respectively). Briggs weighting was found optimal for the purpose of this project, as it provided the best compromise between sensitivity and resolution, and several Robust values were explored when making the final images. Figure 1: Images of IRAS4A at 1.2 mm, 2.1 mm, and 9.1 mm imaged at 78 mas resolution. The central RA and Dec positions for Band 4 and Band 6 are 03:29:10.510 and +31.13.31.010, respectively. For the VLA image, the central RA position is 03:29:10.502, and the central Dec position remains the same, +31.13.31.010. Both sources are visible at a separation of 1.8" with some surrounding leftover emission seen between them at 1.2 and 2.1 mm but not at 9.1 mm (see Appendix A.1, where the image color scale was changed to show the extended emission). Moreover, the peak emission of IRAS4A2 is larger than that of IRAS4A1 at 1.2 mm. The emission of both sources in the VLA image is more radially compact, though it is very faint for IRAS4A2. Furthermore, for the Band 4 data, the uv range was modified to decrease the resolution. A smooth tapering was applied by setting _uvtaper_ to 0.058". Both the Band 4 and Band 6 images were convolved to have the same 78 mas (milliarcsecond) beam. This common resolution allowed for a consistent analysis alongside the Very Large Array (VLA) data at 9.1 mm. Table 2 and Table 3 provide an overview of the characteristics of the images for Band 4 and Band 6, along with the VLA image obtained from the VANDAM survey. We acknowledge that the ALMA data for IRAS4A at Band 4, with its high resolution, time on source, and rms, can be favorably compared to the ALMA 2014 Long Baseline Campaign (LBC) Science Verification (SV) data of HL Tau at Band 6 (ALMA Partnership et al. 2015). The HL Tau observations were specifically designed to search for substructures, a goal that was also intended for the observations of IRAS4A1. The IRAS4A data set has a resolution of 47 mas, a time on source of 3.34 h and an rms of 13 \(\mu\)Jy, while the HL Tau data had a resolution of 35 mas, a time on source of 4.5 h and an rms of 11 \(\mu\)Jy. Given the numerous substructures identified in the HL Tau disk and the comparable nature of the data, one would expect these observations to be sufficient for detecting substructures in the IRAS4A1 disk. ## 3 Results Figure 1 displays the continuum images obtained from the observations. IRAS4A1 and IRAS4A2 are well resolved in both the Band 4 and Band 6 images. The majority of the submillimeter (sub-mm) emission originates from within each of these two sources. There is, however, additional faint emission observed between and surrounding A1 and A2, indicating some form of structure between the two sources.
This structure is particularly evident at 2.1 mm and 1.2 mm wavelengths but not at 9.1 mm, which might be related to the lower sensitivity to thermal dust emission at 9.1 mm (see Appendix A.1). The origin of this extended emission remains unknown but the material could potentially be associated with the surrounding molecular cloud or with some diffuse envelope/core material at these scales. In contrast, the emission surrounding A1 or A2 is likely originating from the inner envelope or a very optically thick flared disk. Moreover, the brightness peak emission from A1 is lower than that from A2 at 1.2 mm, contrary to what is observed at 2.1 mm and 9.1 mm (see section 2 for the flux values). One possible explanation for this discrepancy could be that both sources have different scale heights and different optical depths. Despite the objects' similar age, Band 6 may be tracing different layers in A1 and A2, possibly not corresponding to the mid-plane. Furthermore, our ALMA images have been thoroughly examined, and no additional compact objects, such as low-mass companions or distant galaxies, have been detected within the field of view (\(>3\sigma\)). Due to the high sensitivity and resolution of our data, it's highly improbable that any such objects have been missed. This suggests that A1 and A2 are unlikely to be part of Figure 2: Upper right and left panels: Brightness temperatures at 1.2 mm, 2.1 mm, and 9.1 mm of A1 and A2, respectively. Lower right and left panels: Spectral indices between 9.1 mm - 2.1 mm and 2.1 mm - 1.2 mm of A1 and A2, respectively. In the inner parts of the A1 source, some free-free emission might be present at 9.1 mm. Moreover, the brightness temperature in A1 at 1.2 mm is lower than that at 2.1 mm which, as seen from the spectral index, might indicate very optically thick emission at those wavelengths and small dust particles (\(<\)1 mm). We only considered the statistical uncertainties for the brightness temperature and spectral index. a binary with a separation greater than 20 au, which is the long axis of the beam. However, is important to note that our data is not sensitive enough to detect a star lacking a circumstellar disk. Lastly, a distinct asymmetry is observed for A1 in the 1.2 mm image, which is not apparent in the 2.1 mm and 9.1 mm images. The cause of this asymmetry warrants further investigation as it may provide valuable insights into the vertical structure of its Class 0 protoplanetary disk. The radial profiles from these images were obtained by averaging the emission in elliptical rings for both sources and the central position of the radial profile of A2 was determined based on the peak emission in the respective images. Since there is an asymmetry in the A1 source, the central position of the radial profiles was determined by a Gaussian fit using _imfit_. Although a slight bias might remain in the Gaussian fit, it was considerably less pronounced than using the peak emission center. Consequently, the center from the Gaussian fit is likely much closer to the ac- tail center of the source. For A1, the inclination and position angle values were set to 20 degrees and 96 degrees (NE direction, from North axis moving towards East), respectively, as reported for the outflow in Ching et al. 2016. On the other hand, for A2 we took the inclination to be 14 degrees (Ching et al. 2016) while the position angle was taken from measurements on the inner outflow (Chuang et al. 2021) (122 degrees NE). 
Finally, the brightness temperature values were calculated by applying the full Planck equation to the radial intensity profiles as indicated by Rybicki & Lightman (1979), the concept refers to the temperature of a blackbody having the same brightness at that specific frequency. Moreover, two additional radial profiles were generated, representing the spectral indices between 9.1 mm and 2.1 mm, as well as between 2.1 mm and 1.2 mm. Figure 2 displays both the brightness temperature profiles and the spectral indices for A1 and A2. By examining Figure 2, we can observe the behavior of the spectral indices for A1 and A2. For A1, the spectral indices are very low at the center of the source. As the radius increases, however, these indices gradually become larger. This continues until the noise of the 9.1 mm image starts to dominate the emission. Comparing the spectral index between 2.1 mm and 1.2 mm for A1 with that of A2, we find that the index for A1 remains consistently below 2 across most radii. On the other hand, the spectral indices for A2 are consistently above 2 throughout the range of radii considered. This discrepancy suggests that the emission from A1 is significantly more optically thick than that from A2 at these wavelengths. The difference in spectral indices between A1 and A2 implies differences in the physical properties of the two sources. For example, A1 may have a denser and more optically thick environment, which affects the observed spectral behavior. Furthermore, dust self-scattering might be affecting the inner regions of the IRAS4A1 disk. Additionally, some free-free emission might be increasing the brightness temperature in the inner regions of the A1 VLA image, affecting the spectral index between 9.1 mm - 2.1 mm. In the recent study conducted by Galametz et al. (2019) using independent measurements from the CALYPSO sample at 1.3 and 3.2 mm, they reported discovering remarkably low values of spectral indices (\(<\)2.0) within the inner regions of the IRAS4A1 envelope, specifically at distances of less than 200 au. This is in agreement with our spectral index values of the IRAS4A1 inner regions. Additionally, Galametz et al. (2019) also observed higher spectral indices values extending up to 2000 au, which was attributed to grain growth processes occurring within the envelope. It is crucial to acknowledge that our high-resolution image might be causing the extended component of the envelope to be resolved out, thereby making it difficult to measure the spectral index of this particular component. Jorgensen et al. (2007), previously pointed out that when extracting emission from the envelope, the spectral index of compact components would be flattened. Then, spectral index values below 3.5 at smaller radii could be indicative of the presence of another component, most likely a disk. ### Multi-wavelength analysis of a Class 0 young stellar object. In this study, we will adopt the hypothesis that the emission detected in our high-resolution images originates primarily from a disk rather than the envelope. The reason behind this is that the high resolution of our imaging may result in the loss of most of the emission from the extended components (envelope) and that, as mentioned before, our findings of low values of the spectral indices in Figure 2 further support the notion of a disk scenario. Of course, it needs to be noted that some emission coming from the inner envelope might still be contributing to the total emission. 
While there might still be some confusion within the envelope, we unfortunately didn't account for a dynamical distinction. Separating the continuum from the envelope is challenging, and due to the optical depth, analyzing the lines becomes quite limited. Additionally, to align with previous research, we will also consider the disk to be flared, similar to observations and results found in Class 0 Young Stellar Objects (YSOs) using ALMA data (Sheehan et al. 2022; Michel et al. 2022), as suggested by edge-on observations (Villenave et al. 2020, 2023) and, recently, by the eDisk survey (Ohashi et al. 2023). Protoplanetary disks are commonly expected to have millimeter or even centimeter-sized dust particles. Because of such grain growth, the albedo of the dust can be high at millimeter wavelengths, indicating that scattering plays a significant role in the opacity of the dust emission. When scattering is a dominant factor, the spectral index of the dust emission can no longer be directly associated with a spectral index of the dust opacity (i.e., \(\beta\)) (e.g., Sierra & Lizano 2020; Zhu et al. 2019). To analyze the spectral energy distribution (SED) of protoplanetary disks properly, it is crucial to consider both absorption and scattering effects in the dust opacity. To include the scattering effect, we can write the source function in the radiative transfer equation as: \[S_{\nu}(T)=\omega_{\nu}J_{\nu}+(1-\omega_{\nu})B_{\nu}(T), \tag{1}\] where \(J_{\nu}\) is the local mean intensity and \(\omega_{\nu}\) is the albedo, defined in terms of the scattering coefficient \(\sigma_{\nu}\) and the absorption coefficient \(k_{\nu}\) as \(\omega_{\nu}=\frac{\sigma_{\nu}}{\sigma_{\nu}+k_{\nu}}\). We can approximate this with the analytical solution found in Miyake & Nakagawa (1993), assuming the disk to be a vertically isothermal slab with isotropic scattering: \[J_{\nu}=B_{\nu}(T)[1+f(t,\tau_{\nu},\omega_{\nu})], \tag{2}\] where \[f(t,\tau_{\nu},\omega_{\nu})=\frac{\exp(-\sqrt{3}\epsilon_{\nu}t)+\exp(\sqrt{3}\epsilon_{\nu}(t-\tau_{\nu}))}{\exp(-\sqrt{3}\epsilon_{\nu}\tau_{\nu})(\epsilon_{\nu}-1)-(\epsilon_{\nu}+1)}, \tag{3}\] where \(t\) is the optical depth variable and \(\tau_{\nu}=\Sigma_{dust}\chi_{\nu}\), where both are measured perpendicular to the disk mid-plane. Also, \(\epsilon_{\nu}=\sqrt{1-\omega_{\nu}}\). Considering inclination effects by correcting the optical depth by the inclination angle (\(i\)) of the disk, with \(\mu=\cos i\), we reach the emergent specific intensity obtained by Sierra et al. (2019): \[I_{\nu}=B_{\nu}(T)[(1-\exp(-\tau_{\nu}/\mu))+\omega_{\nu}F(\tau_{\nu},\omega_{\nu})], \tag{4}\] where \[F(\tau_{r},\omega_{r})=\frac{1}{\exp(-\sqrt{3}\epsilon_{r}\tau_{r})(\epsilon_{r}-1)-(\epsilon_{r}+1)}\times\] \[\left[\frac{1-\exp(-(\sqrt{3}\epsilon_{r}+1/\mu)\tau_{r})}{\sqrt{3}\epsilon_{r}\mu+1}+\frac{\exp(-\tau_{r}/\mu)-\exp(-\sqrt{3}\epsilon_{r}\tau_{r})}{\sqrt{3}\epsilon_{r}\mu-1}\right], \tag{5}\] It is important to mention that for these equations isotropic scattering is assumed, which may be an incorrect approximation for \(2\pi a\geq\lambda\). To reduce the effect of the approximation, we replace the scattering coefficient in all equations with an effective scattering coefficient in the form (Ishimaru, 1978; Birnstiel et al., 2018): \[\sigma_{r}^{eff}=(1-g_{v})\sigma_{r}, \tag{6}\] where \(g_{v}\) is the asymmetry parameter, i.e., the expectation value of \(\cos\theta\), where \(\theta\) is the scattering angle (e.g., Ishimaru, 1978; Birnstiel et al., 2018).
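For concreteness, a minimal numerical transcription of the slab solution in Eqs. (4)-(5) might read as follows; the Planck function, the vertical optical depth and the albedo (both built from the effective scattering coefficient of Eq. (6)) are left as inputs, so this is a sketch of the emergent-intensity formula rather than the full fitting machinery used here.

```python
import numpy as np

def emergent_intensity(B_nu, tau_nu, omega_nu, mu):
    """Emergent intensity of Eqs. (4)-(5) for a vertically isothermal slab with
    isotropic scattering (Sierra et al. 2019).  B_nu is the Planck function at the
    dust temperature, tau_nu the vertical optical depth Sigma_dust * chi_nu,
    omega_nu the single-scattering albedo (both computed from the effective
    scattering coefficient of Eq. (6)), and mu = cos(i)."""
    eps = np.sqrt(1.0 - omega_nu)                   # epsilon_nu
    s3 = np.sqrt(3.0)
    denom = np.exp(-s3 * eps * tau_nu) * (eps - 1.0) - (eps + 1.0)
    F = ((1.0 - np.exp(-(s3 * eps + 1.0 / mu) * tau_nu)) / (s3 * eps * mu + 1.0)
         + (np.exp(-tau_nu / mu) - np.exp(-s3 * eps * tau_nu)) / (s3 * eps * mu - 1.0)
         ) / denom
    # Note the removable singularity where sqrt(3) * eps * mu == 1.
    return B_nu * ((1.0 - np.exp(-tau_nu / mu)) + omega_nu * F)

# Sanity checks: omega_nu -> 0 recovers B_nu * (1 - exp(-tau/mu)); tau_nu -> 0 gives 0.
# Example with B_nu normalised to 1, tau = 5, albedo 0.9, face-on disk (mu = 1):
print(emergent_intensity(1.0, 5.0, 0.9, 1.0))
```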
The values of \(g_{v}\) depend on the dielectric properties of the dust particles. For our calculations, the values obtained in Birnstiel et al. 2018 for \(\sigma_{r}^{eff}\) were used. In our analysis, the particle size distribution is assumed to follow a power law with a slope (\(n(a)\propto a^{-p}\)), where p is commonly assumed to be 3.5 according to measurements of the ISM (Mathis et al., 1977). Also, the DSHARP opacity data (Birnstiel et al., 2018) was employed, which considers particles without porosity and a composition of 20 % water fraction by mass, 32.91 % astronomical silicates, 7.43 % troilite, and 39.66 % refractory organics. Equation 5 then ultimately depends on only three free parameters: dust temperature (\(T_{dust}\)), the surface density (\(\Sigma_{dust}\)), and the particle size (\(a_{max}\)). With three or more observed wavelengths, it becomes possible to solve the equation and obtain estimates for the three free parameters (\(T_{dust}\), \(\Sigma_{dust}\), \(a_{max}\)). It is important to note that this model assumes a single temperature at each radius within the disk. This assumption generally holds when most of the dust is settled in the disk's midplane. In cases where the emission is originating from an envelope or a flared disk involving different layers, however, this assumption may not be valid. So, it is worth noting that the temperature structure within protoplanetary disks can be complex, particularly if there are significant vertical temperature gradients or if different layers of the disk are contributing to the observed emission at different wavelengths. In these situations, a more sophisticated modeling approach that considers the vertical structure and temperature gradients within the disk would be necessary to interpret the observed SED. Figure 3: Left panels: Probability distributions of the dust parameters (\(a_{max}\),\(T_{dust}\),\(\Sigma_{dust}\)) of A1, the red lines in each plot is the expected value of each of the dust parameters. Middle panel: Optical depths at each radius of the three different wavelengths used in this work. Right panels: Comparison of the radial intensities from the observations and the model at each wavelength. The plots show very high temperatures and surface densities along the radius of the disk. Particle sizes have increased and are large in comparison with the ISM but still very small for the mid-plane of protoplanetary disks, in agreement with the expectation that not much settling has occurred in very young protoplanetary disks. Lastly, A1 shows optically thick emission at Band 6 and 4 at the inner radii. A multi-wavelength analysis similar to ours here was previously performed before on HL Tau using four images between 8 mm and 0.9 mm (Carrasco-Gonzalez et al. 2019) by simplifying the spectral behavior of the extinction coefficient using a power law. After that, it has been used in several other papers (e.g. Macias et al. (2021), Sierra et al. (2021) and Guidi et al. (2022)) using the exact values of the dust opacity at each wavelength, including the work presented in this paper as well. This model is a first approach in determining the dust properties around a Class 0 YSO like A1. A Bayesian approach was employed to obtain the posterior probability distributions of the model parameters (\(a_{max},T_{dust},z_{dust}\)) at each radius. 
To achieve this, a standard log-normal likelihood function was used, which is defined as follows: \[\ln p(\bar{I}(r)\mid\Theta)=-0.5\sum_{i}\left((\frac{\bar{I}_{i}-I_{mid}}{ \bar{\sigma}_{\bar{I}_{i}}})^{2}+\ln(2\pi\bar{\sigma}_{\bar{I}_{i}}^{2}) \right), \tag{7}\] where \(\bar{I}\) is the azimuthally averaged intensity at radius \(r\) and at frequency \(\nu_{i}\), \(I_{m,i}\) is the model intensity from different combinations of the three free parameters at a radius \(r\), \(\Theta\) is the vector of the three free parameters. In addition, we assumed that the uncertainty \(\bar{\sigma}_{\bar{I}_{i}}\) at radius \(r\) is: \[\hat{\sigma}_{\bar{I}_{i}}=\sqrt{\sigma_{\bar{I}_{i},i}^{2}+(\delta_{i}\bar{I }_{i})^{2}}, \tag{8}\] where \(\sigma_{\bar{I}_{i}}^{2}\) is the error of the mean, obtained from the azimuthally averaged intensity profiles (See section 2), and \(\delta_{i}\) is the flux calibration error at each frequency. Figure 3 shows the analysis we performed, a model grid of intensities was created using various dust parameters. To infer the physical parameters of the dust particles, we compared the observed intensity at each radius with the expected spectral energy distribution (SED) derived from different combinations of the three free parameters in equations 5 and 6 (\(a_{max}\) from 0.001 - 10 cm, \(T_{dust}\) from 0.1 - 250 K, \(\Sigma_{dust}\) from 0.1 - 1000 \(gcm^{-2}\)). In order to better match the observational data, the probability distribution of each parameter was plotted, along with the corresponding expected value (represented by the red curve in Figure 3). The expected value of each parameter was obtained by: \[E(X)=\frac{\sum_{i}X_{i}P(X_{i}\bar{I}(r))}{\sum_{i}P(X_{i}\bar{I}(r))} \tag{9}\] where \(X_{i}\) is each value in all the parameters inside our grid, and \(P(X_{i}|\bar{I}(r))\) is the marginalized posterior probability of each parameter in every single cell of the grid. Multiple equally likely solutions close to each other were found for the A1 source, which equally explained the observed intensity. All the possible solutions have similar \(\Sigma_{dust}\) and dust temperature, which is explained in Zhang et al. (2023) as there are no strong Mie interference patterns when \(2\pi a<\lambda\). Finally, the optical depth values were derived from the analysis to provide insights into the dust properties at different locations within the disk. Figure 3 shows the dust parameters, the optical depths at each wavelength, and the intensities of the observations compared with the ones obtained from the model. From Figure 3, it is evident that the A1 disk exhibits high optical thickness at the inner radii of A1, which poses challenges in fitting the dust parameters accurately. This observation suggests that the disk is highly unstable and contains very small dust particles (hundreds of microns in size) relative to dust grain sizes in protoplanetary disks. Notably, the derived temperature from the multi-wavelength analysis in A1 appears to be higher compared to other Class II disks analyzed using similar methods (Macias et al. (2021), Sierra et al. (2021), Guidi (2019) and Carrasco-Gonzalez et al. (2019)). This discrepancy may be attributed to the young age of the source and other processes occurring within the system, like infalling material that can contribute to the elevated temperature of the dust particles, viscous heating or even back warming by the envelope (Natta 1993). 
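Before continuing with the derived dust properties, a schematic of the grid evaluation described above (Eqs. (7)-(9)) is given for reference at a single radius; model_intensity is a hypothetical callable standing in for Eq. (4) combined with the DSHARP opacities, flat priors are assumed, and the parameter grids would in practice be log-spaced over the quoted ranges.

```python
import numpy as np

def grid_posterior(I_obs, sigma_obs, delta_cal, freqs, model_intensity,
                   T_grid, Sigma_grid, amax_grid):
    """Evaluate the log-normal likelihood of Eq. (7) on a (T, Sigma, a_max) grid at
    one radius and return the expectation value of each parameter (Eq. 9).
    I_obs, sigma_obs: observed mean intensities and their errors at each frequency;
    delta_cal: flux calibration fractions; model_intensity(nu, T, Sigma, amax) must
    return the emergent intensity of Eq. (4) (opacity tables are not reproduced here)."""
    sig_hat = np.sqrt(np.asarray(sigma_obs) ** 2
                      + (np.asarray(delta_cal) * np.asarray(I_obs)) ** 2)   # Eq. (8)
    logL = np.zeros((len(T_grid), len(Sigma_grid), len(amax_grid)))
    for i, T in enumerate(T_grid):
        for j, Sig in enumerate(Sigma_grid):
            for k, amax in enumerate(amax_grid):
                I_mod = np.array([model_intensity(nu, T, Sig, amax) for nu in freqs])
                logL[i, j, k] = -0.5 * np.sum(((np.asarray(I_obs) - I_mod) / sig_hat) ** 2
                                              + np.log(2 * np.pi * sig_hat ** 2))
    post = np.exp(logL - logL.max())        # unnormalised posterior, flat priors
    post /= post.sum()
    # Marginalised expectation values, Eq. (9)
    exp_T = np.sum(post.sum(axis=(1, 2)) * np.asarray(T_grid))
    exp_Sigma = np.sum(post.sum(axis=(0, 2)) * np.asarray(Sigma_grid))
    exp_amax = np.sum(post.sum(axis=(0, 1)) * np.asarray(amax_grid))
    return exp_T, exp_Sigma, exp_amax, post
```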
Furthermore, A1 displays a notably high dust surface density and mass in comparison to Class II disks. This result aligns with the notion that a significant portion of the material remains distributed as sub-mm particles surrounding the star rather than having settled and grown in the disk's mid-plane where it cannot be detected by our observations due to high optical depths, the high mass inferred is expected for a very young source like A1, which is likely to be very gravitationally unstable having still a substantial circumprotostellar mass not yet accreted by the central star. We note that the particle sizes found in other disk studies often vary significantly depending on the presence of substructures, which are not detectable in the A1 source. Moreover, the particle sizes observed in other Class II disks tend to be larger (cm-sized particles) compared to the 0.1 mm particles found in A1. This disparity can be attributed to the different evolutionary stages of the disks, with the dust in the other disks having evolved and settled more in the mid-plane. Figure 3 indicates that the material flowing in to form the disk already contains large dust particles (\(>\) 10 microns) compared to the average ISM dust sizes. This suggests widespread grain growth across the entire disk radius. However, as one approaches the midplane and the central star, particles tend to become larger. These large dust particles compared to the ISM particles imply that grain growth is not limited to the midplane but also occurs in the flared regions of the disk where infall is the likely process that triggers this growth. Additionally, the increase in error at the outer region of the A1 disk is a result of the spatial sensitivity of the VLA image. We compare the temperature profile of A1 with other Class 0 sources and found that A1's temperature agrees with those derived from CO and \(H_{2}CO\) snowlines in IRAS04302 (Class I) and L1527 (Class I/O) by van't Hoff et al. (2020). Comparing with models from Yang et al. (2017) on the Class 0 Protostar BHR71; however, we note that the derived densities in A1 are at least an order of magnitude lower. This difference could potentially stem from the observations utilized by Yang et al. (2017) are of shorter wavelengths (_Herschel_) that are more sensitive to the cloud, the surrounding envelope, and smaller dust grains. Concerning particle sizes, our analysis indicates that the inner disk of A1 comprises particles nearly 0.3 mm in size. This suggests that the dust size distribution in the disk is primarily characterized by larger particles when compared to typical interstellar medium (ISM) dust sizes. However, in comparison to pebbles found in more evolved disks, these particles are relatively small. This suggests that although some dust growth has already occurred, the process is still ongoing. Several studies focused on Class 0 objects have measured dust sizes in the envelope using low dust emissivity indices, revealing that grain growth might already happen in this Class 0 objects maybe even up to mm-sized particles (Jorgensen et al. (2009),Galametz et al. (2019)). More specifically, scattering measurements from polarization observations in IRAS4A show the possibility of large millimeter size particles within the system (Cox et al. 2015). These findings di verge from the multi-wavelength analysis in our work that shows smaller dust particles in IRAS4A1. The findings presented in Figure 3 do not provide a definitive explanation for the spectral index below 2 in Figure 2. 
The particle sizes around 0.1 mm align with what is expected for dust self-scattering, indicating the presence of low spectral indices (Liu, 2019). Nevertheless, these observed values can also be rationalized by considering a highly optically thick disk within r\(<\)60 AU, where the inner layers are warmer than the outer layers. This scenario not only aligns with the observations but also corresponds to the outcomes illustrated in Figure 3, showcasing the high optical depth across all radii. ### Generic gap models with large scale heights. To explain the absence of observed substructures and the observed asymmetry in the IRAS4A1 disk, we employed radiative transfer models using RADMC-3D (Dullemond et al., 2012). For these models we assume that instead of observing a highly optically thick envelope with an embedded disk, IRAS4A1 is actually a flared disk with a significant scale height (the surrounding envelope has been resolved out in these high-res images, See section 3). When considering a greater scale height and flaring in the disk, it's crucial to differentiate between flared disks, which represent an equilibrium configuration of orbiting material, and an infalling model. In this study, we will model the flared disk solely from the perspective of the dust continuum, without incorporating a dynamic approach such as infalling or rotational motions. This flaring effect can create an asymmetry in the disk and make it challenging to detect substructures, if they exist. The combination of disk inclination, large scale heights, and optically thick emission, even at Band 6, contributes to this effect. Evidence supporting the presence of a highly flared disk instead of an envelope has been observed in the Class 0 Protostar L1527 IRS by Sheehan et al. (2022). To test this assumption, a model of the dust continuum emission at 1.2 mm was constructed using RADMC-3D. Initially, "generic gap models" inspired by the disk of HD163296 were made to investigate the disappearance of substructures with increasing scale height. Subsequently, we developed a specific model to the IRAS4A1 disk to reproduce the observed asymmetry at 1.2 mm in combination with the absence of substructures. These radiative transfer models allow us to perform a detailed examination of the disk's dust evolution and provide insights into its vertical structure. For the generic gap models, we fixed certain parameters based on previous studies of HD163296. For the star, the parameters in Table 1 from (Andrews et al., 2018) were used: \(M_{\star}\) = 2.04 \(M_{\odot}\), \(L_{\star}\) = 17 \(L_{\odot}\), \(T_{\star}\) = 9332 K, and a distance of 101 pc. The positions of the two most prominent gaps were taken to be 49 and 86 au with a fixed width of 10 and 8 au, respectively. The disk model was taken to have an inclination of i = 46.7\({}^{\circ}\), a position angle of 133.3\({}^{\circ}\), and a dust mass of 0.039\({}_{\odot}\) from Dullemond et al. (2020). In addition, a size of 110 au for the disk was chosen. Inside RADMC-3D, a generic protoplanetary disk model was used, with the scale height varied in each model. To incorporate the DSHARP dust particle opacities, the optool software (Dominik et al., 2021) was utilized, allowing for their utilization within RADMC-3D. 
Finally for completeness, RADMC-3D calculated the dust temperature using the density distribution for the generic protoplanetary disk model as follows: \[\rho(r,z)=\frac{\Sigma(r)}{H_{p}\sqrt{2\pi}}exp(-\frac{z^{2}}{2H_{p}^{2}}), \tag{10}\] , where r is the distance to the star from the disk, \(\Sigma(r)\), is the dust surface density, and \(H_{p}\) is the scale height of the dust disk. The scale height (\(H_{p}\)) in the generic protoplanetary disk model follows a power-law dependence on the radial distance as follows: \[H_{p}=H_{100}(\frac{r}{100AU})^{1+\Psi}, \tag{11}\] where \(\Psi\) is the flaring index, with a predefined value 0.14, and \(H_{100}\), is the value of the scale height at a distance of 100 au from the central star. The scale height parameter was increased in the generic gap models until the substructures disappeared due to shadowing, obscuration, and/or contrast effects. Figure 4 shows the images of these models together with a cut through their major and minor axes. In Figure 4, it is evident that substructures present in young Class 0 disks are challenging to observe, if present, due to the large scale heights that these disks may exhibit. The cuts shown in Figure 4 provide additional insights into the behavior of the disk at different scale heights. Along the major axis, even at a low scale height of 0.05, a strong flattening effect on the rings and gaps is observed. This effect is highly dependent on the inclination and position angle of the disk. The intensity variations along the minor axis reveal another interesting aspect: in the SW part of the disk, a lack of intensity is observed, an asymmetry caused by a large vertical structure in the disk, also seen in other sources, such as Lee et al. (2021), Lin et al. (2023). The direction of this asymmetry is determined by the orientation of the modeled disk. Furthermore, as the scale height increases, both the depth of the gaps and the visibility of substructures begin to flatten along the minor axis too. Eventually, there is a point where substructures (\(\leq\)10 au) can no longer be distinguished. This example demonstrates the impact of a highly flared disk on the visibility and discernibility of substructures, if any, in a young protoplanetary disk. ### Large scale height and very flared disk models of IRAS4A1 To investigate the asymmetry observed in the 1.2 mm image of the IRAS4A1 disk, additional modeling was performed in RADMC-3D. The objective was to determine whether or not the observed asymmetry could be reproduced in a large scaleheight flared disk scenario. To set up the RADMC-3D models, we fixed specific parameters. Due to the difficulty of determining the stellar properties directly from the literature for a highly embedded Class 0 object like IRAS4A1, average values of stellar properties in a number of Class I systems were obtained from Tables 1 and 2 in Fiorellino et al. (2023). These average values include the stellar mass (1.55 \(M_{\odot}\)), radius (2.1 \(R_{\odot}\)), and effective temperature (3700 K). The inclination and position angle of the disk were fixed at 20\({}^{\circ}\) and 99\({}^{\circ}\), respectively. The dust mass in the disk was taken from the multi-wavelength analysis, resulting in a value of 0.11\({}^{+0.08}_{-0.04}\)\(M_{\odot}\). The scale height in the RADMC-3D models for the IRAS4A1 disk was initially set to H/R = 0.3, based on the appearance of asymmetry in the generic gap models. In addition to this base model, eight more models were created. 
three with a fixed scale height, three with a fixed high flaring index (\(\Psi\) = 1.3), and two models with reduced gap widths. This variety allowed exploring different scale heights within the context of a consistently high flaring profile. Figure 5 shows the corresponding cuts through the major and minor axes in all eight models. The IRAS4A1 observation and the model that best reproduces its intensity along the major and minor axes are shown in Figure 6. By examining the outcomes of these various models, we can observe the influence of a gap, a large flaring index, and large scale heights on the observed asymmetry in young Class 0 sources like the IRAS4A1 disk. From the radiative transfer models of IRAS4A1, it is evident that an asymmetry is formed on the north (compared to the south) part of the disk at large scale heights. The inclination and position angle in the models greatly influence the resulting asymmetry, emphasizing the uncertainties in these results. Furthermore, the simplicity of the model employed in this study may limit its ability to accurately reproduce the complexities of a Class 0 young stellar object like IRAS4A1. Nevertheless, the intensity profiles along the major and minor axes suggest the presence of "substructures" or other unknown processes occurring in the actual observations, as most models appear flat unless a gap is included. In the upcoming paragraphs, we will speculate about the substructure scenario in the IRAS4A1 disk, although it is possible that something else is shaping the intensity profiles along the major and minor axes.

Figure 4: Top panels: Generic gap models in RADMC-3D increasing the scale height up to H/R 0.3. Bottom panels: Normalized intensities of the major axis and minor axis cuts of the generic gap models. An asymmetry in the minor axis of the disk becomes more prominent as the scale height increases. In both the minor and major axes, substructures start to flatten with increasing scale height, and after 0.3 they become barely visible.

The models with a very small gap exhibit intensity profiles that more closely resemble the observed profile at 1.2 mm in both the major and minor axes. This difference may indicate that the gaps at these early stages are still forming and that we will need still higher resolution to see them. Regardless of whether IRAS4A1 is indeed a flared disk, a lower limit on the scale height for generating an asymmetry can be established (H/R \(>\) 0.3). Note that the intensity profiles in the figure are normalized, as the primary goal of this study is not to replicate the flux of the IRAS4A1 source precisely, but rather to provide insights into the earliest stages of disk and planet formation. Nevertheless, if our observations are capturing emission from higher layers in the disk and if the emission remains highly optically thick, it may be challenging to detect substructures with ALMA at the available resolution.

## 4 Discussion

The inferred large scale height (H/R \(>\) 0.3) in IRAS4A1 has significant implications for planet formation. Despite the fast settling expected during the disk's lifetime, the optical thickness and asymmetry observed at Band 6 indicate the presence of material with varying grain sizes in higher layers of the disk. This result implies that settling is still ongoing for millimeter-sized particles. Indeed, this state is expected considering the settling timescales (\(<\)1 Myr, Dullemond & Dominik 2004) and the estimated dynamical age of the outflows in IRAS4A (a few 0.01 Myr, Taquet et al.
2020). Furthermore, settling and radial drift are likely acting together during these early stages of dust evolution and growth in the disk. The large scale height of the disk may also obscure young substructures, as suggested by models, particularly when combined with very narrow substructures measuring less than 4 au in size. While we are unable to directly resolve substructures (i.e., gaps) in the disk, our models suggest that some must be present to explain the observed bumps in the radial profile of IRAS4A1. These small-scale features are challenging to observe directly with current resolution capabilities (no substructures observed in IRAS4A1), but their presence at these early stages could indicate two possibilities. Firstly, if these substructures are caused by planet-disk interactions, it suggests that planets formed nearly instantaneously after the collapse of the molecular cloud. Furthermore, given sufficient time, substructures within protoplanetary disks are expected to widen, resembling those observed in other systems. This widening occurs as planets grow in size by accreting and carving out material from their surroundings. We note that substructures can potentially arise from mechanisms other than planet-disk interaction. If this is the case, it introduces an intriguing possibility. In this scenario, the substructures initially form early on and are narrow, as indicated by the narrow gap width in the models. Given settling-induced growth of dust particles and other processes occurring within these narrow substructures, substructure formation through alternative mechanisms may itself trigger planet formation within such gaps. Hence, planet formation could take place very early in the disk's evolution, following an evolutionary path similar to the first scenario.

Figure 5: Normalized intensities of the major axis and minor axis cuts of the IRAS4A1 RADMC-3D models. Top panels: The scale height of each model was fixed to H/R=0.3. Middle panels: The \(\Psi\) flaring index values were fixed to 1.3 in the RADMC-3D models. Bottom panels: The scale height was fixed to 0.3 and the flaring index to 1.3, but the gap width was reduced in each model. The 'shoulder' shape and the asymmetry of the disk are better reproduced using a large scale height, a large flaring index, and a small gap width. Unfortunately, an excess of emission is found in the models and the small size of the gap needed cannot be observed at the current resolution.

An additional crucial aspect to consider is the large flaring index observed in IRAS4A1. If the "shoulder" position, which served as the basis for defining the gap position for the models, around the range of 20-40 AU is pointing to a substructure, this substructure could be relatively close to the disk mid-plane. For example, smaller scale heights may occur in the center of the disk, meaning that the closer a substructure forms to the center the easier it would be to detect it in a very flared disk. Consequently, any mechanism responsible for carving out these substructures probably starts in the mid-plane and is unable to reach large-scale heights, as seen in other protoplanetary disks when scattered light observations and sub-mm observations with ALMA are compared. It is important to note, however, that the combination of large scale heights and a large flaring index could still hide further substructures in the outer radii of the IRAS4A1 disk.
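As a point of reference for the disk models discussed above, the short Python sketch below evaluates the parametric dust structure of Eqs. (10) and (11); the numerical values used here (H/R = 0.3 at 100 au, flaring index 0.14, a 0.1 \(M_{\odot}\) dust mass, a 60 au outer radius and a \(\Sigma\propto r^{-1}\) surface-density slope) are illustrative assumptions rather than the fitted IRAS4A1 parameters.

```python
import numpy as np

AU = 1.496e13       # astronomical unit in cm
M_SUN = 1.989e33    # solar mass in g

# Illustrative parameters only -- not the fitted IRAS4A1 values.
H100 = 0.3 * 100 * AU   # scale height at 100 au, i.e. H/R = 0.3 there
PSI = 0.14              # flaring index of Eq. (11)
M_DUST = 0.1 * M_SUN    # dust mass
R_OUT = 60 * AU         # outer disk radius
P = 1.0                 # assumed surface-density power-law slope

def scale_height(r):
    """H_p(r) = H_100 (r / 100 au)^(1 + psi), Eq. (11)."""
    return H100 * (r / (100 * AU)) ** (1.0 + PSI)

def surface_density(r):
    """Sigma(r) = Sigma_0 (r / 100 au)^(-p), normalized to the total dust mass."""
    sigma0 = M_DUST * (2.0 - P) / (2.0 * np.pi * (100 * AU) ** P * R_OUT ** (2.0 - P))
    return sigma0 * (r / (100 * AU)) ** (-P)

def dust_density(r, z):
    """rho(r, z) = Sigma(r) / (H_p sqrt(2 pi)) exp(-z^2 / 2 H_p^2), Eq. (10)."""
    hp = scale_height(r)
    return surface_density(r) / (hp * np.sqrt(2.0 * np.pi)) * np.exp(-z**2 / (2.0 * hp**2))

# Sanity check: the vertical integral of rho recovers Sigma(r).
r = 30 * AU
z = np.linspace(-10, 10, 4001) * scale_height(r)
print(np.trapz(dust_density(r, z), z) / surface_density(r))   # ~ 1.0
```

Integrating the density vertically recovers \(\Sigma(r)\), which is the quick consistency check printed at the end.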
On the other hand, if there are no substructures in the flared disk of IRAS4A1, planet formation may then occur only at a later stage, when larger particles have already settled in the disk midplane, taking into account the timescales required for settling. Recently, similar results including the "shoulders", asymmetries, and large scale heights were found in YSOs in studies by the eDisk survey team (Ohashi et al. 2023). Regardless of the specific dynamics within the IRAS4A1 disk, it is becoming evident that Class 0 Young Stellar Objects (YSOs) exhibit flared disks with significant scale heights, providing valuable insights into the planet formation process.

## 5 Summary and conclusions

We have shown high-resolution ALMA images (78 mas) of the IRAS4A binary system in Bands 4 and 6. In summary, the key findings of this paper can be outlined as follows:

* No substructures were detected in either A1 or A2 at the current resolution.
* Analysis of spectral indices and brightness temperatures indicated that A1 is significantly more optically thick than A2.
* A multi-wavelength image analysis was carried out showing the dust parameters in A1. The inferred dust parameters indicate high temperatures (\(>\)50 K), high surface densities (\(>\)10 g cm\(^{-2}\)), and large dust particles (\(>\)30 \(\mu\)m) at all radii (\(<\) 60 au) in the IRAS4A1 disk. In addition, the analysis showed high optical depth in the inner disk in Band 6 and Band 4.
* Radiative transfer models using RADMC-3D have shown that a minimum scale height of H/R \(>\) 0.3 is adequate to render the substructures invisible and produce an asymmetry in the disks. Moreover, the models that incorporated a narrow gap around 34-50 au and an increased flaring index provided better matches to the observed intensity profiles, suggesting the presence of potential hidden substructures within a very flared disk in the IRAS4A1 system, even in these early stages of disk formation.

Observations with high resolution and sensitivity at cm wavelengths with the ngVLA can help unveil any substructure that might exist in IRAS4A1.

###### Acknowledgements. We thank the referee for the very constructive comments. We also thank Dominique M. Segura-Cox for the useful discussion.
We acknowledge assistance from Allegro, the European ALMA Regional Centre node in the Netherlands. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2018.10.0510.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. EGC acknowledges support from the National Science Foundation through the NSF MPS-Ascend Fellowship Grant number 2213275. L.W.L. acknowledges support from NSF AST-19101364 and NSF AST-2108794.

Figure 6: Left: RADMC-3D model 8 convolved with a 0.078″ beam.
2303.17208
Production of cumulative pions and percolation of strings
Production of pions in high-energy collisions with nuclei in the kinematics prohibited for free nucleons ("cumulative pions") is studied in the fusing color string model. The model describes the so-called direct mechanism for cumulative production. The other, spectator mechanism dominates in production of cumulative protons but is suppressed for pions. In the model cumulative pions are generated by string fusion, which raises the maximal energy of produced partons above the level of the free nucleon kinematics. Momentum and multiplicity sum rules are used to determine the spectra in the deep fragmentation region. Predicted spectra of cumulative pions fall exponentially with the scaling variable $x$ in the interval $1<x<3$ with a slope of the order 5$\div$5.6, which agrees well with the raw data obtained in the recent experiment at RHIC with Cu-Au collisions. However the agreement is worse for the so-called unfolded data, which presumably take into account corrections due to the experimental set-up and have rather a power-like form.
M. A. Braun
2023-03-30T08:04:39Z
http://arxiv.org/abs/2303.17208v1
# Production of cumulative pions and percolation of strings ###### Abstract Production of pions in high-energy collisions with nuclei in the kinematics prohibited for free nucleons ("cumulative pions") is studied in the fusing color string model.The model describes the so-called direct mechanism for cumulative production. The other, spectator mechanism dominates in production of cumulative protons but is suppressed for pions. In the model cumulative pions are generated by string fusion which raises the maximal energy of produced partons above the level of the free nucleon kinematics. Momentum and multiplicity sum rules are used to determine the spectra in the deep fragmentation region. Predicted spectra of cumulative pions exponentially fall with the scaling variable \(x\) in the interval \(1<x<3\) with a slope of the order 5\(\div\)5.6, which agrees well with the raw data obtained in the recent experiment at RHIC with Cu-Au collisioins. However the agreement is worse for the so-called unfolded data, presumably taking into account corrections due to the experimental set-up and having rather a power-like form. ## 1 Introduction Production of particles in nuclear collisions in the kinematical region prohibited in the free nucleon kinematics ("cumulative particles") has long aroused interest both from the theoretical and pragmatic point of views. On the pragmatic side, this phenomenon, in principle, allows to raise the effective collision energy far beyond the nominal accelerator one. This may turn out to be very important in the near future, when all possibilities to construct still more powerful accelerator facilities become exhausted. Of course one should have in mind that the production rate falls very rapidly above the cumulative threshold, so that to use the cumulative effect for practical purposes high enough luminosity is necessary. On the theoretical side, the cumulative effect explores the hadronic matter at high densities, when two or more nucleons overlap in the nucleus. Such dense clusters may be thought to be in a state which closely resembles a cold quark-gluon plasma. Thus cumulative phenomena could serve as an alternative way to produce this new state of matter. There has never been a shortage of models to describe the cumulative phenomena, from the multiple nucleon scattering mechanism to repeated hard interquark interactions [1, 2, 3, 4]. However it should be acknowledged from the start that the cumulative particle production is at least in part a soft phenomenon. So it is natural to study it within the models which explain successfully soft hadronic and nuclear interactions in the non-cumulative region. Then one could have a universal description of particle production in all kinematical regions. The non-cumulative particle production is well described by the color string models, in which it is assumed that during the collisions color strings are stretched between the partons of colliding hadrons (or nuclei), which then decay into more strings and finally into observed produced hadrons [5]. As was argued long ago (see e.g. [6] and references therein) that apart from the slow Fermi motion of nuclear components, absolutely inadequate to explain the observed cumulative phenomena, there are basically three mechanisms of the cumulative particle production: direct, spectator and rescattering. In the direct mechanism cumulative particles are generated in the process of collision. 
In the spectator mechanism (also known as multinucleon correlations inside the nucleus) cumulative particles exist in the nucleus by themselves, independently of collisions, the role of the latter being just to liberate them. Finally rescattering may move the initially produced non-cumulative particle into the cumulative region. It was found that the role of these three mechanisms is different for different energies and particles. In particular rescattering can play its role at small energies and degrees of cumulativity but quickly dies out with the growth of both. The spectator mechanism strongly dominates in the production of cumulative protons. Cumulative pions, on the contrary, are mostly produced by the direct mechanism. In the color string picture, for the spectator mechanism to operate, the strings should be formed within the nucleus between its partons moving at large relative momenta. This is a very different picture as compared with the standard color string approach, in which there are no such partons inside the nucleus and strings are stretched between partons of the projectile and target. The common color string picture corresponds to the direct mechanism. So, restricting ourselves to this picture, we hope to describe production of cumulative pions but not of protons, which are produced mostly by the spectator mechanism. A working model of string fusion and percolation was proposed by the authors some time ago [7, 8]. It proved to be rather successful in explaining a series of phenomena related to collective effects among the produced strings, such as damping of the total multiplicity and strange baryon enhancement. One expects that fusion of strings, which enhances the momenta of the produced particles, may also describe production of cumulative particles with momenta far greater than without fusion. Old preliminary calculations of the production rates in the cumulative region at comparatively low energies gave encouraging results [9, 10, 11]. They agree quite well with the existing data for production of cumulative pions in hA collisions at \(E_{cm}=27.5\) GeV [12, 13], but not of cumulative protons, for which the cross-section turned out to be far below experiment. However, to pass to higher energies and heavy-ion collisions one has to considerably update this old treatment. We stress that the string picture was initially introduced to describe particle production in the central region, where the production rate is practically independent of rapidity but grows with energy. As mentioned, its results agree with the data very well [5]. On the contrary, cumulative particles are produced in the fragmentation region, near the kinematical threshold, where the production rates do depend on rapidity and go to zero at the threshold. So from the start it is not at all obvious how the color string approach may give reasonable results also in the deep fragmentation region. Accordingly an important part of our study is to describe the rate of pion production from the initial and fused strings valid in the fragmentation region. To this aim we shall use color and momentum conservation imposed on the average, and the sum rules which follow. As we shall see from our results, we obtain a very reasonable description of the pion production rates for \(1<x<2\) at 27.5 GeV [5]. However we are not able to describe the proton rates, which remain experimentally two orders of magnitude greater than our predictions. As explained, the bulk of cumulative protons comes from the spectator mechanism, which lies outside our color string dynamics.
Note that the spectator mechanism was included in the Monte-Carlo code of [10], where it gave results for nucleon production in agreement with the experimental data. The bulk of our paper is devoted to production of cumulative pions in AA collisions at the RHIC and LHC facilities, related to the performed and planned experimental efforts in this direction. It is to be noted that in the older calculational models HIJING [16] and DPMJET [17], devoted to the overall spectra in heavy-ion collisions, particles emitted with energies up to 2\(\div\)2.5 times greater than allowed by proton-proton kinematics were found. A recent experimental study devoted specifically to cumulative jet production was performed for Cu-Au collisions at 200 GeV [18]. Comparison of our predictions with these data will be postponed until the discussion at the end of our paper. With certain reservations the data confirm the universality of particle production in the fragmentation region and in particular in the cumulative region.

## 2 The model

The color string model assumes that each of the colliding hadrons consists of partons (valence and sea quarks), distributed both in rapidity and transverse space with a certain probability, deduced from the experimentally known transverse structure and certain theoretical information as to the behavior of the \(x\) distributions at their ends. These distributions are taken to be the ones for the endpoints of the generated strings. As a result, the strings acquire a certain length in rapidity. We shall choose the c.m. system for the colliding nucleons, with the nucleus (projectile) consisting of \(A\) nucleons and moving in the forward direction. Each of the projectile nucleons is taken to carry momentum \(p_{1}\), so that the total momentum of the projectile nucleus is \(Ap_{1}\). The target is assumed to be just the nucleon with momentum \(p_{2}\). The cumulative particles will be observed in the forward hemisphere in the \(z\) direction of the fast moving nucleus. Their longitudinal "+" momenta will be \(x_{+}p_{1+}\) with \(x_{+}>1\). In the following \(x_{+}\) will be called the cumulativity index, or simply cumulativity. Theoretically the maximal value for \(x_{+}\) is \(A\), but in practice we find \(x_{+}\leq 5\). The nucleons for both projectile and target are split into partons as shown in Figs. 1 and 2 for the projectile, where the partons (quarks and diquarks) are illustrated by dashed lines. Color strings are stretched between partons of the projectile and target as in Fig. 1, and some of these simple strings can be fused into strings with more color. In Fig. 2 it is shown that the initial 4 simple strings combine into fused strings attached to quark-antiquark pairs within the same nucleons (left) or different nucleons (right) in the projectile nucleus. Let a parton from the projectile carry a part \(x_{1+}\) of the "+" component of nucleon momentum \(p_{1}\) and a partner parton from the target carry a part \(x_{2-}\) of the "-" component of nucleon momentum \(p_{2}\). The total energy squared for the colliding pair of nucleons is \[S=2p_{1+}p_{2-}=m^{2}e^{Y} \tag{1}\] where \(m\) is the nucleon mass and \(Y\) is the total rapidity. (We assume that the energy is high, so that \(S>>m^{2}\).) The c.m. energy squared accumulated in the string is then \[s=x_{1+}x_{2-}S. \tag{2}\] Note that the concept of a string makes sense only when \(s\) is not too small, say more than \(m^{2}\). So both \(x_{1+}\) and \(x_{2-}\) cannot be too small.
\[x_{1+},x_{2-}>x_{min}=m/\sqrt{S}=e^{-Y/2}, \tag{3}\] We relate the scaling variables for the string endpoints to their rapidities by \[y_{1}=Y/2+\ln x_{1+},\ \ y_{2}=-Y/2-\ln x_{2-}. \tag{4}\] Figure 2: \(pA\) collision for \(A=2\) with creation of 4 color strings which fuse within individual nucleons (left panel) or between different nucleons (right panel). Nucleons of the projectile are shown by solid lines, partons in which they split (quarks and diquarks) by dashed lines. Cumulative particles are shown by thick solid lines. Figure 1: \(pA\) collision for \(A=2\) with creation of 4 color strings. Nucleons of the projectile are shown by solid lines, partons in which they split (quarks and diquarks) by dashed lines Due to (4) \(y_{1}\geq 0\) and \(y_{2}\leq 0\). The "length" of the string is just the difference \(y_{1}-y_{2}\). Due to partonic distribution in \(x\) the strings have different lengths and moreover can take different position in rapidity respective to the center \(y=0\). The sea distribution in a hadron is much softer than the valence one. In fact the sea distribution behaves as \(1/x\) near \(x=0\), so that the average value of \(x\) for sea partons is small, of the order \(x_{min}\)[14]. As a result, strings attached to sea partons in the projectile nucleus carry very small parts of longitudinal momentum in the forward direction, which moreover fall with energy, so that they seem to be useless for building up the cumulative particles. This allows us to retain only strings attached to valence partons, quarks and diquarks, in the projectile and neglect all strings attached to sea quarks altogether. This is reflected in Fig. 2 where we have shown only the valence partons in the projectile. Note that the number of the former is exactly equal to \(2A\) and does not change with energy. So for a given nucleus we shall have a fixed number of strings, independent of the energy. The upper end rapidities of the strings attached to diquarks are usually thought to be larger than of those attached to the quarks, since the average value of \(x\) for the diquark is substantially larger that for the quark. Theoretical considerations lead to the conclusion that as \(x\to 1\) the distributions for the quark and diquark in the nucleon behave as \((1-x)^{3/2}\) and \((1-x)^{-1/2}\) respectively, modulo logarithms [14]. Neglecting the logarithms and taking also in account the behavior at \(x\to 0\) we assume that these distributions for the quark\(q\) and diquark\(qq\) are \[q(x)=\frac{8}{3\pi}x^{-1/2}(1-x)^{3/2} \tag{5}\] and \[qq(x)=q(1-x)=\frac{8}{3\pi}x^{3/2}(1-x)^{-1/2} \tag{6}\] The quark and diquark strings will be attached to all sorts of partons in the target nucleon: valence quark and diquark and sea quarks. Their position in rapidity in the backward hemisphere will be very different. However we are not interested in the spectrum in the backward hemisphere. So, for our purpose, limiting ourselves with the forward hemisphere, we may take lower ends of the strings all equal to \(x_{min}<<1\). As a result, in our model at the start we have \(2A\) initially created strings, half of them attached to quarks and half to diquarks, their lower ends in rapidity all equal to \[y_{2}=Y/2+\ln x_{min}\] and their upper ends distributed in accordance with (5) and (6). As soon as they overlap in the transverse space they fuse into new strings with more color and more energy. This process will be studied in the next section. 
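As a simple illustration of the valence distributions (5) and (6), the Python sketch below draws the upper ends of the \(2A\) valence strings of the projectile. Up to normalization, Eq. (5) is the Beta(1/2, 5/2) density, and since \(qq(x)=q(1-x)\) the diquark fraction can be taken as \(x_{qq}=1-x_{q}\), which enforces momentum sharing within each nucleon; the nucleon number used below is an illustrative choice, not a parameter of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def valence_string_ends(A):
    """Upper ends (x values) of the 2A valence strings of a projectile nucleus.

    The quark fraction x_q follows q(x) = 8/(3*pi) x^(-1/2) (1-x)^(3/2),
    i.e. a Beta(1/2, 5/2) law (Eq. 5); the diquark takes the rest of the
    nucleon momentum, x_qq = 1 - x_q, reproducing qq(x) = q(1-x) (Eq. 6).
    """
    x_q = rng.beta(0.5, 2.5, size=A)
    x_qq = 1.0 - x_q
    return x_q, x_qq

x_q, x_qq = valence_string_ends(A=181)   # A = 181 (Ta) chosen only for illustration
print("mean x_q  ~ 1/6 :", x_q.mean())
print("mean x_qq ~ 5/6 :", x_qq.mean())
```

These sampled endpoints are exactly the quantities whose sums build up the fused-string ends discussed in the next section.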
## 3 Fragmentation spectra and fusion of strings ### One sort of strings and particles The following discussion closely follows that of [11]. To start we shall study a simplified situation with only one sort of strings. We shall consider both the original string stretched between the partons of the projectile and target and the fused strings of higher color which are generated when \(n\) original strings occupy the same area in the transverse plane. Consider first the original simple string. Let it have its ends at \(x_{1+}\equiv x_{1}\) and \(x_{min}\). For cumulative particles we shall be interested only in the forward hemisphere and only in "+" components of momenta so that in the following we omit subindex "+". We shall be interested in the spectrum of particles emitted from this string with the longitudinal momentum \(xp_{1}\). Evidently \(x\) varies in the interval \[x_{min}<x<x_{1},\] or introducing \(z=x/x_{1}\) in the interval \[z_{min}<z<1,\ \ z_{min}=\frac{x_{min}}{x_{1}}.\] The multiplicity density of produced particles (pions) will be \[\tau_{1}(z)=\frac{d\mu}{dy}\] and the total multiplicity of particles emitted in one of the two hemispheres is \[\int_{z_{min}}^{1}dz\tau_{1}(z)=\frac{1}{2}\mu_{0},\] where \(\mu_{0}\) is the total multiplicity in both hemispheres. The emitted particles will have their "+" momenta \(k_{+}\) in the interval \[x_{min}p_{1+}<k_{+}<zx_{1}p_{1+}\] and since \(z,x_{1}\leq 1\) \[x_{min}p_{1+}<k_{+}<p_{1+}.\] So the particles emitted from the simple string cannot carry their "+" momenta greater than a single incoming nucleon. They are non-cumulative. Now let several simple strings coexist without fusion. Each of these strings will produce particles in the interval dictated by its ends. If the \(i\)-th string has its upper end \(x_{1}^{(i)}\) then the total multiplicity density of \(n\) not fused strings will be \[\tau^{(n)}(x)=\sum_{i=1}^{n}\tau_{1}^{(i)}(x),\] where \(\tau_{1}^{(i)}\) is the multiplicity density of the \(i\)-th string, different from zero in the interval \[x_{min}<x<x_{1}^{(i)}\leq 1.\] As a result all produced particles will have their "+" momenta lying in the same interval \(<p_{+}\) as for a single string, so that they all will be non-cumulative. We conclude that there will not appear any cumulative particles without string fusion. Only fusion of strings produces cumulative particles in our picture. Now consider that \(n\) simple strings fuse into a fused string. The process of fusion obeys two conservation laws: those of color and momentum. As a result of the conservation of color, the color of the fused string is \(\sqrt{n}\) higher than that of the ordinary string [7,8]. From the 4 momentum conservation laws we shall be interested mostly in the conservation of the "+" component, which leads to the conservation of \(x\). The fused string will have its upper endpoint \[x_{n}=\sum_{i=1}^{n}x^{(i)},\] where \(x^{(i)}\) are upper ends of fusing strings. This endpoint can be much higher than individual \(x^{(i)}\) of the fusing strings. In the limiting case when each fusing string has \(x^{(i)}=1\) we find \(x_{n}=n\). Consequently the particles emitted from the fused string will have their maximal "+" momentum \(np_{+}\) and be cumulative with the degree \(n\) of cumulativity. At this point we have to stress that there are some notable exceptions. 
The maximal value \(n\) for \(x_{n}\) can be achieved only when different strings which fuse are truly independent, which is so if the strings belong to different nucleons in the projectile. To see it imagine that two string fuse which belong to the same nucleon (one starting from the quark and the other from antiquark). In this case \(x^{(1)}+x^{(2)}=1\) and \(x_{2}\) will have the same value as \(x_{1}\). So fusing of strings inside the nucleon does not give any cumulative particles. Such particles are only generated by fusing of strings belonging to different nucleons in the projectile, compare let and right panels in Fig 2. The multiplicity density of particles emitted from the fused string will be denoted \[\tau_{n}=\frac{d\mu_{n}}{dy}\] where \(\mu\) is the generated multiplicity. It is different from zero in the interval \[x_{min}^{(n)}\leq x\leq x_{n},\ \ x_{min}^{(n)}=nx_{\rm min}, \tag{7}\] or again introducing \(z=x/x_{n}\) in the interval \[z_{min}\leq z<1,\ \ z_{min}=\frac{x_{min}^{(n)}}{x_{n}}\] We are interested in emission at high values of \(x\), or \(z\) close to unity, that is in the fragmentation region for the projectile. Standardly it is assumed that the multiplicity density is practically independent of \(x\) in the central region, that is at small \(x\). However \(\tau_{n}\) cannot be constant in the whole interval (7) but has to approach zero at its end in the fragmentation regions. At such values of \(z\)\(\tau_{n}\) is expected to strongly depend on \(z\), Our task is to formulate the \(z\) dependence of \(\tau_{n}\) in this kinematical region. To this aim we set up certain sum rules which follow from the mentioned conservation laws and restrict possible forms of the spectrum of produced hadrons. The total number of particles produced in the forward hemisphere by the fused string should be \(\sqrt{n}\) greater than by the ordinary string. This leads to the multiplicity sum rule: \[\int_{nx_{min}}^{x_{n}}\frac{dx}{x}\tau_{n}(x)=\frac{1}{2}\mu_{0}\sqrt{n} \tag{8}\] where as before \(\mu_{0}\) the total multiplicity from a simple string in both hemispheres. The produced particles have to carry all the longitudinal momentum in the forward direction. This results in the sum rule for \(x\): \[\int_{nx_{min}}^{x_{n}}dx\tau_{n}(x)=x_{n} \tag{9}\] In these sum rules \(x_{min}\) is given by (3) and is small. Passing to the scaled variable \[z=x/x_{n}\] we rewrite the two sum rules as \[\int_{z_{n}}^{1}\frac{dz}{z}\tau_{n}(z)=\frac{1}{2}\mu_{0}\sqrt{n} \tag{10}\] \[\int_{z_{n}}^{1}dz\tau_{n}(z)=1 \tag{11}\] where \[z_{n}=nx_{min}/x_{n} \tag{12}\] These sum rules put severe restrictions on the form of the distribution \(\tau_{n}\), which obviously cannot be independent of \(n\). Comparing (7) and (8) we see that the spectrum of the fused string has to vanish at its upper threshold faster than for the simple string. In the scaled variable \(z\) it is shifted to smaller values (and thus to the central region). This must have a negative effect on the formation of cumulative particles produced at the extreme values of \(x\). To proceed, we choose the simplest form for the distribution \(\tau_{n}\): \[\tau_{n}(z)=a_{n}(1-z)^{\alpha_{n}-1},\ \ \alpha>1 \tag{13}\] with only two parameters magnitude \(a_{n}\) and slope \(\alpha_{n}\). The \(x\) sum rule relates \(a_{n}\) and \(\alpha_{n}\): \[a_{n}=\alpha_{n}(1-z_{n})^{-\alpha_{n}}. 
\tag{14}\] The multiplicity sum rule finally determines \(\alpha_{n}\) via \(\mu_{0}\): \[\alpha_{n}(1-z_{n})^{-\alpha_{n}}\int_{z_{n}}^{1}\frac{dz}{z}(1-z)^{\alpha_{n }-1}=\frac{1}{2}\mu_{0}\sqrt{n} \tag{15}\] This equation can be easily solved when \(z_{n}\to 0\) We present the integral in (14) as \[\int_{z_{n}}^{1}\frac{dz}{z}[(1-z)^{\alpha_{n}-1}-1]+\ln\frac{1}{z_{n}}. \tag{16}\] The integral term is finite at \(z_{n}=0\) so that we can write it as a difference of integrals in the intervals \([0,1]\) and \([0,z_{n}]\). The first can be found exactly \[I_{1}=\int_{0}^{1}\frac{dz}{z}[(1-z)^{\alpha_{n}-1}-1]=\lim_{\epsilon\to 0 }\int_{0}^{1}dzz^{-1+\epsilon}[(1-z)^{\alpha_{n}-1}-1]=\] \[\lim_{\epsilon\to 0}\Big{[}{\rm B}(\alpha_{n},\epsilon)-\frac{1}{\epsilon} \Big{]}=\psi(1)-\psi(\alpha_{n}). \tag{17}\] The second term has an order \(-(\alpha_{n}-1)z_{n}\) and is small unless \(\alpha_{n}\) grows faster than \(n\), which is not the case as we shall presently see. In fact we shall find that \(\alpha_{n}\) grows roughly as \(\sqrt{n}\), which allows to neglect the second factor in (14) and rewrite it in its final form \[\alpha_{n}\Big{[}\ln\frac{1}{z_{n}}+\psi(1)-\psi(\alpha_{n})\Big{]}=\frac{1} {2}\mu_{0}\sqrt{n}. \tag{18}\] Note that the total multiplicity \(\mu_{0}\) from a simple string is just \(Y\). Also \(1/z_{n}=nx_{min}/x_{n}\) and so \[\ln\frac{1}{z_{n}}=\frac{Y}{2}+\ln\frac{x_{n}}{n}\] So Eq. (18) can be rewritten as \[\alpha_{n}=\sqrt{n}\Big{(}1+\frac{2}{Y}(\ln\frac{x_{n}}{n}+\psi(1)-\psi( \alpha_{n}))\Big{)}^{-1} \tag{19}\] This transcendental equation determines \(\alpha_{n}(x_{n})\) for the fused string. Obviously at \(Y>>1\) the solution does not depend on \(x_{n}\) and is just \(\alpha_{n}=\sqrt{n}\). To finally fix the distributions at finite \(Y\) we have to choose the value of \(\alpha\) for the simple string. We take the simplest choice \(\alpha_{1}=1\) for an average string with \(x=x_{0}=1/2\), which corresponds to a completely flat spectrum and agrees with the results of [14]. This fixes the multiplicity density for the average string \[\tau_{1}(y)=1 \tag{20}\] which favorably compares to the value 1.1 extracted from the experimental data [8]. After that the equation for \(\alpha\) takes the form \[\alpha_{n}\Big{(}\ln x_{n}+\psi(1)-\psi(\alpha_{n})\Big{)}=\sqrt{n} \tag{21}\] At finite \(Y\) it has to be solved numerically to give \(\alpha_{n}(x_{n},Y)\) where \(x_{n}\) is the upper end of the string \(n\) We find that with the growing \(n\) the spectrum of produced particles goes to zero at \(z\to 1\) more and more rapidly. So although strings with large \(n\) produce particles with large values of \(x\leq x_{n}\), the production rate is increasingly small. ### Different strings and particles In reality strings are of two different types, attached to quarks or antiquarks. Also various types of hadrons are produced in general. In the cumulative region the mostly studied particles are nucleons and pions, the production rates of the rest being much smaller. As mentioned in the Introduction the dominant mechanism for emission of cumulative nucleons is the spectator one, which lies outside the color string picture. So we restrict ourselves to cumulative pions. The multiplicity densities for each sort of fused string will obviously depend on its flavor contents, that is, of the number of quark and diquark strings in it. Let the string be composed of \(n-k\) quarks and \(k\) diquarks, \(k=0,1,...n\) We shall then have distributions \(\tau_{nk}\) for the produced pions. 
The multiplicity and momentum sum rules alone are now insufficient to determine each of the distribution \(\tau_{nk}\) separately. To overcome this difficulty we note that in our picture the observed pion is produced when the parton (quark or diquark) emerging from string decay neutralizes its color by picking up an appropriate parton from the vacuum. In this way a quark may go into a pion if it picks up an antiquark or into a nucleon if it picks up two quarks. The quark counting rules tell us that the behavior at the threshold in the second case will have two extra powers of \((x-x_{n})\). Likewise a diquark may go either into a nucleon picking up a third quark or into two pions picking up two antiquarks, with a probability smaller by a factor \((x-x_{n})^{2}\) at the threshold. On the other hand at the threshold the probability to find a quark in the proton is \((1-x)^{2}\) smaller than that of the diquark, Eqs. (5) and (6). The two effects, that of color neutralization and threshold damping in the nucleus, seem to compensate each other. So that in the end the pion production rate from the antiquark string is just twice the rate from the quark string provided the distribution of the former in the nucleus is the same as for the quark strings. This enables us to take the same distributions (5) for quark and antiquark strings in the nucleus and for the fragmentation function \(\tau_{nk}\) use \[\tau_{nk}=\tau_{n}\Big{(}1+\frac{k}{n}\Big{)} \tag{22}\] where \(\tau_{n}\) is the distribution (13) determined in the previous subsection. Eq. (22) takes into account doubling of the pion production from antiquark strings. For the simple string it correctly gives \[\tau_{10}=\tau_{1},\ \ \tau_{11}=2\tau_{1}\] Averaging (22) over all \(n\)-fold fused strings one has the average \(<k>=n/2\), so that \(\tau_{nk}\) can be well approximated by \[\tau_{nk}=\frac{3}{2}\tau_{n} \tag{23}\] Note that should we want to consider cumulative protons then quark strings would give practically no contribution being damped both at the moments of their formation and neutralization of color. The antiquarks in contrast will dominate at both steps and give practically the total contribution. So then one has to consider only antiquark strings and only one multiplicity distribution, that of nucleons \(\tau_{n}^{(N)}\) for which our sum rules will be valid with the only change \(\mu_{0}\rightarrow\mu_{o}(N)\), the total multiplicity of nucleons. However one then have to use distribution (6) for antiquark strings in the nucleus which grows in the fragmentation region. ## 4 Nucleus-nucleus scattering In the preceding sections we studied \(pA\) scattering in the system where the nucleus is moving fast in the positive direction \(z\). Correspondingly we were interested in the forward hemisphere in the deep fragmentation region with attention to particles emitted with longitudinal momenta higher than that of the projectile nucleons. The role of the target proton was purely spectatorial, since we were not interested in particles moving in the opposite direction to the projectile nucleus. The only information necessary about the target was that all strings attached to the projectile nucleus could be attached to the target. This was related to existence of sea partons in the target apart from the dominant valence ones. If one substitutes the proton target by the nucleus (say of the same atomic number \(A\)) nothing will change in the projectile nucleus hemisphere, so that all our previous formulas remain valid. 
The only difference will be that strings attached to the nucleons in the projectile nucleus can now be coupled to valence partons in the target nucleus, provided both nuclei overlap in the transverse area. So the number of all strings will depend on geometry, more concretely on the impact parameter \(b\). As for pA collisions, formation of cumulative strings with \(x>1\) will require that the fusing strings belong to different nucleons in the projectile. So the picture of cumulative production will not change, except that it will be different for different \(b\). The final cumulative multiplicity will be obtained as usual by integration over all values of the impact parameter \(b\). Thus, as far as the cumulative particles are concerned, the difference between \(pA\) and \(AA\) collisions reduces to the geometry in the transverse plane and the ensuing change of string configurations.

## 5 Probability of cumulative strings

### Geometric probability of string fusion

As stressed, the cumulative production in our scenario is totally explained by formation of fused strings, which follows when \(n\geq 2\) strings overlap in the transverse space. The exact nature of this overlapping may be different, total or partial. In the transverse space such fused strings may have different forms and dimensions, thus presenting complicated geometrical structures. The detailed analysis of their geometry and dynamical properties presents an exceptionally complicated and hardly realizable task even when the number of strings is quite small, to say nothing of the realistic case when this number is counted by hundreds or even thousands. However the study of cases with a small number of strings shows that equivalent results can be well reproduced within a simplified picture [15]. Cover the transverse area of interaction by a lattice with cells having the areas of the simple strings (circles of radius \(\sim 0.3\) fm). Strings stretched between the projectile and target then appear in one or different cells. Once some cell contains \(n\) strings, they are assumed to fuse and give rise to an \(n\)-fold fused string occupying this cell. In this approach formation of fused strings proceeds in several steps. Consider pA collisions. At the first step one sets up the mentioned lattice to cover the whole nucleus area. The cells form a list \(z_{c}(m)\), \(m=1,2,\ldots\), of cell positions \(z_{c}=(x,y)=x+iy\) in the transverse plane, with the center of the nucleus at \(z_{A}=(0,0)\). The second step is to randomly throw \(A\) nucleons at points \(z_{N}\) with the probability given by the transverse density \(T(b)\). They are thrown successively, and with each new nucleon one passes to the third step, which is to randomly throw 2 strings around the newly thrown nucleon, at distances from its center dictated by the appropriate matter density within the nucleon (Gaussian). Each of the two strings then arrives in some cell \(m\), which enhances its string content \(\nu(m)\), \(\nu=0,1,2,\ldots\), by unity. At this point one has to take into account that two fusing strings, quark and antiquark, attached to the same nucleon in the projectile nucleus do not generate a cumulative string with their upper end \(x_{n}>1\) (see Section 3.1). They are to be excluded from the total set of fused strings, leaving only those generated when two strings from different nucleons of the projectile nucleus are attached to the target.
To do this we note that the two strings from the same nucleon may be put in different cells \(m_{1}\) and \(m_{2}\), or in the same cell when \(m_{1}=m_{2}\). In the former case both \(\nu(m_{1})\) and \(\nu(m_{2})\) are each enhanced by unity. In the latter case \(\nu(m_{1})=\nu(m_{2})\) does not change. As a result, a fused string contributes to cumulative production only when the strings forming it belong to different nucleons of the projectile nucleus, which therefore have to overlap in the transverse area. This introduces a factor of smallness, roughly the ratio of the transverse areas of the nucleon and the nucleus, for each successive fusion of \(n=2,3,\ldots\) strings, and is responsible for the fast decrease of the cumulative cross-section with the growth of the cumulativity number \(x\). At the fourth step one searches all cells with 2 or more strings. One finds \(N_{c}(2)\) cells with 2 strings, that is 2-fold fused strings, \(N_{c}(3)\) cells with 3 strings, that is 3-fold fused strings, and so on. Different cells mark the overlap of several nucleons at different locations in the transverse plane and correspond physically to different trajectories of the target proton at each collision. So to find the total cross-section one has to take the sum of contributions of all cells with a given number \(n\) of strings, calculated with the relevant dynamical probability \(p_{n}\) and particle distribution \(\tau_{n}(z)\), Eq. (13). This corresponds to the cross-section at all impact parameters \(b\) of the target proton as it crosses the nucleus. One has to recall that a cumulative string with \(x_{n}>1\) can only be formed when it starts from valence quarks in the projectile nucleus. So only two strings can be attached to each nucleon and the total number of simple strings is fixed to \(2A\). On the contrary, for non-cumulative production sea quarks in the projectile also contribute, with the number of strings from each nucleon and the resulting multiplicity steadily increasing with energy. From this one immediately concludes that cumulative production depends only very weakly on energy, all dependence coming from the powers \(\alpha_{n}(x,Y)\). For AA scattering the procedure does not change, with the only difference that the projectile nucleus area is substituted by the overlap of the two nuclei, which depends on the impact parameter \(b\) of the collision. So one will get a different cumulative multiplicity for different \(b\). The total multiplicity will be obtained after integration over all \(b\).

### Probability of cumulativity of the fused string

Strings are distributed in the nucleons with probabilities (5) and (6) for quarks and diquarks. As argued above, we shall assume that they are all distributed with the quark distribution (5). To eliminate the steep growth at \(x=0\) we pass to the variable \(u=\sqrt{x}\). In terms of \(u\) the distribution takes the simple form \[\rho(u)=(1-u^{2})^{3/2}=(1-x)^{3/2},\ \ 0<u<1. \tag{24}\] The probability to find a string with the upper end at \(x\), consisting of \(n\) simple ones with ends \(x_{1},...x_{n}\), is given by the multiple integral \[p_{n}(x)=\int_{0}^{1}\prod_{i=1}^{n}\Big{(}du_{i}\rho(x_{i})\Big{)}\delta\Big{(}x-\sum_{i=1}^{n}x_{i}\Big{)}=\int_{0}^{1}\prod_{i=1}^{n-1}\Big{(}du_{i}\rho(x_{i})\Big{)}\rho\Big{(}x-\sum_{i=1}^{n-1}x_{i}\Big{)}. \tag{25}\] We have to determine the limits of successive integrations starting from the \((n-1)\)-th. From the start it is obvious that \(p_{n}(x)\) can be different from zero only in the interval \(0<x<n\).
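As a rough numerical cross-check of Eq. (25), the shape of \(p_{n}(x)\) (up to the overall normalization implied by the unnormalized \(\rho\) of Eq. (24)) can be estimated by histogramming sums of \(n\) independent momentum fractions drawn from the normalized quark distribution (5). The Python sketch below does this by Monte Carlo, with sample sizes and binning chosen purely for illustration, while the analytic integration limits are worked out next.

```python
import numpy as np

rng = np.random.default_rng(1)

def pn_shape(n, nsamples=200_000, bins=60):
    """Monte-Carlo shape of p_n(x), Eq. (25), up to normalization.

    Each simple-string end x_i is drawn from the normalized quark
    distribution q(x) of Eq. (5), i.e. Beta(1/2, 5/2); the fused-string
    end is x = sum_i x_i, so it lies in 0 < x < n.
    """
    x = rng.beta(0.5, 2.5, size=(nsamples, n)).sum(axis=1)
    hist, edges = np.histogram(x, bins=bins, range=(0.0, float(n)), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist

for n in (2, 3, 4, 5):
    xs, ps = pn_shape(n)
    cumulative = xs > 1.0
    frac = np.trapz(ps[cumulative], xs[cumulative])
    print(f"n = {n}: fraction of the distribution with x > 1 ~ {frac:.2e}")
```

The histogram is normalized to unit area, so only the shape is meaningful here; the physical normalization of Eq. (25) would require keeping the unnormalized \(\rho\) of Eq. (24).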
For two strings we have \[p_{2}(x)=\int_{0}^{1}du_{1}\rho(x_{1})\rho(x-x_{1}),\ \ u_{1}=\sqrt{x_{1}}. \tag{26}\] Evidently we should have \[0<x_{2}=x-x_{1}<1,\ \ \mbox{or}\ \ 0<x-x_{1},\ \ x-x_{1}<1.\] These two conditions determine the lower and upper limits \(a_{1}\) and \(b_{1}\) of integration over \(u_{1}\): \[x_{1}>a_{1}(x)=\max(x-1,0),\ \ x_{1}<b_{1}(x)=\min(x,1), \tag{27}\] or correspondingly the limits in \(u_{1}\) \[\sqrt{a_{1}(x)}<u_{1}<\sqrt{b_{1}(x)}. \tag{28}\] The probability \(p_{2}(x)\) is different from zero in the region of \(x\) such that \(a_{1}(x)<b_{1}(x)\). If \(0<x<1\) then \(a_{1}=0\) and \(b_{1}=x\), so \(a_{1}<b_{1}\); if \(1<x<2\) then \(a_{1}=x-1\) and \(b_{1}=1\), so \(a_{1}<b_{1}\) provided \(x<2\). If \(x>2\) then \(a_{1}=x-1\) and \(b_{1}=1\), so \(a_{1}>b_{1}\) and \(p_{2}(x)=0\), as noted previously. As a result a nonzero \(p_{2}(x)\) is obtained at \(0<x<2\), but with different integration limits. In the case of interest, \(x>1\), the limits are \(a_{1}=x-1\) and \(b_{1}=1\). Now consider \(p_{n}(x)\) for \(n>2\). From (25) one finds the recurrent relation \[p_{n}(x)=\int_{u_{min}}^{u_{max}}du_{1}\rho(x_{1})p_{n-1}(x-x_{1}),\ \ u_{1}=\sqrt{x_{1}}. \tag{29}\] The limits \[u_{min}=\sqrt{a},\ \ u_{max}=\sqrt{b}\] are determined by the condition \(p_{n-1}(x-x_{1})\neq 0\), which limits \(x-x_{1}\) to the region \[0<x-x_{1}<n-1.\] From this we find \[x_{1}<b=\min(x,1),\ \ x_{1}>a=\max(x-n+1,0).\] For \(x>1\) it follows that \(b=1\), independent of \(x\). As to \(a\), for \(x<n-1\) we get \(a=0\), while in the interval \(n-1<x<n\) we obtain \(a=x-n+1\). Formula (29) can be used for the calculation of \(p_{n}(x)\) starting from \(p_{2}(x)\), explicitly given by the integral (26).

## 6 Calculations

For both proton-nucleus and nucleus-nucleus collisions one has to know the probabilities of fused string formation \(p_{n}(x)\) and the observed particle distributions \(\tau_{n}(x)\). The former are determined by Eq. (26) and the recurrent relation (29). The latter are fully expressed by the powers \(\alpha_{n}(x,Y)\), which in turn are determined by Eq. (21). In fact fused strings formed from more than 5 simple strings are not found in our calculations, both for p-A and A-A collisions. For \(n=1,2,\ldots,5\) the results of our numerical calculations for \(p_{n}(x)\) and \(\alpha_{n}(x,Y)\) are presented in Figs. 3, 4 and 5. Once these characteristics of cumulative string formation are known, one can start the Monte-Carlo procedure to finally find the string distributions and cumulative multiplicities. We performed 10000 runs in our Monte-Carlo program. For pA collisions we choose p-Ta at 27.5 GeV. This case at comparatively low energies is hardly suitable for our color string picture (the length of cumulative strings is then restricted by \(Y/2\sim 3\)). We choose it having in mind the existing old experimental data on cumulative pion production [12, 13]. For AA we considered Cu-Cu and Au-Au collisions at 200 GeV (RHIC) and Pb-Pb collisions at 5.02 TeV (LHC).

Figure 3: Probabilities \(p_{n}(x)\). They are different from zero in the interval \(1<x<n\).

Figure 4: Powers \(\alpha_{n}(x)\) at \(\sqrt{s}=27.5\) and 200 GeV. Curves from bottom to top correspond to \(n=2,3,4,5\).

### p-Ta at 27.5 GeV

The described numerical calculation gave the numbers of fused strings (NFS) shown in Table 1. The data for \(n=1\) give the number of non-fused strings, that is with \(x_{1}\leq 1\); they are given only for comparison of fused and non-fused strings.
We repeat that the data refer only to the cumulative situation when the number of strings is restricted to two for each nucleon in the nucleus. Should one lift this restriction, the numbers would considerably grow and strongly increase with energy.

Figure 6: Multiplicities per unit rapidity for production of cumulative pions at cumulativity \(x\geq 1\) in p-Ta collisions at 27.5 GeV.

### AA

In this case the distribution of cumulative strings and multiplicities depends on the impact parameter \(b\). We roughly split our results into three categories depending on the value of \(b\): central with \(0\leq b\leq 0.4R_{A}\), mid-central with \(0.4R_{A}\leq b\leq 0.8R_{A}\) and peripheral with \(b\geq 0.8R_{A}\), where \(R_{A}=1.2A^{1/3}\) fm is the effective "nucleus radius". The distribution of cumulative strings is shown in Tables 2, 3 and 4 for Cu-Cu and Au-Au collisions at 200 GeV and Pb-Pb collisions at 5.02 TeV.

**Table 2** Cu-Cu at 200 GeV

| \(n\) | NFS central | NFS mid-central | NFS peripheral |
| --- | --- | --- | --- |
| 1 | 166 | 36 | 22 |
| 2 | 20 | 7.5 | 3.2 |
| 3 | 4.8 | 1.5 | 0.40 |
| 4 | 0.96 | 0.2 | 0.042 |
| 5 | 0.16 | 0.039 | 0.030 |

**Table 3** Au-Au at 200 GeV

| \(n\) | NFS central | NFS mid-central | NFS peripheral |
| --- | --- | --- | --- |
| 1 | 139 | 76 | 46 |
| 2 | 62 | 26 | 11 |
| 3 | 24 | 8.0 | 2.6 |
| 4 | 7.4 | 2.0 | 0.50 |
| 5 | 2.06 | 0.48 | 0.077 |

**Table 4** Pb-Pb at 5.02 TeV

| \(n\) | NFS central | NFS mid-central | NFS peripheral |
| --- | --- | --- | --- |
| 1 | 148 | 80 | 47 |
| 2 | 66 | 27 | 12 |
| 3 | 25 | 8.5 | 2.8 |
| 4 | 8.2 | 2.3 | 0.52 |
| 5 | 2.26 | 0.47 | 0.079 |

The corresponding multiplicities per unit rapidity are shown in Figs. 7 and 8. The total multiplicities obtained after integration over all \(b\) are illustrated in Figs. 9 and 10.

Figure 7: Multiplicities per unit rapidity for production of cumulative pions at cumulativity \(x\geq 1\) in central (upper curve), mid-central (middle curve) and peripheral (lower curve) regions in Cu-Cu and Au-Au collisions at 200 GeV.

Figure 8: Multiplicities per unit rapidity for production of cumulative pions at cumulativity \(x\geq 1\) in central (upper curve), mid-central (middle curve) and peripheral (lower curve) regions in Pb-Pb collisions at 5.02 TeV.

## 7 Discussion

In all cases our obtained multiplicities as a function of cumulativity have a simple exponential form at \(1<x<3\): \[\mu(x)=Ce^{-\beta x},\ \ 1<x<3. \tag{30}\]
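As a minimal sketch of how the slope of Eq. (30) can be extracted, the Python snippet below performs a straight-line fit of \(\ln\mu\) versus \(x\); the sample \((x,\mu)\) pairs are hypothetical placeholder values chosen only to illustrate the procedure, not the computed curves of Figs. 6-10.

```python
import numpy as np

# Hypothetical (x, mu) pairs in 1 < x < 3, used only to illustrate the fit;
# in practice these would be the computed cumulative multiplicities.
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
mu = np.array([4.0, 0.32, 2.6e-2, 2.1e-3, 1.7e-4])

# Fit ln(mu) = ln(C) - beta * x, i.e. the exponential form of Eq. (30).
slope, intercept = np.polyfit(x, np.log(mu), 1)
beta, C = -slope, np.exp(intercept)
print(f"beta = {beta:.2f}, C = {C:.0f}")   # ~5.0 and ~600 for these sample numbers
```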
Figure 9: Total multiplicities per unit rapidity for production of cumulative pions at cumulativity \(x\geq 1\) in Cu-Cu and Au-Au collisions at 200 GeV.

The value of \(\beta\) turns out to be of the order of 5 and varies weakly between the different cases. For p-Ta at 27.5 GeV we find \(\beta=5.0\). For Cu-Cu and Au-Au at 200 GeV we obtain \(\beta=5.6\) and 5.2, respectively. Finally, for Pb-Pb at 5.02 TeV we get \(\beta=5.3\). We do not see any conclusive explanation for this small variation, which may arise from the differences in energy and nuclear wave function and from the limited number of runs. As to the coefficient \(C\), its value for p-Ta is 657, and for central Cu-Cu, Au-Au and Pb-Pb it is 166, 517 and 572, respectively. Inspecting all cases, we see that cumulative production of pions shows a large degree of universality, which is typical for the fragmentation region of particle production. The slope \(\beta\) is to a very large degree explained by the overlapping of individual nucleons in the nucleus, and roughly comes from the probability to find \(n>2\) nucleons occupying the same area in the nucleus. It is essential, however, that in the color string picture the overlapping of nucleons only comes into play through the formation of strings, and thus through the interaction with the target. So, in contrast to the very old (not to say ancient) idea of "fluctons", pre-existing objects in the nucleus with a larger mass capable of producing particles in the cumulative kinematics, the string picture does not assume fluctons in the nucleus from the start. Very recently, an experimental study of cumulative production was performed with Cu-Au collisions at 200 GeV [18]. Cumulative jets were detected with c.m. energy \(E\) greater than allowed by proton-proton kinematics (\(E<100\) GeV). The data have been presented in two forms: raw data coming from the detectors, and so-called unfolded data which presumably take into account distortions due to different sources of the concrete experimental setup. Remarkably, in the cumulative region the raw data fall exponentially as given by (30), with a slope \(\beta\simeq 5.1\) that practically depends neither on centrality nor on jet characteristics. Our results in this paper refer rather to collisions of identical nuclei, such as Cu-Cu or Au-Au. But, as we argued, the cumulative production in the projectile region does not depend on the target, whose influence is only felt via the overlap in the transverse space. With different nuclei the number of active nucleons will be different, but in our picture this influences only the magnitude of the production rate and not its \(x\)-dependence. The observed slope \(\beta=5.1\) therefore agrees well with our predictions. On the other side, the unfolded data have a different \(x\)-dependence of the power form \[\frac{dN}{NdE}=\Big{(}1-\frac{E}{E_{0}}\Big{)}^{p}\Big{(}\frac{E}{E_{0}}\Big{)}^{-q},\] with \(p\) and \(q\) adjusted to the data and \(E_{0}=163\) or 193 GeV, depending on the cone width of the jet but not on centrality. These unfolded data do not behave in accord with our predictions. This discrepancy may proceed from our simplified picture of parton fragmentation (our partons go into pions with 100% probability) and certainly deserves better study, both of our treatment and of the experimental subtleties.

Figure 10: Total multiplicities per unit rapidity for production of cumulative pions at cumulativity \(x\geq 1\) in Pb-Pb collisions at 5.02 TeV.

As mentioned in the Introduction, cumulative pions were also seen in the HIJING and DPMJET models of Cu-Au collisions.
Since the authors were mostly interested in the overall spectra, no special attention or analysis was given to the cumulative region. One can note, however, that the two models give different predictions for cumulative pions (HIJING gives more particles and with greater energies) and that in HIJING a rather strong centrality dependence was found. This latter property contradicts both our model and the experimental findings. Note in conclusion that the flucton idea for cumulative production cannot be discarded altogether. One can envisage the formation of a very fast particle already before the interaction. As mentioned in the Introduction, cumulative production within this picture corresponds to the so-called spectator mechanism. In this approach one expects that the leading particle will be one of the nucleons from the fast nucleus itself. The possibility of its formation was discussed in our old paper in the framework of a simple quark-parton model [4]. It was later shown that for cumulative protons this spectator mechanism gives the bulk of the contribution, while the direct mechanism considered in this paper is suppressed [6]. This may explain why, when applied to cumulative proton production in p-Ta collisions at 27.5 GeV, the color string approach gave multiplicities two orders of magnitude smaller than the experiment [11].
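As a closing illustration, the slope \(\beta\) and normalization \(C\) of Eq. (30) can be extracted from the tabulated multiplicities by a straight-line fit in logarithmic scale. The sketch below is generic Python; the array names are placeholders for the Monte-Carlo output, not quantities defined above.

```python
import numpy as np

def fit_exponential_slope(x, mu, x_min=1.0, x_max=3.0):
    """Fit mu(x) = C * exp(-beta * x) on x_min < x < x_max, cf. Eq. (30)."""
    mask = (x > x_min) & (x < x_max) & (mu > 0)
    slope, intercept = np.polyfit(x[mask], np.log(mu[mask]), 1)
    return -slope, np.exp(intercept)   # (beta, C)
```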
2310.17330
CQM: Curriculum Reinforcement Learning with a Quantized World Model
Recent curriculum Reinforcement Learning (RL) has shown notable progress in solving complex tasks by proposing sequences of surrogate tasks. However, the previous approaches often face challenges when they generate curriculum goals in a high-dimensional space. Thus, they usually rely on manually specified goal spaces. To alleviate this limitation and improve the scalability of the curriculum, we propose a novel curriculum method that automatically defines the semantic goal space which contains vital information for the curriculum process, and suggests curriculum goals over it. To define the semantic goal space, our method discretizes continuous observations via vector quantized-variational autoencoders (VQ-VAE) and restores the temporal relations between the discretized observations by a graph. Concurrently, ours suggests uncertainty and temporal distance-aware curriculum goals that converge to the final goals over the automatically composed goal space. We demonstrate that the proposed method allows efficient exploration in an uninformed environment with raw goal examples only. Also, ours outperforms the state-of-the-art curriculum RL methods on data efficiency and performance, in various goal-reaching tasks even with ego-centric visual inputs.
Seungjae Lee, Daesol Cho, Jonghae Park, H. Jin Kim
2023-10-26T11:50:58Z
http://arxiv.org/abs/2310.17330v1
# CQM: Curriculum Reinforcement Learning ###### Abstract Recent curriculum Reinforcement Learning (RL) has shown notable progress in solving complex tasks by proposing sequences of surrogate tasks. However, the previous approaches often face challenges when they generate curriculum goals in a high-dimensional space. Thus, they usually rely on manually specified goal spaces. To alleviate this limitation and improve the scalability of the curriculum, we propose a novel curriculum method that automatically defines the semantic goal space which contains vital information for the curriculum process, and suggests curriculum goals over it. To define the semantic goal space, our method discretizes continuous observations via vector quantized-variational autoencoders (VQ-VAE) and restores the temporal relations between the discretized observations by a graph. Concurrently, ours suggests uncertainty and temporal distance-aware curriculum goals that converges to the final goals over the automatically composed goal space. We demonstrate that the proposed method allows efficient explorations in an uninformed environment with raw goal examples only. Also, ours outperforms the state-of-the-art curriculum RL methods on data efficiency and performance, in various goal-reaching tasks even with ego-centric visual inputs. ## 1 Introduction Goal-conditioned Reinforcement Learning (RL) has been successfully applied to a wide range of decision-making problems allowing RL agents to achieve diverse control tasks [42; 1; 30]. However, training the RL agent to achieve desired final goals without any prior domain knowledge is challenging, especially when the desired behaviors can hardly be observed. In those situations, humans typically adopt alternative ways to learn the final goals by gradually mastering intermediate sub-tasks. Inspired by the way humans learn, recent RL studies [29; 10; 6] have solved uninformed exploration tasks by suggesting which goals the agent needs to practice. In this sense of generating curriculum goals, previous approaches proposed various ideas to involve providing intermediate-level tasks [10; 38], quantifying the uncertainty of observations [4; 31; 33; 25; 18], or proposing contextual distance to gradually move away from the initial distribution [15; 6]. However, previous curriculum RL studies are mostly not scalable. Namely, they suffer from serious data inefficiency when they generate curriculum goals in high dimensions. Because of this limitation, they usually rely on the assumption that manually specified goal spaces (e.g., global X-Y coordinates) and clear mappings from high-dimensional observations to the low-dimensional goal spaces are available. Such an assumption requires prior knowledge about observations and the tasks, which remains a crucial unsolved issue that restricts the applicability of previous studies. In order to design a general curriculum solution without the need for prior knowledge about the observations, defining its own goal space for the curriculum could be an effective scheme. To do so, the two following operations need to be executed concurrently. (1) composing the _semantic goal space_ which contains vital information for the curriculum process from the arbitrary observation space, and (2) suggesting to the agent which goal to practice over the goal space. Let us consider an agent that tries to explore an uninformed environment with final goal images only. 
To succeed, the agent needs to specify the semantic goal space from the high-dimensional observation space, and suggest the curriculum goals (e.g., the frontier of the explored area) over the composed goal space to search the uninformed environment. However, most previous studies focused solely on one of these, specifying the low-dimensional goal space without considering how to provide intermediate levels of goals [17; 27], or just suggesting curriculum goals in manually specified semantic goal spaces [10; 21; 6]. The challenge of simultaneously managing (1) specifying the goal space and (2) suggesting curriculum goals is that they are intimately connected to each other. If the agent constructs an ill-formed goal space from the observation space, it would be difficult to propose curriculum goals over it. Conversely, if the method fails to suggest goals to enable the agents to explore the unseen area, it would also be difficult to automatically learn the goal space that covers the uninformed environment based on the accumulated observations. Therefore, it is essential to develop an algorithm that addresses both defining goal space and providing curriculum goals concurrently. In this paper, we propose a novel curriculum reinforcement learning (RL) method which can provide a general solution for a final goal-directed curriculum without the need for prior knowledge about the environments, observations, and goal spaces. First, our method defines its own semantic goal space by quantizing the encoded observations space through a discretization bottleneck and restoring the temporal relations between discrete goals via a graph. Second, to suggest calibrated guidance towards unexplored areas and the final goals, ours proposes uncertainty and temporal distance-aware curriculum goals that converge to the final goal examples. The key contributions of our work (CQM: Curriculum RL with **Q**unatized World **M**odel) are: * CQM solves general exploration tasks with the desired examples only, by simultaneously addressing the specification of a goal space and suggestion of curriculum goals (Figure 3). * CQM is the _first_ curriculum RL approach that can propose calibrated curriculums toward final goals from high-dimensional observations, to the best of our knowledge. * CQM is the _only_ curriculum RL method that demonstrates reliable performance despite an increase in the problem dimension, among the 10 methods that we experimented with. (Even state-based \(\rightarrow\) vision-based) * Ours significantly outperforms the state-of-the-art curriculum RL methods on various goal reaching tasks in the absence of a manually specified goal space. ## 2 Related Works Curriculum Goal Generation.Although various prior studies [41; 19; 48; 8; 45; 22] have been proposed to solve exploration problems, enabling efficient searching in uninformed environments still Figure 1: CQM simultaneously tackles the interrelated problems of specifying the goal space and suggesting which goal the agent needs to practice. CQM trains a VQ-VAE to form a discretized goal space and constructs a graph over it, capturing the relations between the discretized observations (landmarks). Concurrently, CQM suggests the agent which goal to practice based on uncertainty and temporal distance. remains a challenge. An effective way to succeed in such tasks with hardly observed final goals is **identifying uncertain areas** and instructing an agent to achieve the goals in these areas. 
To identify the uncertain areas and provide the goals sampled from them, previous studies employ uncertainty-based curriculum guidance by the state visitation counts [4; 31], absolute reward difference [37], and prediction model [32; 5]. Other approaches propose to utilize disagreements of ensembles[33; 50; 28] or sample the tasks with high TD errors [18] to generate goals in uncertain areas. An alternative way for solving the exploration tasks is to execute a **final goal-directed exploration** to propose tailored guidance. To this end, some studies perform successful example-based approaches [25; 6] or propose to minimize the distance between the final goals and curriculum goals [38; 21], measuring it by the Euclidean distance metric. Some studies also employ contextual distance-based metrics to perform final goal-directed **exploration away from the initial distribution**[15; 6]. However, these methods usually assume that agents have prior knowledge about the observations and unrestricted access to manually specified semantic goal space (e.g. global X-Y coordinates) because they are not scalable to handle the high-dimensional goal spaces. For example, the meta-learning classifier-based uncertainty metrics [25; 6] suffer from distinguishing uncertain areas as the dimension of the goal space increases. Also, some of the methods rely on Euclidean distance metric [38; 21] over the goal space. Moreover, generating curriculum goals [10], employing various prediction models [32; 5; 33; 50], fitting Gaussian mixture models [37], and utilizing disagreements of ensembles-based methods [33; 50] also face difficulty in solving increasingly complex problems in high-dimensional goal spaces. Although there have been attempts to propose a curriculum in high-dimensional observations [36; 13] or include an encoder in their model-based agent [28; 14], unfortunately, these approaches do not incorporate a convergence mechanism to the final goals, which are crucial for efficient curriculum progresses. Our method incorporates the benefits of the aforementioned methods without manually specified goal spaces: **exploring uncertain areas** and **moving away from the initial distribution** while **converging to the desired outcome**. Although there is a study that has also incorporated these three aspects [6], it retains its performance only in manually specified goal spaces, as other curriculum methods (Figure 2). (We included conceptual comparisons between CQM and more related works in Table 1 in Appendix B.) Discretizing Goal Space for RL.Vector quantized-variational autoencoder (VQ-VAE) is an autoencoder that learns a discretized representation using a learnable codebook. The use of this discretization technique for learning discrete representations in RL is a recent research topic [17; 27] and has shown an improved sample efficiency. Islam et al. [17] proposes to apply VQ-VAE as a discretization bottleneck in a goal-conditioned RL framework and demonstrates the efficiency of representing the continuous observation spaces into discretized goal space. Also, Mazzaglia et al. [27] utilizes VQ-VAE to discover skills in model-based RL by maximizing the mutual information between skills and trajectories of model states. Unfortunately, the aforementioned methods require a pre-collected dataset or extra exploration policy, which are not necessary in CQM. 
Training VQ-VAE with a pre-collected dataset implies that the agent has access to the full information about the task or that it already possesses an agent capable of performing the task well. Although it is possible to obtain a pre-collected dataset through a random rollout policy, this is only in the case where exploring the environments is easy enough to succeed with only random actions. ## 3 Preliminaries We consider a Markov Decision Process (MDP) which can be represented as a tuple \((\mathcal{O},\mathcal{A},\mathcal{T},\mathcal{R},\rho_{0},\gamma)\), where \(\mathcal{O}\) is an observation space, \(\mathcal{A}\) is an action space, \(\mathcal{T}(o_{t+1}|o_{t},a_{t})\) is a transition function, \(\rho_{0}\) is an initial distribution, and \(\gamma\) is a discount factor. Note that the MDP above Figure 2: When the curriculum methods are not scalable to handle the high-dimensional goal; the performance drop in the absence of manually specified goal space. does _not_ contain a goal space, since we do not assume that the manually specified goal space is provided. Instead, we consider a discrete low-dimensional discrete goal space \(\mathcal{G}\), which is defined by the agent automatically. Also, we assume that the final goal examples \(o^{f}\) are provided by the environment, and we denote the projection of these examples into the goal space \(\mathcal{G}\) as \(g^{f}\). We represent curriculum goal as \(g^{c}\), which is sampled from the goal space \(\mathcal{G}\), and the reward function in the tuple can be represented as \(\mathcal{R}:\mathcal{O}\times\mathcal{A}\times\mathcal{G}\rightarrow\mathbb{R}\). Furthermore, we denote actor network as \(\pi:\mathcal{O}\times\mathcal{G}\rightarrow\mathcal{A}\), and critic network as \(Q:\mathcal{O}\times\mathcal{A}\times\mathcal{G}\rightarrow\mathbb{R}\). (Thus, \(Q(o,a,g)\) indicates the goal-conditioned state-action value where goal \(g\), action \(a\), and observation \(o\) throughout this paper.) ## 4 Method In order to provide a general solution for efficient curriculum learning, our method defines its own goal space and suggests to the agent which goal to practice over the goal space simultaneously. To compose a _semantic goal space_ which reduces the complexity of observation space while preserving vital information for the curriculum process, we first quantize the continuous observation space using a discretization bottleneck (section 4.1) and restore temporal relations in discretized world model via graph (section 4.2). Over the automatically specified semantic goal space, we generate a curriculum goal and guide the agent toward achieving it (section 4.3). ### Specifying Goal Space via VQ-VAE In order to define a discrete low-dimensional goal space which allows a scalable curriculum with high-dimensional observations, we utilize VQ-VAE as a discretization bottleneck [46; 40; 47] as recently been proposed [17; 27]. VQ-VAE utilizes a codebook composed of \(k\) trainable embedding vectors (codes) \(e_{i}\in R^{D},i\in{1,2,...k}\), combined with nearest neighbor search to learn discrete representations. The quantization process of an observation \(o_{t}\) starts with passing \(o_{t}\) through the encoder \(\phi\). The resulting encoded vector \(z_{e}=\phi(o_{t})\) is then mapped to an embedding vector in the codebook by the nearest neighbor look-up as \[z_{q}=e_{c},\quad\mathrm{where}\ c=\mathrm{argmin}_{j}||z_{e}-e_{j}||_{2}. 
\tag{1}\] The discretized vector \(z_{q}\) is then reconstructed into \(\hat{o}_{t}=\psi(z_{q})\) by passing through the decoder \(\psi\). We closely follow [17] and train quantizer, encoder, and decoder using a vector quantization loss with a simple reconstruction loss. The first term in Eq. 2 represents the reconstruction loss, while the second term represents the VQ objective that moves the embedding vector \(e\) towards the encoder's output \(z_{e}\). We update the embedding vectors \(e\) using a moving average instead of a direct gradient update [17; 27]. The last term is the commitment loss, and we use the same \(\lambda_{\mathrm{commit}}\) (\(=0.25\)) across all experiments. \[L_{\mathrm{VQ}}=||\psi(\phi(o_{t}))-o_{t}||_{2}^{2}+||\mathrm{SG}[z_{e}]-e||_ {2}^{2}+\lambda_{\mathrm{commit}}||z_{e}-\mathrm{SG}[e]||_{2}^{2},\quad( \mathrm{SG}:\mathrm{stop\ gradient}) \tag{2}\] By utilizing VQ-VAE, the RL agent can specify a quantized goal space that consists of discrete landmarks \(L=\{l_{1},l_{2},\cdots l_{m}\}\) in two ways. The first approach is to obtain each landmark by decoding each code as \(l_{j}=\psi(e_{j})\). Alternatively, one can obtain the landmarks by passing (encoding, quantizing to the closest embedding vector, and decoding) the continuous observations sampled from the replay buffer through the VQ-VAE. We utilize the first approach as the default, and provide the ablation study to examine the effectiveness of the second approach. It should be noted that this set of landmarks \(L=\{l_{1},l_{2},\cdots l_{m}\}\) only represents discretized observations and does _not_ involve relations among the observations. We describe our approach that can better restore the temporal relations between landmarks in the next section. ### Graph Construction over Quantized Goal Space In this section, we present the graph construction technique over the quantized goal space to allow the agent to capture the temporal information between the landmarks of the quantized world model. We consider a graph \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) over the goal space where the vertices \(\mathbf{V}\) represent the landmarks \(L=\{l_{1},l_{2},\cdots l_{m}\}\) obtained from decoding discrete codes of VQ-VAE, and the edges \(\mathbf{E}\) represent the temporal distance. We utilize Q-value to reconstruct the edge costs, following the method proposed in [49; 24]. If an agent receives a reward of 0 when reaching a goal and -1 otherwise, the timesteps required to travel between landmarks can be estimated using Q-value as (derivation: Appendix A) \[\mathrm{TemporalDist}(l_{i}\to l_{j})=log_{\gamma}(1+(1-\gamma)Q(l_{i},a,l_{j})). \tag{3}\] Using Eq. 3, we connect vertices with the distance below the cutoff threshold, and the resulting graph restores the temporal relations between the landmarks over a discretized goal space based on the temporal distance. In this way, the agent can calculate geodesic distances between landmarks, \[\mathrm{TemporalDist}^{\mathbf{G}}(l_{0}\to l_{f})=\Sigma_{(l_{i}\to l_{j}) \in\mathrm{shortest\ path}(l_{0}\to l_{f})}\mathrm{TemporalDist}(l_{i}\to l_{j}), \tag{4}\] which enables better prediction of temporal distances in environments with arbitrary geometric structures. Also, to incorporate extended supports of the explored area into the graph by creating landmarks in newly explored areas, we periodically reconstructed the graph following [24]. 
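For concreteness, the conversion of Q-values into edge costs (Eq. (3)) and the geodesic distance over the resulting graph (Eq. (4)) can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: `q_fn` stands for the learned goal-conditioned value \(Q(l_i,a,l_j)\) evaluated with the policy's action and returning a scalar, and the cutoff value is a placeholder.

```python
import math
import networkx as nx

def temporal_dist(q_value, gamma):
    """Eq. (3): estimated timesteps between two landmarks,
    assuming the 0 / -1 sparse goal-reaching reward."""
    return math.log(1.0 + (1.0 - gamma) * q_value, gamma)

def build_landmark_graph(landmarks, q_fn, gamma=0.99, cutoff=15.0):
    """Connect l_i -> l_j whenever the estimated temporal distance
    falls below the cutoff threshold (placeholder value)."""
    g = nx.DiGraph()
    g.add_nodes_from(range(len(landmarks)))
    for i, l_i in enumerate(landmarks):
        for j, l_j in enumerate(landmarks):
            if i != j:
                d = temporal_dist(q_fn(l_i, l_j), gamma)
                if d < cutoff:
                    g.add_edge(i, j, weight=d)
    return g

def geodesic_dist(g, i, j):
    """Eq. (4): sum of edge costs along the shortest path in the graph."""
    return nx.shortest_path_length(g, source=i, target=j, weight="weight")
```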
### Uncertainty and Temporal Distance-Aware Curriculum Goal Generation In the previous sections, we proposed a method for specifying a discretized goal space with semantic information. It is important to provide curriculum goals located in the frontier part of the explored area to expand the graph effectively toward the final goal in an uninformed environment. To achieve this objective, we propose an uncertainty and temporal distance-aware curriculum goal generation method. To obtain a curriculum goal from graph \(\mathbf{G}=(\mathbf{V},\mathbf{E})\) over the specified goal space, our method samples the landmarks that are considered uncertain and temporally distant from the initial distribution \(\rho_{0}\). Thanks to the quantized world model, quantifying uncertainty in a countable goal space is straightforward and computationally light. We quantify the count-based uncertainty of each landmark as \(\eta_{\mathrm{ucert}}(l_{i})=1/(\mu(l_{i})+\epsilon)\), based on the empirical distribution \(\mu(l_{i})\) derived from the recent observations as \[\mu(l_{i})=\frac{N(l_{i})}{\sum_{i=1}^{k}N(l_{i})}, \tag{5}\] where \(N(l_{i})\) indicates the number of occurrences of landmark \(l_{i}\) in recent episodes and is periodically re-initialized when the graph is reconstructed with a new set of landmarks. Finally, we deliver the sampled landmarks as curriculum goals to the agent, considering both temporal distance and uncertainty aspects: \[\mathrm{argmax}_{l_{i}\in L^{\mathrm{top-k}}}\Big{[}\eta_{\mathrm{ucert}}(l_{ i})\cdot u_{i}\Big{]} \tag{6}\] where \(u_{i}\) is a uniform random variable between 0 and 1 used to perform weighted sampling, and \(L^{\mathrm{top-k}}\) represents a subset of \(L\) that includes the top-k elements with the largest \(\mathrm{TemporalDist}^{\mathbf{G}}\) (Eq. 4) values from initial state distribution. Our method, based on the uncertainty and temporal distance-aware objective (Eq. 6), is capable of providing calibrated curriculum guidance to the agent even in environments with arbitrary geometric structures, without requiring prior knowledge of the environment or a manually specified goal space. Furthermore, the curriculum guidance makes composing the goal space easier, illuminating the unexplored areas and vice versa. Convergence to the final goal.The curriculum objective in Eq.6 provides a calibrated curriculum towards unexplored areas. In addition to this frontier-directed method, providing final goal-directed guidance can further improve the efficiency of exploration especially when the agent sufficiently explored the environment, i.e., the supports of the final goal and explored area start to overlap [35; 6]. In order to acquire the capability to propose a final goal-directed curriculum, we gradually shift the direction of exploration from the frontier of the explored area to the final goal distribution. To do so, we determine whether to provide the _curriculum goal_\(g^{c}\in\mathcal{G}\) that is sampled via Eq. 6 or the _final goal_\(g^{f}=\phi(o^{f})\in\mathcal{G}\) which the environment originally provided (\(\psi(g^{c})\in\mathcal{O},o^{f}\in\mathcal{O}\)). We utilize a mixture distribution of curriculum goals, following the approach proposed in [35], \[p_{g^{c^{\prime}}}=\alpha p_{g^{f}}+(1-\alpha)p_{g^{c}}, \tag{7}\] where \(p_{g^{f}}\) is the distribution of the final goal, and \(p_{g^{c}}\) is the distribution of curriculum goals. 
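Before returning to the mixture ratio \(\alpha\) in Eq. (7), the sampling rule of Eqs. (5)-(6) can be sketched as follows; this Python sketch is illustrative, and the top-k size and \(\epsilon\) below are placeholder choices rather than the authors' settings.

```python
import numpy as np

def sample_curriculum_landmark(landmarks, visit_counts, dists_from_init, k=10, eps=1e-6):
    """Eqs. (5)-(6): pick an uncertain, temporally distant landmark.
    visit_counts[i]    -- N(l_i) over recent episodes
    dists_from_init[i] -- TemporalDist^G from the initial distribution to l_i
    """
    counts = np.asarray(visit_counts, dtype=float)
    mu = counts / max(counts.sum(), 1.0)                 # empirical distribution, Eq. (5)
    uncertainty = 1.0 / (mu + eps)                       # count-based uncertainty
    top_k = np.argsort(dists_from_init)[-k:]             # most distant landmarks
    scores = uncertainty[top_k] * np.random.uniform(size=len(top_k))   # Eq. (6)
    return landmarks[int(top_k[np.argmax(scores)])]
```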
The mixture ratio \(\alpha\) measures whether the achieved goal distribution \(p_{ag}\) "covers" the final goal distribution using KL divergence as \(\alpha=1/\max\big{(}\beta+\kappa D_{\mathrm{KL}}(p_{g^{f}}||p_{ag}),1\big{)}\). When the support of achieved goal distribution \(p_{ag}\) (= visited state distribution) covers that of the final goal distribution \(p_{g^{f}}\), \(\alpha\) produces a value close to 1, and a value close to 0 when the supports of both distributions are not connected. By combining the curriculum goal objective (Eq. 6) with the mixture strategy (Eq. 7), our approach generates instructive curriculum goals towards unexplored areas and provides the curriculum goals \(g^{c^{\prime}}\) to the agent that "cover" the final goal distribution at the appropriate time when the agent is capable of achieving the final goal. Planning over the graphAs presented above, CQM constructs a graph to restore the temporal relations between landmarks (Section 4.2) and utilizes it to calculate geodesic distances (Eq. 4). In addition to these benefits, we highlight that the graph can also provide the strength of planning, which allows the agent to reason over long horizons effectively [9; 16; 13; 49; 3; 24]. To generate a sequence of waypoints for achieving each goal, we perform shortest path planning (Dijkstra's Algorithm), following the details proposed in the previous graph-guided RL method [24]. Consider a task of reaching a curriculum goal \(g^{c}\in\mathcal{G}\) from the current observation \(o_{0}\in\mathcal{O}\). CQM first adds the encoded observation \(\phi(o_{0})\) to the existing graph structure. Then, it finds the shortest path between the curriculum goal and current observation to return a sequence of waypoints \((\phi(o_{0}),w_{1},...,w_{n},g^{c})\) where \(n\) indicates the number of waypoints in the shortest path. Finally, the agent is guided to achieve each decoded waypoint \(\psi(w_{i})\) during \(\mathrm{TemporalDist}(\psi(w_{i-1})\rightarrow\psi(w_{i}))\) (Eq. 3) timesteps, rather than achieving the curriculum goal directly. In other words, the RL agent produces goal-conditioned action \(\pi(\cdot|o_{t},w_{i})\), where \(o_{t}\) and \(w_{i}\) is observation and the waypoint (acting as a goal) respectively. After reaching the final waypoint \(\psi(w_{n})\), the agent receives the original curriculum goal, \(g^{c}\). The only change when the agent attempts to achieve the final goal \(g^{f}\) is that \(g^{f}\) comes at the end of the sequences, \((\phi(o_{0}),w_{1},...,w_{n},g^{f})\), rather than \(g^{c}\). In this way, the proposed approach not only provides a tailored curriculum for achieving the final goal but also allows the agent to access more elaborate instructions (waypoints) for practicing each curriculum goal. ## 5 Experiments The main goal of the experiments is to demonstrate the capability of the proposed method (CQM) to suggest a well-calibrated curriculum and lead to more sample-efficient learning, composing the goal space from the arbitrary observation space. To this end, we provide both qualitative and quantitative Figure 3: Left: changes in the discretized goal space of the CQM(ours) as learning progresses. Right: visualization of the curriculum goals proposed by the CQM and baseline algorithms. results in seven goal-reaching tasks including two visual control tasks, which receive the raw pixel observations from bird's-eye and ego-centric views, respectively. (refer to Appendix C for the detailed configurations of each task.) 
We compare our approach with previous curriculum RL methods and previous graph-guided RL methods. We do _not_ provide manually specified goal space in any of the environments; the agent could not map its global X-Y coordinates from the full observation which includes all the state variables for the RL agents (e.g. angle and angular velocity of the joint, position, velocity...). Also, the results of CQM and the baselines that utilize external reward functions (all the methods except OUTPACE [6]) are obtained by using sparse reward functions. For the baselines that could not be applied in vision-based environments [24; 6], we utilize an extra autoencoder with auxiliary time-contrastive loss [44; 13]. The baselines are summarized below: **OUTPACE**[6] proposes uncertainty and temporal distance-aware curriculum learning based on the Wasserstein distance and uncertainty classifier. **CURROT**[21] interpolates the distribution of the curriculum goals and the final goals based on the performance of the RL agent. **GoalGAN**[10] proposes the goals with appropriate levels of difficulty for the agent Using a Generative Adversarial Network. **PLR**[18] selectively samples curriculum goals by prioritizing the goals with high TD estimation errors. **ALP-GMM**[37] selects the goals based on the difference of cumulative episodic reward between the newest and oldest tasks using Gaussian mixture models. **VDS**[50] proposes the goals that maximize the epistemic uncertainty of the action value function of the policy. **DHRL**[24] constructs a graph between both levels of HRL, and proposes frontier goals when the random goals are easy to achieve. However, the original DHRL could not generate curriculum goals without the help of the environment. Thus we evaluated a variant of DHRL (DHRL+) with a modified frontier goal proposal module and architecture (Appendix D.3), in Figure 4: **(Lower is better) Distance from the curriculum goals to the final goals (PointNMaze, PointSpiralMaze, and AntUMaze). In the ‘n-way’ environments with multiple goals, we provide \(l2\) distance between the agent and the final goal at the end of the episodes, since calculating the average distance from the curriculum goal to multiple final goals is not possible.** Figure 5: **(Higher is better) Success rates of the results. The curves of baselines are not visible in some environments as they overlap each other at zero success rate. Shading indicates a standard deviation across 4 seeds.** addition to the original DHRL. **SFL**[13] constructs a graph based on successor features and proposes uncertainty-based curriculum goals. (refer to Appendix D for detailed implementations) ### Experimental Results First, we visualize the quantitative results to show whether the proposed method successfully and simultaneously addresses the two key challenges: 1) specifying goal space from arbitrary observation space and 2) suggesting a well-calibrated curriculum to achieve the final goal. Figure 3 illustrates the curriculum goals and changes in discrete goal space (graph) of CQM as learning progresses. Each node in the graph consists of the decoded embedding vectors of VQ-VAE, and each edge represents reachability between the decoded embeddings. The graphs of CQM in the figure gradually expand towards unexplored areas as the learning progresses, since the calibrated curriculum goal induces the agent to explore the unexplored area. 
In the opposite direction as well, the capability of providing proper curriculum goals on arbitrary geometric structures is facilitated by a compact goal space that contains semantic information which enables estimating the uncertainty and temporal distance well. As a result, our method provides tailored curriculum guidance across the environments, while the baselines suffer from the absence of the manually specified goal space. We also provide the quantitative results in Figures 4 and 5. Figure 4 indicates that the proposed method (CQM) can suggest a tailored sequence of goals that gradually converges to the final goal distributions while instructing the agent to achieve the increasingly difficult goals. Also, as shown in Figure 5, ours consistently outperforms both the prior curriculum method and graph-RL methods. It is noticeable that CQM is the only method that shows robust performance to the variation of the goal dimension, while other methods suffer from serious data inefficiency, especially in the tasks with higher-dimensional goal space (suffering more in Ant (29dims) compared to Point (6dims)). Curriculum learning and planning in visual control tasks.To validate the performance of the RL agent and the quality of generated curriculum goals in higher dimensional tasks, We conducted two additional vision-based goal-reaching tasks. PointNMAE-Viz receives only ego-centric view images to reach the goal, while PointSpiralMaze-Viz receives bird's-eye view images. Figure 6 visualizes the curriculum goals in the order of the episodes, and how the agent utilizes the benefit of planning over the discrete goal space in order to achieve the curriculum goals. To achieve an image-based final goal (**Goal: 8**), the agent generates the sequence of images (**(1, 2, 3,..., 8)**) as waypoints, and tries to achieve the waypoints sequentially. Interestingly, despite a significant increase in the observation dimension, CQM does not suffer from significant performance degradation in terms of data efficiency, which indicates that CQM effectively reduces the complexity of goal space by constructing a semantic goal space. We emphasize that the performance of our algorithm does not show significant differences between state-based and image-based environments (Compare PointNMAE in Figures 4 and 6). Another interesting point is that CQM can fully enjoy the advantage of planning over the discretized goal space, even in vision-based control tasks where the agent does not receive information about its global X-Y coordinates explicitly. These results validate that CQM possesses robust performance in terms of the dimensionality of the goal space, and the capability in extracting temporal relations between the discretized landmarks. ### Ablation Studies Curriculum guidance.First of all, we examine how important curriculum guidance is for an agent to solve goal-conditioned tasks. As shown in Figure 7, when only the final goal is provided without a tailored curriculum (**-w/o Curriculum**), the RL agent has difficulty achieving the final goal directly. Figure 6: Left: the distance from the agent to the final goals (**Lower is better**). Right: visualization of curriculum goals and waypoints of planning over the graph (CQM). Furthermore, we found that providing curriculum guidance greatly affects the goal space specification module and the absence of a curriculum leads to the ill-formed discrete goal space that barely covers only the observations near the initial distribution. 
We provide these qualitative results in Figures 13, 14 (Appendix E). Types of the discrete goal sampling method.The proposed method (CQM) can use two approaches to sample the landmark to form the discrete goal space as introduced in Section 4.1. The first approach is to decode the embedding vectors of the codebook \(l_{1:m}=\psi(e_{1:m})\), and the other approach is to sample an observation batch from the replay buffer and pass it through VQ-VAE to quantize it (**-Landmark from Replay Buff.**). As shown in Figure 7, there is no significant difference between them in terms of data efficiency. However, in terms of the stability of learning, utilizing the decoded embeddings of VQ-VAE shows better performance in some environments. Effect of the goal convergence method.To provide a final goal-directed exploration in addition to the naive curriculum toward the frontier areas, CQM includes a goal convergence module that guides the agent to practice the final goal after the agent sufficiently explored the environment (Section 4.3). Based on the KL divergence between the achieved goal distribution and the final goal distribution, CQM calculates the ratio of the mixture between the final goals and the frontier goals (the ratio of providing final goals as learning progresses is presented in Figure 11 in Appendix E). As shown in Figure 7, the absence of the final goal convergence method (**-w/o Goal Convergence**) results in unstable performance, since the agent repeatedly practices unexplored areas instead of converging towards the final goal even after the explored area "covers" the final goal distribution. Effect of Graph Construction and Planning.Finally, we examine the effect of constructing graphs and planning on the performance of RL agents. As explained in section 4.2, CQM not only utilizes the decoded embedding vectors from VQ-VAE as a set of discretized observations but also forms a graph by capturing the temporal relations between the discrete observations. First, we evaluated CQM without graph (**-w/o Graph**), which does not construct a graph and measure the distance between the landmarks through naive temporal distance prediction based on Q values (\(\mathrm{TemporalDist}\)), rather than the geodesic distance over the graph (\(\mathrm{TemporalDist}^{\mathbf{G}}\)). Also, we evaluate CQM without planning (**-w/o Planning**) since ours can optionally utilize the benefit of planning and reason over long horizons using the graph. As shown in Figure 7, CQM shows better performance than both CQM without a graph and CQM without planning, especially in some long-horizon tasks (AntUMaze and PointSpiralMaze). ## 6 Conclusions To solve the complex control tasks without the need for a manually designed semantic goal space, we propose to solve both issues of specifying the goal space and suggesting the curriculum goals to the agent. By constructing the quantized world model using the decoded embedding vectors of the discretization bottleneck and restoring the relations between these, CQM considers both the uncertainty and temporal distance and has the capability of suggesting calibrated curriculum goals to the agent. The experiments show that the proposed method significantly improves performance on various vision-based goal-reaching tasks as well as state-based tasks, preventing the performance drop in the absence of a manually specified goal space. 
Figure 7: **(Lower is better) Ablation study: the distance from the agent to the final goals at the end of the episodes.** Limitations and future works.While CQM shows great potential in addressing the limitations of previous studies, more research could further develop it. One area that could be explored is the use of reward-free curriculum learning methods, since CQM still requires minimal human efforts such as defining a success threshold to train agents. Also, this study only used single-code representations with VQ-VAE which would possess a limited capacity of representations, so expanding CQM to include multiple-code representations with discrete factorial representations could be an interesting future direction. ## 7 Acknowledgement This work was supported by Korea Research Institute for defense Technology Planning and advancement (KRIT) Grant funded by Defense Acquisition Program Administration(DAPA) (No. KRIT-CT-23-003, Development of AI researchers based on deep reinforcement learning and establishment of virtual combat experiment environment)
2307.02494
Comparison of Neural FEM and Neural Operator Methods for applications in Solid Mechanics
Machine Learning methods belong to the group of most up-to-date approaches for solving partial differential equations. The current work investigates two classes, Neural FEM and Neural Operator Methods, for the use in elastostatics by means of numerical experiments. The Neural Operator methods require expensive training but then allow for solving multiple boundary value problems with the same Machine Learning model. Main differences between the two classes are the computational effort and accuracy. Especially the accuracy requires more research for practical applications.
Stefan Hildebrand, Sandra Klinge
2023-07-04T06:16:43Z
http://arxiv.org/abs/2307.02494v1
# Comparison of Neural FEM and Neural Operator Methods for applications in Solid Mechanics ###### Abstract Machine Learning methods belong to the group of most up-to-date approaches for solving partial differential equations. The current work investigates two classes, Neural FEM and Neural Operator Methods, for the use in elastostatics by means of numerical experiments. The Neural Operator methods require expensive training but then allow for solving multiple boundary value problems with the same Machine Learning model. Main differences between the two classes are the computational effort and accuracy. Especially the accuracy requires more research for practical applications. Neural FEM; Neural Operator Methods; Machine Learning; Partial Differential Equation, Elastostatics ## 1 Introduction Induced by ever-rising compute power and successful applications in several domains, Artificial Intelligence (AI) systems and especially Machine Learning (ML) methods attract growing attention for advanced tasks in mechanical engineering [1, 2, 3]. This is supported by well-established and flexible Machine Learning frameworks like PyTorch [4] and Tensorflow [5]. One particular application of ML is the solution of Parameterized Partial Differential Equations (PPDE), which are traditionally solved by numerical discretization methods like Finite Element Method (FEM), Finite Difference Method (FDM), Finite Volume Method (FVM) or Boundary Element Method (BEM). Based on ML techniques, two new classes of methods arose, namely the Neural FEM and Neural Operator methods [6]. The aim of the current work is to compare these two classes of methods for applications in solid body mechanics. Therefor, their common representatives are applied to case studies, where the well-established FEM can serve as a benchmark. The mathematical problem to solve with either method can be described as follows: Let an arbitrary Parameterized Partial Differential Equation be given on an open domain \(B\) with piecewise smooth boundary \(\Gamma\) in the form: \[\mathcal{N}[\mathbf{u}(\mathbf{y});\mathbf{y}]=\mathbf{0}\quad\text{on }B,\quad\mathcal{B}[\mathbf{u}(\mathbf{y});\mathbf{y}]=\mathbf{0} \quad\text{on }\Gamma\, \tag{1}\] where \(\mathcal{N}\) is a nonlinear operator on the domain \(B\), \(\mathcal{B}\) an operator on \(\Gamma\) that determines the boundary conditions, and \(\mathbf{u}(\mathbf{y})\in\mathbb{R}^{d}\) the solutions of the PDE. All quantities are parameterized by \(\mathbf{y}\in\mathbb{R}^{n}\). The mapping \[G:\quad B\cup\Gamma\ \times\ \mathbb{R}^{n}\to\mathbb{R}^{d},\quad(\mathbf{X}, \mathbf{y})\mapsto\mathbf{u},\quad\mathbf{X}\in B\cup\Gamma,\quad n,d\in\mathbb{N} \tag{2}\] is called the solution operator of the PPDE.
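As a concrete instance of (1), one may take small-strain linear elastostatics; this is a representative example only and not necessarily the constitutive model used in the case studies below. With \(\Gamma_{D}\) and \(\Gamma_{N}\) denoting the Dirichlet and Neumann parts of \(\Gamma\),

\[\mathcal{N}[\mathbf{u};\mathbf{y}]=\operatorname{Div}\boldsymbol{\sigma}(\mathbf{u})+\mathbf{f}=\mathbf{0}\ \text{on }B,\qquad\boldsymbol{\sigma}=\lambda\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I}+2\mu\,\boldsymbol{\varepsilon},\qquad\boldsymbol{\varepsilon}=\tfrac{1}{2}\big(\nabla\mathbf{u}+\nabla\mathbf{u}^{\mathsf{T}}\big),\]
\[\mathcal{B}[\mathbf{u};\mathbf{y}]=\mathbf{u}-\bar{\mathbf{u}}\ \text{on }\Gamma_{D},\qquad\mathcal{B}[\mathbf{u};\mathbf{y}]=\boldsymbol{\sigma}\,\mathbf{n}-\bar{\mathbf{t}}\ \text{on }\Gamma_{N},\]

where the parameter vector \(\mathbf{y}\) may collect, e.g., the Lamé constants \((\lambda,\mu)\), the body force \(\mathbf{f}\) and the prescribed boundary data \(\bar{\mathbf{u}},\bar{\mathbf{t}}\).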
2310.08210
CLExtract: Recovering Highly Corrupted DVB/GSE Satellite Stream with Contrastive Learning
Since satellite systems are playing an increasingly important role in our civilization, their security and privacy weaknesses are more and more concerned. For example, prior work demonstrates that the communication channel between maritime VSAT and ground segment can be eavesdropped on using consumer-grade equipment. The stream decoder GSExtract developed in this prior work performs well for most packets but shows incapacity for corrupted streams. We discovered that such stream corruption commonly exists in not only Europe and North Atlantic areas but also Asian areas. In our experiment, using GSExtract, we are only able to decode 2.1\% satellite streams we eavesdropped on in Asia. Therefore, in this work, we propose to use a contrastive learning technique with data augmentation to decode and recover such highly corrupted streams. Rather than rely on critical information in corrupted streams to search for headers and perform decoding, contrastive learning directly learns the features of packet headers at different protocol layers and identifies them in a stream sequence. By filtering them out, we can extract the innermost data payload for further analysis. Our evaluation shows that this new approach can successfully recover 71-99\% eavesdropped data hundreds of times faster speed than GSExtract. Besides, the effectiveness of our approach is not largely damaged when stream corruption becomes more severe.
Minghao Lin, Minghao Cheng, Dongsheng Luo, Yueqi Chen
2023-10-12T10:59:26Z
http://arxiv.org/abs/2310.08210v1
# CLE Extract: Recovering Highly Corrupted DVB/GSE Satellite Stream with Contrastive Learning ###### Abstract Since satellite systems are playing an increasingly important role in our civilization, their security and privacy weaknesses are more and more concerned. For example, prior work demonstrates that the communication channel between maritime VSAT and ground segment can be eavesdropped on using consumer-grade equipment. The stream decoder GSEExtract developed in this prior work performs well for most packets but shows incapacity for corrupted streams. We discovered that such stream corruption commonly exists in not only Europe and North Atlantic areas but also Asian areas. In our experiment, using GSEExtract, we are only able to decode 2.1% satellite streams we eavesdropped on in Asia. Therefore, in this work, we propose to use a contrastive learning technique with data augmentation to decode and recover such highly corrupted streams. Rather than rely on critical information in corrupted streams to search for headers and perform decoding, contrastive learning directly learns the features of packet headers at different protocol layers and identifies them in a stream sequence. By filtering them out, we can extract the innermost data payload for further analysis. Our evaluation shows that this new approach can successfully recover 71-99% eavesdropped data hundreds of times faster speed than GSEExtract. Besides, the effectiveness of our approach is not largely damaged when stream corruption becomes more severe. ## I Introduction Satellite systems are becoming infrastructures of modern civilization. It provides a wide range of services including media broadcasts that cover 100 million customers, Earth observation which contributes to environmental conservation efforts, and precise global positioning services. Especially in recent years, the New Space trend [18] significantly advances the development of organizations such as SpaceX and OneWeb that carry space missions like global broadband service. Nowadays, there are more than 2,000 operational satellites that orbit Earth and the market value exceeds $150 billion a year [14]. A satellite system consists of three major components: ground segment, space segment, and ground-space communication. While all components are reported vulnerable in previous works (_e.g.,_[8][15][13][6]), for adversaries, ground-space communication is the most accessible attack surface. Unlike firmware in ground-segment and satellite payload the reverse engineering of which requires physical access, ground-space communication relies on radio signals and covers a large area in the size of million square kilometers. As long as there is an antenna within the area and aligned to the satellite, attackers can eavesdrop on satellite streams and steal sensitive data. A prior work [16] demonstrates this threat by eavesdropping on maritime VSAT communications in the North Atlantic. The stream decoder GSEExtract developed in this work can extract between 40-60% of the GSE PDUs contained within the targeted streams. But for corrupted packets, GSEExtract shows incapacity by recovering only 10-25% of them. We discovered that such stream corruption commonly exists for satellite communication in not only European areas but also Asian areas. In our experiment, using GSEract, we are only able to decode 2.1% satellite stream we eavesdropped on in Asia. This higher corruption is presumably because of surface reflection which is not an issue for maritime VSAT eavesdropping. 
However, in addition to surface reflection, we identified three more factors that universally influence satellite stream quality and are difficult to be eliminated. The traditional Finite-State Machine (FSM) based decoding approach, like the one implemented in GSEExtract, fundamentally cannot recover such highly-corrupted satellite streams. This is because it relies on critical information in networking packet headers (_e.g.,_ length field, CRC-32) to perform decoding layer by layer. When this critical information is corrupted, the decoding can hardly carry on. In this work, we propose to use a contrastive learning technique with data augmentation to recover satellite streams. Instead of relying on critical information for decoding, contrastive learning directly learns the features of packet headers at different layers and identifies them in a stream sequence. By filtering them out, we can extract the innermost payload that can be further analyzed by tools like Wireshark. Our approach further employs data augmentation to entitle the trained contrastive learning model with robustness against unseen corruptions. We implemented our approach and named it as CLEExtract. Our experiment shows that CLExtract can successfully recover 71-99% eavesdropped data hundreds of times faster speed than GSEExtract. Besides, the effectiveness of CLExtract is not largely damaged when corruption becomes more severe. Making eavesdropping more practical, we develop CLExtract to facilitate investigation in security and privacy issues in satellite communication. To foster future research, we will open-source CLExtract once this work is accepted. In summary, this paper makes the following contributions: * Analysis into factors that cause satellite stream corruption and challenges of corrupted stream recovery. * Proposed contrastive learning technique with data augmentation to recover corrupted satellite streams. * Open-sourced implementation of the proposed technique and evaluation of its effectiveness and efficiency using eavesdropped data in the real world In the following, we first describe the background in Section II, followed by the design overview in Section III. Then, we elaborate on the technical details in Section IV. The evaluation results and ethics are in Section V. Finally, we conclude our work in Section VII. ## II Background and Challenges In this section, we first introduce the protocol layers of the satellite stream and packet formats of each layer. Then, we describe four major factors that can corrupt the integrity of DVB/GSE packets, especially in the scenario of eavesdropping. Finally, we discuss the challenges of recovering corrupted streams. **DVB/GSE Protocol Format.** DVB (Digital Video Broadcasting) and its following DVB-S2 and S2X designed by ETSI (The European Telecommunications Standards Institute) are the de facto standard for the communication of most satellites. Data following this standard is encapsulated into continuous streams using GSE (Generic Stream Encapsulation) protocol. As shown in Figure 1, from innermost to outermost, the encapsulation consists of three protocol layers: IP layer, GSE layer, and DVB layer. The IP layer supports common transport layer protocols (_e.g.,_ TCP, ICMP, and MPEG). The IP header has a field recording which transport layer protocol is used (_e.g.,_ 0x06 for TCP and 0x01 for ICMP). All transport layer data along with the IP header are encapsulated into PDUs (Protocol Data Units). 
PDUs are further divided into several slots and stored as data fields in continuous GSE packets. A GSE packet starts with a header of variable length. Figure 1 shows its layout. The fixed header is in two bytes, recording whether the current GSE packet is the starting packet (_i.e.,_ S=1) or the ending packet (_i.e.,_ E=1) for a complete PDU. Besides, it includes a length field that stores the size of the current GSE packet. Traditional FSM-based decoding approach relies on this length field to pinpoint the start position of the next GSE packet. More information like Fragment ID and Protocol Type is stored in variable fields. Due to the space limit, we omit details of this part. Readers can refer to [3] for more information. In the ending GSE packet of a complete PDU, there is a CRC-32 tail as the error detection code of the PDU. With all GSE packets encapsulated, at the outermost DVB layer, several GSE packets are concatenated to fill in the data field of a Base Band (BB) Frame. The BB frame starts with a BB header the size of which are 10 bytes. The first two bytes of the header are the MATYPE field which describes the input stream format and the type of Mode Adaptation. The second two bytes are the UPL field storing user packet length in bits (up to 65535 bits). The following two bytes are the DFL field which holds the length of the data field (_i.e.,_ concatenated GSE packets) in bits (up to 58112 bits). Readers can refer to [2] for details. **Four Factors Causing Stream Corruption.** Given a complete DVB/GSE packet, it is straightforward to decode it using a traditional FSM-based approach (_e.g.,_ GSE extract[16]) which decodes layer by layer according to information in headers (_e.g.,_ data field length). The innermost PDUs (_i.e.,_ IP packets) extracted in this approach can be further analyzed by tools like Wireshark. However, the quality of radio communication between the satellite and the ground segment, especially in the scenario of eavesdropping, is under the influence of many factors, which makes the FSM-based approach can hardly carry on. The first factor is solar activities which result in perturbations of the ionosphere. This perturbation can change the density structure of the ionosphere by creating areas of enhanced density. This change reflects, refracts, and absorbs radio waves, leading to the loss of signal. According to NASA's report [1], when communication is interrupted, some satellites can be tumbled out of control for several hours, and weather images can be lost. The second factor causing signal attenuation and absorption are rainstorms, snow, and heavy winds near the ground, given that the signals from satellites are transmitted through air [4]. Higher-frequency bands tend to be more affected by rain because the wavelength itself is close in size to water molecules. The third factor is the surface reflection from distant reflectors such as mountains and large industrial infrastructures and local environmental effects including shadowing and blockage from objects and vegetation near the terminal [11]. The last factor is the quality of the antenna. Attackers performing eavesdropping can seldom afford professional antennae and lack experience in antenna alignment. Therefore, the DVB/GSE packets received through consumer-grade antenna are more likely to be of low quality in comparison with the ground station. **Challenges of Recovering Corrupted Stream.** Due to the nature of satellite radio communication, the four factors are not easy to eliminate. 
As we will show in the evaluation (Section V-A), oftentimes, the DVB/GSE packets received by eavesdroppers are highly corrupted. To make matters worse, eavesdroppers are passive attackers and cannot ask for re-transmission upon discovering that the received packets are corrupted. How to decode corrupted DVB/GSE packets and extract PDUs is therefore the key to the success of eavesdropping. In the following, we discuss the technical challenges in detail. Fig. 1: Protocol layers of the satellite stream and packet formats of each layer. The first challenge is broken streams. Eavesdropping may start from the middle of a BB frame, or packet headers may be missed because of bad weather. With such a broken stream, it is difficult to determine where to start decoding. One brute-force solution is to treat every byte as a potential header. However, it is impractical in the real world because the decoding speed is too low to keep up with the transmission speed: with best practices, we are only able to decode 609.71 MB in one hour, without identifying even one complete header in the highly corrupted stream we eavesdropped. The second challenge is corrupted critical fields. One critical field is the packet length. Even if we are able to identify the BB header or GSE header, for decoding, we need to determine the length of the data fields at each layer so as to extract PDUs. Unfortunately, these length fields are corrupted and thus untrustworthy. Besides, signal loss can happen. Therefore, we cannot use the size of inner packets to correct the length field in outer packets. Without the right length, the decoding can hardly carry on. Another critical field is the protocol type in IP packets because, once it is corrupted, Wireshark cannot determine which transport layer protocol is used and fails to analyze the PDUs. The third challenge is a nonfunctional CRC-32. The error detection code is designed to examine whether PDUs are corrupted. In a clear and high-quality communication channel, CRC-32 helps find corruption. However, our experimental results show that, in eavesdropping, corruption is so common that the CRC-32 itself can also be corrupted. Therefore, we cannot rely on it to verify our decoding results. ## III Design Overview To resolve the technical challenges discussed above, in this work, we propose to employ state-of-the-art contrastive learning techniques with data augmentation to decode and recover corrupted satellite streams. Instead of relying on critical fields or CRC-32 for decoding, contrastive learning directly learns the features of packet headers at different layers and identifies them in a stream sequence. By filtering them out, we can extract the innermost IP packets. Then we correct the transport layer protocol field in the IP headers so that the extracted IP packets can be further analyzed by tools like Wireshark. Figure 2 shows the workflow of our approach. To start, we first construct a training and testing dataset that consists of positive data (_i.e.,_ successfully decoded headers) and negative data (_i.e.,_ non-header streams). Applying the FSM-based GSExtract to the eavesdropped data, we collect positive data: BB headers, GSE headers, and IP headers that can be successfully decoded. Although successful decoding does not mean these headers are 100% correct, because the fields not used in FSM-based decoding can still be problematic, we deem these headers uncorrupted and use them for training.
This flaw in the training data, as we will describe in Section IV and show in the evaluation (Section V), will not influence the accuracy of our approach thanks to a pre-trained encoder network. Eliminating headers from the successfully decoded streams, we obtain negative data: non-header streams. With these two types of streams, we train our classification model. Given a sequence of bytes, this model can tell whether it is a header or not. We train three instances of this classification model to identify BB headers, GSE headers, and IP headers, respectively. We first apply the BB header model to divide the whole eavesdropped stream into BB frames. Within each BB frame, we apply the GSE header model to identify GSE layer packets. Then, we apply the IP header model to determine whether a GSE layer packet includes an IP header. With the extracted IP packets in hand, we use the Hamming distance to correct the transport layer protocol field in the IP header if it is corrupted. After correction, the PDUs can be fed to tools like Wireshark for further analysis. ## IV Technical Details In this section, we cover the technical details of our approach. First, we present the framework of our contrastive learning based classification model. Then, we scrutinize the framework, elaborating on how it pre-trains an encoder network and how the encoder network is used to fine-tune a classification model. ### _Contrastive Learning Based Classification Framework_ As a state-of-the-art machine learning paradigm, contrastive learning has achieved great success in the computer vision [7][10] and natural language processing [19][12] domains. Figure 3 illustrates the overall framework of our proposed contrastive learning based classification model. The first component of the framework is a self-supervised contrastive pre-training step that generates an encoder network. This encoder network is used in the second component to fine-tune a classification model that predicts whether the input stream is a header or not. As the essential part of this framework, the encoder network learns a fixed-dimensional vector representation for an input stream. Without going into technical details, Figure 4 illustrates the difference between the representations with and without a pre-trained encoder network. A pre-trained encoder network can learn the features of non-corrupted headers while clustering corrupted headers together with non-corrupted headers in the representation space. In comparison, an encoder network without pre-training will disperse corrupted and non-corrupted headers in the representation space, mixing them with non-header streams and thus hurting the effectiveness of the fine-tuned classification model. Therefore, the pre-trained encoder network is robust to corruption noise, not only fixing the flaw of our training data mentioned in Section III, but also equipping the classification model with the capacity to identify corrupted headers. Fig. 2: The recovery workflow. We first train a classification model using the eavesdropped data and then apply this model to identify headers, even if they are corrupted, and extract PDUs after correction. ### _Encoder Network and Contrastive Pre-train_ **Architecture of Encoder Network.** The encoder network maps the stream data to a fixed-dimensional vector. Formally, we denote it by \(f_{\theta}(x):\mathbb{R}^{T}\rightarrow\mathbb{R}^{D}\).
Here \(x\) is the input stream; \(T\) is the maximum length of possible headers; \(D\) is the dimensionality of the representation vector; \(\theta\) denotes the learnable parameters in the encoder network. In particular, for an input stream with a length smaller than \(T\), we first pad it with 0x0 to make sure that all inputs have the same length, _i.e._, \(T\). The architecture of the adopted encoder is shown in Figure 5. It consists of a fully connected layer (input layer), a 10-layer dilated 1-dimensional convolutional neural network (1DCNN) module [20], a fully connected layer (output layer), and a pooling layer. Compared to the vanilla 1DCNN, the dilated version has a larger receptive field to capture long-range dependencies. Formally, we have \[\begin{split}\mathbf{H}_{1}&=\text{InputLayer}(x)\\ \mathbf{H}_{2}&=\text{1DCNNs}(\mathbf{H}_{1})\\ \mathbf{H}_{3}&=\text{OutputLayer}(\mathbf{H}_{2})\\ \mathbf{z}&=\text{MeanPooling}(\mathbf{H}_{3}),\end{split} \tag{1}\] where \(\mathbf{H}_{1},\mathbf{H}_{2},\mathbf{H}_{3}\) are hidden representations, and \(\mathbf{z}\in\mathbb{R}^{D}\) is the representation vector. **Data Augmentation by Simulating Corruption.** To make the encoder network robust to corruption noise, we introduce data augmentation techniques. More specifically, we simulate corruption based on successfully decoded headers and add the corrupted headers to the pre-training dataset. Communication corruption is usually modeled as white Gaussian noise, and the noise added to each bit is independent of the others. In general, there are two types of corruption: one is bit flip and the other is bit loss. We simulate flip and loss with parameterized ratios \(\gamma_{1},\gamma_{2}\in[0,1]\). As shown in Algorithm 1, we first randomly sample an array \(\mathbf{r}\) of \(|x|\) real numbers from \((0,1)\) in Line 3, each element \(r_{i}\) corresponding to the \(i\)-th bit in \(x\). First, we determine whether a flip happens by comparing the sampled numbers with the ratio \(\gamma_{1}\). If \(r_{i}<\gamma_{1}\), we flip the \(i\)-th bit in \(x\) (Lines 4-6). Then, we determine whether a loss happens by comparing \(r_{i}\) with \(\gamma_{1}\) and \(\gamma_{1}+\gamma_{2}\). If \(\gamma_{1}\leq r_{i}<\gamma_{1}+\gamma_{2}\), we treat the bit as lost and fill it with 0 to simulate the receiver (Lines 7-9). In all other situations, we keep the bit. As such, the total corruption ratio is \(\gamma_{1}+\gamma_{2}\), with \(\gamma_{1}\) for flips and \(\gamma_{2}\) for losses, respectively. **Contrastive Pre-train.** Drawing inspiration from recent self-supervised learning algorithms in computer vision and natural language processing, we adopt SimCLR [7], a simple but effective contrastive learning framework, to learn representation vectors for input streams. SimCLR maximizes the agreement between differently augmented streams of the same data example by using a contrastive loss in the latent space. Specifically, it contrasts the augmented streams generated from the same input (_i.e._, positive pairs) by pulling them close in the representation space, while pushing apart the augmented streams generated from different inputs (_i.e.,_ negative pairs). Fig. 4: A diagram illustrating the necessity of pre-training for a robust encoder network. Fig. 5: The encoder network that maps the radio stream to a fixed-dimensional vector. Fig. 3: The framework of the proposed contrastive learning based classification model. The framework first pre-trains an encoder network and then uses it to fine-tune a supervised classification model.
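To make the corruption simulation above concrete, the following is a minimal Python sketch of the bit-flip/bit-loss augmentation described in Algorithm 1; the function name and the 0/1 bit-array representation are our own illustration under the stated assumptions, not the authors' released code.

```
import numpy as np

def simulate_corruption(bits, gamma1=0.1, gamma2=0.1, rng=None):
    """Corrupt a header given as a 0/1 bit array (sketch of Algorithm 1).

    Each bit is flipped with probability gamma1 and lost (filled with 0,
    as a receiver would do) with probability gamma2, so the expected total
    corruption ratio is gamma1 + gamma2.
    """
    rng = np.random.default_rng() if rng is None else rng
    bits = np.asarray(bits, dtype=np.uint8).copy()
    r = rng.random(bits.shape)                     # one random number per bit
    flip = r < gamma1                              # flip: invert the bit
    loss = (r >= gamma1) & (r < gamma1 + gamma2)   # loss: fill with 0
    bits[flip] ^= 1
    bits[loss] = 0
    return bits

# Two independently corrupted views of the same header form a positive pair
# for the contrastive pre-training step.
header = np.random.randint(0, 2, size=16)          # toy 16-bit example
view1, view2 = simulate_corruption(header), simulate_corruption(header)
```

In pre-training, each header in a mini-batch would be corrupted twice in this way, and the two views would be treated as a positive pair for the contrastive loss introduced next.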
Technically, for the contrastive pre-training objective, we follow SimCLR [7] and use the normalized temperature-scaled cross entropy loss (NT-Xent) as the contrastive loss. Algorithm 2 summarizes our contrastive pre-training procedure. Specifically, we utilize the cosine similarity to define the similarity between two vectors \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\). Formally, \[\text{s}(\mathbf{z}_{i},\mathbf{z}_{j})=\frac{\mathbf{z}_{i}\cdot\mathbf{z}_{j}}{||\mathbf{z}_{i}||_{2}\cdot||\mathbf{z}_{j}||_{2}} \tag{2}\] Given a batch of \(N\) stream instances, denoted by \(\{x_{i}\}_{i=1}^{N}\), we corrupt each instance twice to get \(2N\) augmented streams. The ones corrupted from the same \(x\) are considered positive pairs. For a positive pair \((\mathbf{z}_{i},\mathbf{z}_{j})\), the contrastive loss is \[\ell_{(i,j)}=-\log\frac{\exp(\text{s}(\mathbf{z}_{i},\mathbf{z}_{j})/\tau)}{\sum_{k=1}^{2N}\mathbbm{1}_{[k\neq i]}\exp(\text{s}(\mathbf{z}_{i},\mathbf{z}_{k})/\tau)}, \tag{3}\] where \(\tau\) is the temperature parameter and \(\mathbbm{1}_{[k\neq i]}\) is the indicator function defined as follows. \[\mathbbm{1}_{[k\neq i]}=\begin{cases}0&\quad\text{if }k=i\\ 1&\quad\text{if }k\neq i.\end{cases} \tag{4}\] Intuitively, minimizing \(\ell_{(i,j)}\) encourages the model to identify the positive partner \(\mathbf{z}_{j}\) among the \(2N\) vectors for a given \(\mathbf{z}_{i}\). The batch loss is then computed by averaging over all positive pairs in a mini-batch. Formally, we have \[\mathcal{L}=\frac{1}{2N}\sum_{i=1}^{N}[\ell_{(2i-1,2i)}+\ell_{(2i,2i-1)}]. \tag{5}\]
```
1: Input: a set of stream instances \(\{x_{i}\}\), batch size \(N\), temperature \(\tau\), encoder network \(f\);
2: Output: pre-trained encoder network \(f\);
3: for each minibatch \(\{x_{i}\}_{i=1}^{N}\) of stream instances do
4:   for each instance \(x_{i}\) do
5:     \(\tilde{x}_{i1}\leftarrow\) apply simulated corruptions on \(x_{i}\);
6:     \(\tilde{x}_{i2}\leftarrow\) apply simulated corruptions on \(x_{i}\);
7:     \(\mathbf{z}_{2i-1}\leftarrow f(\tilde{x}_{i1})\);
8:     \(\mathbf{z}_{2i}\leftarrow f(\tilde{x}_{i2})\);
9:   endfor
10:  compute batch loss \(\mathcal{L}\) with Eq. (5);
11:  update parameters in \(f\) by minimizing \(\mathcal{L}\).
12: endfor
13: return encoder network \(f\).
```
**Algorithm 2** Contrastive Pre-train ### _Supervised Fine-tune and Classification Model_ After pre-training, the encoder network is robust to noise. We build our classifier on top of it to identify headers in the corrupted streams. For efficiency, we extract the InputLayer and 1DCNNs from the pre-trained encoder network and stack them with 2 fully connected layers as the classifier. Then, we fine-tune the classification model with successfully decoded headers, labeled 1, and randomly sampled non-header streams, labeled 0, as supervision. During this step, we keep the InputLayer and 1DCNNs frozen and do not update the parameters in these layers. Since we have two labels, headers and non-headers, we adopt the binary cross entropy as the loss function. Finally, we fine-tune the model under supervision using Algorithm 3. To recover a corrupted transport layer protocol field in an IP header, we compute the Hamming distance between the potentially corrupted protocol value and the non-corrupted protocol values observed in the training dataset. The protocol value is automatically corrected to the candidate protocol with the smallest distance.
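As an illustration of this correction step, below is a minimal sketch of Hamming-distance matching over the one-byte protocol field; the candidate set and function names are illustrative assumptions (0x11 for UDP is an example value we add), not the exact implementation.

```
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two byte values."""
    return bin(a ^ b).count("1")

def correct_protocol(observed: int, candidates=(0x06, 0x01, 0x11)) -> int:
    """Map a possibly corrupted protocol byte to the closest known protocol.

    `candidates` stands in for the set of non-corrupted protocol values seen
    in the training data (e.g., 0x06 = TCP, 0x01 = ICMP, 0x11 = UDP).
    """
    return min(candidates, key=lambda c: hamming_distance(observed, c))

# Example: a single flipped bit turns TCP (0x06) into 0x07; the correction
# maps it back to the nearest candidate, 0x06.
assert correct_protocol(0x07) == 0x06
```

After this correction, the extracted IP packets carry a valid protocol value and can be dissected by Wireshark as usual.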
## V Evaluation and Ethics ### _Experiment Setup and Ethics_ In our experiment, we built a platform to eavesdrop on satellite communication data. This platform consists of a TBS-6903 Professional DVB-S2 Dual Tuner PCIe Card and a professionally customized Ku-band antenna that can automatically align the dish. The total cost of this platform is around $15k. We deployed the platform in a suburban area of a metropolis in Asia with a population of more than 10 million, eavesdropping on the spectrum range from 11 GHz to 12.75 GHz, which covers seven commercial satellites. The eavesdropping was conducted over 20 days from July to October. In total, we received 23.6 GB of DVB/GSE stream data. Considering that sensitive information could be included in the stream, we followed the ethical principles proposed by prior work [16], not storing any data longer than necessary. Even the learning model trained using the eavesdropped data was deleted immediately after completing the evaluation, to prevent adversarial data generation from the model (_e.g.,_ via GANs [9]). In the experiment, we treated data units in IP packets as normal payloads and did not make any attempt to decrypt them. For the sake of anonymity, we chose not to reveal the specific names of the satellites and service providers that were eavesdropped on. Though we will open-source our implementation CLExtract after this work is accepted, we withhold the publication of the training data. This is our best effort to bolster future research while respecting ethical constraints. Here, we advocate that service providers or authorities build an NVD-like (National Vulnerability Database) [5] repository that stores anonymized and desensitized data; it would significantly advance research on space security. From the eavesdropped data, we ran GSExtract to extract the BB frames that can be successfully decoded with an FSM-based approach. From these BB frames, we filter out BB headers, GSE headers, and IP headers, i.e., successfully decoded headers. Eliminating these headers, we obtain non-header streams. The two types of data constitute our training and testing dataset. Using this approach, we collected in total 10471 BB frames (0.5 GB) from all streams (23.6 GB). This low decoding success rate (2.1%) indicates that corruption is very common in satellite communication. From such highly corrupted streams, as we will further show in our evaluation, tools like GSExtract, which are built upon the traditional FSM-based decoding approach, can only recover a very small portion of data. With the training and testing data in hand, we evaluate CLExtract. We divide the whole dataset into two parts: 2/3 of the successfully decoded headers and non-header streams to train our model, and the remaining 1/3 to measure its **effectiveness**. During data augmentation in contrastive pre-training, we set the flip ratio and loss ratio to 10% each. To **compare** CLExtract with GSExtract, we apply both to streams with different corruption degrees and corruption types. More specifically, we synthesize corruption using the same algorithm as for data augmentation. Starting from 2%, we gradually increase the corruption degree \((\gamma_{1}+\gamma_{2})\) in steps of 2%, up to 20%. Meanwhile, we adjust the relative ratio flip\((\gamma_{1})\) : loss\((\gamma_{2})\) to 1:3, 1:1, and 3:1 to examine whether the type of corruption affects the robustness.
Since the model is trained on successfully decoded streams with data augmentation, there is no label leakage when the model is applied to corrupted streams, which can be considered a distinct dataset. To evaluate to what extent the **pre-trained network** with **data augmentation** improves the robustness, we compare the effectiveness of CLExtract with and without data augmentation. Finally, we run CLExtract and GSExtract using a server with an Intel(R) Xeon(R) Gold 6258R CPU @ 2.70GHz, 1.5TB RAM, and an NVIDIA A100 80GB PCIe GPU, showing the efficiency of CLExtract. Each experiment mentioned above is repeated for 10 rounds, and we report the average results. ### _Evaluation Results of_ CLExtract **Effectiveness.** We use four metrics to measure the effectiveness of CLExtract. The first metric is ACC, the ratio of the number of correct predictions to the total number of input samples. In our scenario, it is calculated as (TP+TN)/(TP+TN+FP+FN). True positive (TP) means the headers identified by CLExtract are indeed headers, and true negative (TN) means CLExtract does not mistakenly identify non-header streams as headers. The second metric is Precision, which is calculated as TP/(TP+FP). The third metric is Recall, which is TP/(TP+FN). The last metric is F1, the harmonic mean of Precision (P) and Recall (R), calculated as 2/F1 = 1/P + 1/R, i.e., F1 = 2PR/(P+R). F1 is high if and only if both Precision and Recall are high. In Table I, we show the effectiveness of CLExtract when the relative corruption ratio flip\((\gamma_{1})\) : loss\((\gamma_{2})\) is 1:1. Overall, the four metrics for header identification at different layers are all very high no matter how corrupted the stream is: all are above 0.71, while most are over 0.9. This indicates that CLExtract can accurately identify headers in a corrupted satellite stream. From the table, we can further observe that, in comparison with BB headers and IP headers, the effectiveness of CLExtract in identifying GSE headers is relatively poor. The reasons are two-fold. On the one hand, unlike BB headers, the length of GSE headers is variable, ranging from 2 bytes to 12 bytes. In training, we have to pad the short GSE headers with 0x0 so that they can be uniformly represented in the encoder network. However, the padded 0x0s carry no information and mislead CLExtract in classification if the non-header stream contains too many 0x0s. On the other hand, unlike IP headers, whose length is at least 20 bytes, GSE headers are relatively short. Therefore, the classification model cannot learn enough features during training. Even if the effectiveness of CLExtract in identifying GSE headers is not as good as in identifying BB headers and IP headers, CLExtract is still much better than GSExtract. In Table II, we compare the number of GSE headers identified by CLExtract with the number identified by GSExtract. Due to the variable length and small size discussed above, the number of GSE headers identified by CLExtract is slightly lower than that of GSExtract when the corruption degree is 0.02. However, when corruption becomes more severe, regardless of the ratio of corruption types, CLExtract successfully identifies many more GSE headers than GSExtract (15707 vs. 516 when \(\gamma_{1}+\gamma_{2}\)=20% and \(\gamma_{1}:\gamma_{2}\)=3:1).
As Table I shows the results when the ratio of corruption types flip\((\gamma_{1})\) : loss\((\gamma_{2})\) is 1:1, we present the results for different ratios in Figure 6 for BB headers, Figure 9 for GSE headers, and Figure 10 for IP Headers (Figure 9 and 10 are in Appendix A). From the three figures, we can observe the effectiveness of CLExtract in recovering corrupted streams, which aligns with the results in Table I and II. Fig. 6: Corresponds to Table I, effectiveness and robustness of BB header identification with different corruption degrees (\(\gamma_{1}+\gamma_{2}\)) and ratios (\(\gamma_{1}:\gamma_{2}\)). More corresponding results are in Figure 9 and 10. Fig. 7: Corresponds to Table II, comparison between CLExtract and GSExtract in BB header identification with different corruption degrees (\(\gamma_{1}+\gamma_{2}\)) and fixed ratio (\(\gamma_{1}:\gamma_{2}\)). More corresponding results are in Figure 11 and 12. **Robustness.** We measure the robustness of CLExtract against different corruption degrees and ratios of corruption types. Still, from Table I, we can see that the four effectiveness metrics are not lowered too much when corruption becomes more severe. Taking BB header as an example, its ACC, Precision, Recall, and F1 only drop 7%, 2%, 12%, and 7%, respectively, when the corruption degree increases from 2% to 20%. Similar trends are also presented in Figure 6, 9, and 10 that correspond to Table I. From Table II, we can see that, in general, CLExtract performs well when the ratio of corruption types changes. More specifically, CLExtract identifies 16734 GSE headers when the corruption degree is 2% and flip(\(\gamma_{1}\)) : loss(\(\gamma_{2}\)) is 1:3. If we fix the corruption degree and switch the ratio to 3:1, the number is 16682 which is only 50 fewer. When the corruption degree increases to 20%, this gap becomes 1017 which takes over 4.2% of the total number of GSE headers. In comparison, GSExtract is significantly influenced by the ratio of corruption types. From Table II, we can observe that the number of identified GSE headers drops 83.3% (2956 to 516) if the ratio switches from 1:3 to 3:1. It indicates that GSExtract is more likely to be dis-functioned by bit flip than bit loss. This is because when a bit loss happens, there is still a 50% chance that the value is correct. However, when a bit flip happens, the value becomes completely wrong. For GSExtract which heavily relies on critical information in headers, a wrong value can ruin the following decoding, leading to the missing of many headers. While for CLExtract, it learns header features as a whole and is less influenced by the information loss of bit flip. In Figure 7, we present the comparison in BB header identification with fixed \(\gamma_{1}:\gamma_{2}\). We can observe that when the corruption degree is 0.2, GSExtract almost malfunctions while CLExtract can identify around 80% BB headers. Similar results are shown in BB header and IP header identification with different ratios (Figure 11 and 12 in Appendix A). As such, we conservatively conclude that CLExtract shows strong robustness and can recover satellite streams even if it is corrupted up to at least 20%. **Data Augmentation.** Recall that we employ data augmentation to improve the robustness of CLExtract, here, we evaluate to which extent data augmentation achieves this goal. 
In Figure 8 (and Figures 13 and 14 in Appendix A), we present the effectiveness and robustness of identifying BB headers, GSE headers, and IP headers, with and without data augmentation. We can clearly see that data augmentation significantly improves the effectiveness and robustness of CLExtract, with the orange line lying above the green line. The only exception is the Precision metric, which is actually expected. Intuitively, data augmentation blurs the boundary between corrupted headers and non-header streams. Therefore, the model with data augmentation may mistakenly identify some non-header streams as corrupted headers, increasing FP when calculating the Precision metric (TP/(TP+FP)). However, this does not mean that data augmentation makes CLExtract perform worse. On the one hand, the difference between Precision with and without data augmentation is less than 5%. On the other hand, CLExtract aims to balance Precision and Recall (_i.e._, to avoid both false positives and false negatives). Therefore, in terms of F1, which combines Precision and Recall, data augmentation improves both effectiveness and robustness. **Efficiency.** In our evaluation, we compare CLExtract with GSExtract in terms of efficiency. With all eavesdropped data, it takes GSExtract 103,915.2 seconds to process the data and identify in total 10,471 BB headers, 24,287 GSE headers, and 10,479 IP headers, which form our training and testing set. In other words, the identification rate is 0.1 BB headers, 0.23 GSE headers, and 0.1 IP headers per second. Using this dataset, it takes nearly 14.5 hours to train all the models in CLExtract. Applying our models to the whole eavesdropped data, CLExtract spends 19,870.47 seconds processing it and identifying in total 498,819 BB headers, 1,156,412 GSE headers, and 497,566 IP headers. That is to say, CLExtract can identify 25.10 BB headers, 58.20 GSE headers, and 25.04 IP headers per second. Admittedly, some headers pinpointed by CLExtract can be false positives; still, from the statistics above, we can roughly estimate that CLExtract is hundreds of times faster than GSExtract. In addition to the larger number of headers identified by CLExtract, another reason for the efficiency improvement is that CLExtract can be parallelized while GSExtract must perform decoding in a sequential fashion. ## VI Limitation and Future Work This work has the following limitations. First, the effectiveness of GSE header identification is not as good as that of BB header identification and IP header identification. The reasons, as discussed in Section V-B, are the variable length and small size of GSE headers. To address this problem, in the future, we plan to incorporate bounding-box regression and intersection-over-union techniques [17] to further improve CLExtract. Second, we evaluate the robustness of GSExtract and CLExtract only up to a 20% corruption degree. We do not measure the performance when the corruption becomes more severe because 20% is high enough to render the data payload meaningless. However, as future work, we plan to further raise the corruption degree to examine how CLExtract performs in extreme cases. Third, to compare the efficiency of GSExtract and CLExtract, we count the number of headers identified per second. These numbers are accurate for GSExtract but include false positives for CLExtract. We cannot eliminate false positives because we do not have the ground truth of the eavesdropped satellite streams.
Though we can obtain a rough estimate from these numbers, to compare more precisely, we plan to simulate the radio communication with known ground truth in the future. We will also open-source our simulation platform to foster future research. ## VII Conclusion In this work, we design CLExtract to decode and recover highly corrupted satellite streams. CLExtract uses a contrastive learning technique with data augmentation to learn the features of packet headers at different protocol layers and identify them in a stream sequence. By filtering them out, CLExtract extracts the innermost data payload that can be further analyzed by tools like Wireshark. Compared with the state-of-the-art GSExtract, CLExtract successfully recovers 71-99% more eavesdropped data at hundreds of times the speed. Moreover, the effectiveness of CLExtract degrades only slightly when corruption becomes more severe.
2308.12002
Neural oscillators for magnetic hysteresis modeling
Hysteresis is a ubiquitous phenomenon in science and engineering; its modeling and identification are crucial for understanding and optimizing the behavior of various systems. We develop an ordinary differential equation-based recurrent neural network (RNN) approach to model and quantify the hysteresis, which manifests itself in sequentiality and history-dependence. Our neural oscillator, HystRNN, draws inspiration from coupled-oscillatory RNN and phenomenological hysteresis models to update the hidden states. The performance of HystRNN is evaluated to predict generalized scenarios, involving first-order reversal curves and minor loops. The findings show the ability of HystRNN to generalize its behavior to previously untrained regions, an essential feature that hysteresis models must have. This research highlights the advantage of neural oscillators over the traditional RNN-based methods in capturing complex hysteresis patterns in magnetic materials, where traditional rate-dependent methods are inadequate to capture intrinsic nonlinearity.
Abhishek Chandra, Taniya Kapoor, Bram Daniels, Mitrofan Curti, Koen Tiels, Daniel M. Tartakovsky, Elena A. Lomonova
2023-08-23T08:41:24Z
http://arxiv.org/abs/2308.12002v1
# Neural oscillators for magnetic hysteresis modeling ###### Abstract _Hysteresis_ is a ubiquitous phenomenon in science and engineering; its modeling and identification are crucial for understanding and optimizing the behavior of various systems. We develop an ordinary differential equation-based recurrent neural network (RNN) approach to model and quantify the hysteresis, which manifests itself in sequentiality and history-dependence. Our _neural oscillator_, _HystRNN_, draws inspiration from coupled-oscillatory RNN and phenomenological hysteresis models to update the hidden states. The performance of HystRNN is evaluated to _predict generalized scenarios_, involving first-order reversal curves and minor loops. The findings show the ability of HystRNN to generalize its behavior to previously untrained regions, an essential feature that hysteresis models must have. This research highlights the _advantage_ of neural oscillators over the traditional RNN-based methods in capturing complex hysteresis patterns in magnetic materials, where traditional rate-dependent methods are inadequate to capture intrinsic nonlinearity. ## 1 Introduction Magnetic hysteresis pertains to a prevalent observed phenomenon in ferromagnetic and ferrimagnetic materials where the change in magnetization response _lags behind_ variations in the applied magnetic field. Specifically, the hysteresis effect is characterized by a delay in the magnetic flux density (\(B\)) to changes in the applied magnetic field strength (\(H\)), exhibiting _history dependency_, nonlinearity and non-monotonicity [1]. The relationship between \(B\) and \(H\) fields is represented as a hysteresis curve \(\mathcal{C}\) (\(B-H\) curve), which plays a pivotal role in comprehending hysteresis and governing the magnetization process during alterations in \(H\) (Fig. 1). The hysteresis loop offers insights into material behavior; for instance, the area of the hysteresis loop signifies the energy dissipated as heat during each cycle of magnetization and demagnetization. Accurate hysteresis modeling is pivotal in enhancing the operational _efficiency_ of electrical machines. For instance, in engineering systems that involve the movement of cables, hysteresis requires more sophisticated control strategies to compensate for its effects [2]. Similarly, the efficiency of electrical machines is intrinsically linked to the _precise modeling_ of the hysteresis characteristics exhibited by the steel materials employed [3]. Incorporating a robust model would avoid the costly manufacturing of multiple prototypes. Mathematically, the primary objective of hysteresis modeling is to predict the sequence of \(B\) values that correspond to a given sequence of \(H\) values. However, the relationship between \(B\) and \(H\) defies the mathematical definition of a single-valued function. Consequently, conventional function approximation techniques are _not suitable_ for modeling hysteresis as a function with domain \(H\) and codomain \(B\)[4]. Traditionally, modeling the hysteresis behavior is based on fundamental principles of physics [5]. However, in practical engineering scenarios, the manifestation of hysteresis behavior often stems from complex, large-scale effects and the multiphysical nature of the system, rendering deterministic models _inaccurate_[5, 1]. Consequently, phenomenological models are employed, establishing connections between desired behaviors and specific underlying phenomena rooted in principles of thermodynamics or elasticity, for instance. 
Notable phenomenological models include the Preisach [6, 1], Jiles-Atherton [7], and Bouc-Wen models [8, 9]. Generalizing these models across disciplines and incorporating them into systematic modeling approaches, fitting them to experimental data, and integrating them into other mathematical models _pose significant challenges_[10], such as sophisticated optimization techniques and increased computational burden [11]. To mitigate these limitations of phenomenological models, feed-forward neural networks (FFNNs) are commonly used for modeling magnetic hysteresis [12, 13, 14, 15]. However, owing to the _absence of a functional relationship_ between \(B\) and \(H\) fields, the traditional FFNN approach with input \(H\) and output \(B\) is inadequate and suboptimal. Instead, studies [13, 14] propose \(H\) and \(B_{-1}\) as input and \(B\) as output during training, where \(B_{-1}\) denotes the previous \(B\) value [4]. The notation \(B_{-1}\) is discussed in detail in the 'Method' section. However, this approach is characterized by two notable limitations. First, it lacks the incorporation of _sequential information_ and fails to capture interdependencies among output values, hence not respecting the underlying physics of the problem. Second, this strategy exhibits _limitations in generalizing_ to new situations, as it relies on single-step prior information during training [16]. Consequently, these models _struggle to extrapolate_ to scenarios beyond training data, limiting broader applications that require robust generalization. To address the limitations faced by FFNNs, models centered on recurrent neural networks (RNNs) [4, 17] have been employed, which provide a _natural framework_ for modeling the sequential hysteretic nature. However, the models that employ traditional RNNs, gated recurrent unit (GRU) [18], long-short-term memory (LSTM) [19] and their variants exhibit _limitations_ regarding their ability to _generalize effectively_ to unseen \(H\) variations, as we present in the _current work_. Although these recurrent networks model the underlying relationship and predict hysteresis loops exceedingly well as an interpolation task, the _primary objective_ of achieving _robust generalization remains inadequately addressed_[20]. An optimal recurrent-based technique should excel in _both_ interpolation tasks and demonstrate reasonable accuracy in generalization, effectively predicting \(B\) sequences for unseen \(H\) sequences. A possible approach to accomplishing efficient generalization could be to enforce the recurrent architecture to _incorporate the underlying dynamics_. An efficient way to represent time-varying dynamics involves representation through ordinary differential equations (ODEs) and dynamical systems, recognized for their capacity to model diverse, intricate and nonlinear phenomena across natural, scientific, and engineering domains [21]. This inclusion of inherent dynamics, which should effectively _encapsulate crucial physical attributes_ of the underlying magnetic material, motivates us to employ a system of ODEs to update the hidden states of the recurrent architecture, referred to as _neural oscillators_. Recently, neural oscillators have shown significant success in machine learning and artificial intelligence and have been shown to handle the exploding and vanishing gradient problem effectively with high expressibility [22, 23, 24, 25, 26]. 
The recent universal approximation property [27] also supports our belief in modeling magnetic hysteresis with neural oscillators. Our neural oscillator, referred to as _HystRNN_ (hysteresis recurrent neural network), is influenced by the principles of the coupled-oscillator recurrent neural network (CoRNN) [22], which integrates a second-order ordinary differential equation (ODE) based on mechanical system principles. CoRNN considers factors such as oscillation frequency and damping in the system. However, for magnetic hysteresis, these physical attributes are less significant. Instead, we focus on embedding the hysteric nature within the ODE formulation. We _leverage phenomenological differential hysteresis models_ to accomplish this, recognizing that models like Bouc-Wen [8, 9], and Duhem [28] utilize the absolute value function to represent the underlying dynamics. By incorporating this function into our model, we can _effectively capture_ and control the shape of the hysteresis loop. This integration into the recurrent model is expected to _facilitate robust generalization_, as it inherently captures the shape of the hysteresis loop, preserving _symmetry_ and _structure_. In this manuscript, we model _nonoriented electrical steel_ (NO27) to test the validity of the proposed method for magnetic materials. The hysteresis loops employed and modeled in this work are acquired using the Preisach model for an Epstein frame. This model has been adjusted to adhere to the IEC standard, and the core was assembled using 16 strips of NO27-1450H material. More details about the Preisach model are provided in supplementary material **SM S**B. We train all our models on a _major loop_ (represented by a blue loop in Fig. 1(a)) and use the trained model to test two different generalization tasks: predicting _first-order reversal curves_ (FORCs, represented by red curves in Fig. 1(b)) and _minor loops_ (represented by a red loop in Fig. 1(c)). We perform experiments for _four different_\(B\)_fields with applications relevant to modeling electrical machines. The remainder of the manuscript is structured as follows. The section 'Generalization in hysteresis' presents the _challenge_ of this manuscript and explains how the task amounts to _generalization_. The section 'Method' formulates the proposed HystRNN method and explains it in detail. The section 'Numerical experiments' _validates_ the proposed method through a series of numerical experiments. Finally, the 'Conclusions' section collates the key findings and implications of this study. ## 2 Generalization in hysteresis Traditional supervised machine learning methodologies employed to model magnetic hysteresis train the model for \((H_{i},B_{i})\in\mathcal{C}_{1}\), where \(1\leq i\leq N\), \(i\in\mathbb{Z}\) and \(N\) is the number of training samples. The trained model is then tested on \((H_{k},B_{k})\in\mathcal{C}_{2}\), where \(1\leq k\leq M\), \(k\in\mathbb{Z}\) and \(M\) is the number of testing samples. Traditionally, \(\mathcal{C}_{2}\subset\mathcal{C}_{1}\), with \(H_{i}\neq H_{k}\). However, this prediction reduces to an _interpolation task_[29]. In _contrast_, we are interested in training the model for \((H_{i},B_{i})\in\mathcal{C}_{1}\) and predicting a hysteresis trajectory for \((H_{k},B_{k})\in\mathcal{C}_{2}\), where \(\mathcal{C}_{2}\nsubseteq\mathcal{C}_{1}\), and \(\mathcal{C}_{2}\cap\mathcal{C}_{1}=\phi\). Here, \(\phi\) denotes the null set. 
Precisely, we train all our models on the major loop (\(\mathcal{C}_{\mathrm{major}}\)) as shown in Fig. 1(a). Then the trained model is tested for two different scenarios. First the FORCs (\(\mathcal{C}_{\mathrm{FORC}}\)) shown in Fig. 1(b) and second the minor loops (\(\mathcal{C}_{\mathrm{minor}}\)) presented in Fig. 1(c). Modeling FORCs and minor hysteresis loops play a significant role in _analyzing_ magnetic materials. FORC modeling reveals intricate interactions, enabling the differentiation between magnetization components that can be reversed and those that cannot. This knowledge is pivotal in the optimization of magnetic devices like memory and sensor technologies [30; 31; 32]. Minor hysteresis loop modeling complements this by providing insights into localized variations in magnetic behavior. More insights on how predicting \(\mathcal{C}_{\mathrm{FORC}}\) and \(\mathcal{C}_{\mathrm{minor}}\) entails to a generalization task is provided in Fig. 2. In Fig. 2(a) and 2(b), the time series of \(H\) and \(B\) fields are shown, respectively. The blue curve represents the training data (\(H\) vs \(B\) is \(\mathcal{C}_{\mathrm{major}}\)), and the red curve represents the region in which the prediction is sought (\(\mathcal{C}_{\mathrm{FORC}}\) in this case). The black curve behind the shaded region signifies the _history_ that the material has gone through, which is the _series of magnetization and demagnetization_ that is _unknown_ at the time of testing. Hence, this task amounts to _extrapolation in time or predicting in a generalized scenario_. Similarly, Fig. 2(c) and 2(d) represent the case for \(\mathcal{C}_{\mathrm{major}}\) and \(\mathcal{C}_{\mathrm{minor}}\). ## 3 Method HystRNN utilizes a recurrent structure akin to RNNs, with the _difference_ being in the hidden state update. HystRNN _employs ODEs_ for updating the hidden states. The approach involves two inputs, \(H\) and \(B_{-1}\), which are mapped to \(B\). The modeling process begins by collecting \(N_{e}\) number of experimental data points (\(H_{i},B_{-1}:=B_{i}\)) for \(\mathcal{C}_{\mathrm{major}}\), where \(1\leq i\leq N_{e}\), and \(i\in\mathbb{Z}\). Subsequently, (\(H_{j},B_{k}\)) and \(B_{j}\) are taken as the input and output of HystRNN, respectively, where \(2\leq j\leq N_{e}\), \(1\leq k\leq N_{e}-1\), \(k=j-1\), and \(j,k\in\mathbb{Z}\). The number of training points is denoted by \(N=N_{e}-1\). While sharing certain similarities with some feedforward neural network (FFNN) architectures employed for modeling hysteresis, this training approach diverges by _incorporating a recurrent relationship_ that captures _longer-time dynamics_ and _output dependencies_, which are absent in FFNNs. Next, the hidden states of HystRNN are updated using the following second-order ODE Figure 1: \(B-H\) magnetic hysteresis curves (FORC = first-order reversal curve) \[\mathbf{y}^{\prime\prime}=\sigma_{1}\left(\mathbf{W}_{1}\mathbf{y}+ \boldsymbol{\mathcal{W}_{1}}\mathbf{y}^{\prime}+\mathbf{V}_{1}\mathbf{u}+\mathbf{ b}_{1}\right) \tag{1}\] \[+\sigma_{2}\left(\mathbf{W}_{2}|\mathbf{y}|^{2}+\boldsymbol{ \mathcal{W}_{2}}|\mathbf{y}^{\prime}|^{2}+\mathbf{V}_{2}|\mathbf{u}|^{2}+ \mathbf{b}_{2}\right).\] Here, the hidden state of the HystRNN is denoted by \(\mathbf{y}=\mathbf{y}(t)\in\mathbb{R}^{m}\). \(\mathbf{y}^{\prime}\) indicates a time derivative, while \(\mathbf{y}^{\prime\prime}\) indicates a second-order time derivative. 
\(\mathbf{W}_{1},\mathbf{W}_{2},\boldsymbol{\mathcal{W}_{1}},\boldsymbol{\mathcal{W}_{2}}\in\mathbb{R}^{m\times m}\), and \(\mathbf{V}_{1},\mathbf{V}_{2}\in\mathbb{R}^{m\times n}\) are the weight matrices, \(n=N\times 2\), and \(t\) corresponds to the time at which the training data has been collected. \(\mathbf{u}=\mathbf{u}(t)\in\mathbb{R}^{n}\) is the input to HystRNN. \(\mathbf{b}_{1},\mathbf{b}_{2}\in\mathbb{R}^{m}\) are the bias vectors. The activation functions \(\sigma_{1,2}:\mathbb{R}\mapsto\mathbb{R}\) are taken to be \(\sigma_{1,2}(u)=\tanh(u)\). By setting \(\mathbf{z}=\mathbf{y}^{\prime}(t)\in\mathbb{R}^{m}\), (1) becomes a system of first-order ODEs \[\begin{split}\mathbf{y}^{\prime}=\mathbf{z},\quad\mathbf{z}^{\prime}=\sigma_{1}\left(\mathbf{W}_{1}\mathbf{y}+\boldsymbol{\mathcal{W}_{1}}\mathbf{z}+\mathbf{V}_{1}\mathbf{u}+\mathbf{b}_{1}\right)\\ +\sigma_{2}\left(\mathbf{W}_{2}|\mathbf{y}|^{2}+\boldsymbol{\mathcal{W}_{2}}|\mathbf{z}|^{2}+\mathbf{V}_{2}|\mathbf{u}|^{2}+\mathbf{b}_{2}\right).\end{split} \tag{2}\] Discretizing the system of ODEs (2) using an explicit scheme for \(0<\Delta t<1\) leads to \[\begin{split}\mathbf{y}_{n}&=\mathbf{y}_{n-1}+\Delta t\mathbf{z}_{n},\\ \mathbf{z}_{n}&=\mathbf{z}_{n-1}+\Delta t\sigma_{1}\left(\mathbf{W}_{1}\mathbf{y}_{n-1}+\boldsymbol{\mathcal{W}_{1}}\mathbf{z}_{n-1}+\mathbf{V}_{1}\mathbf{u}_{n}+\mathbf{b}_{1}\right)\\ &\quad+\Delta t\sigma_{2}\left(\mathbf{W}_{2}|\mathbf{y}_{n-1}|^{2}+\boldsymbol{\mathcal{W}_{2}}|\mathbf{z}_{n-1}|^{2}+\mathbf{V}_{2}|\mathbf{u}_{n}|^{2}+\mathbf{b}_{2}\right).\end{split} \tag{3}\] Finally, the output \(\hat{B}\) is computed for each recurrent unit, where \(\hat{B}\in\mathbb{R}^{n}\) with \(\hat{B}=\mathcal{Q}\mathbf{y}_{n}\) and \(\mathcal{Q}\in\mathbb{R}^{n\times m}\); a minimal sketch of this discretized update is given below. ### Related work Oscillator networks play a pervasive role across natural and engineering systems, such as pendulums in classical mechanics, among other instances. A _notable trend_ is emerging where RNN architectures are constructed based on ODEs and dynamical systems [33; 34; 35; 23]. Our study is _closely associated_ with CoRNN, where the oscillation and damping factors are integrated into the model construction. _In contrast_, our approach _incorporates hysteretic terms_ into the model. Another recent study [36] demonstrates employing neural oscillators to extend the applicability of physics-informed machine learning.
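For concreteness, a minimal PyTorch-style sketch of the discretized update (3) could look as follows; the class name, the element-wise reading of \(|\cdot|^{2}\), and the grouping of each set of weight matrices and bias into a single linear layer are our own assumptions for illustration, not the authors' released code.

```
import torch
import torch.nn as nn

class HystRNNCell(nn.Module):
    """One step of the hidden-state update in Eq. (3) (illustrative sketch).

    y is the hidden state, z its time derivative, u the input (H and the
    previous B value). Each nn.Linear bundles one set (W, W', V, b) of Eq. (1).
    """
    def __init__(self, n_inp=2, n_hid=32, dt=0.05):
        super().__init__()
        self.dt = dt
        self.lin1 = nn.Linear(2 * n_hid + n_inp, n_hid)  # W1, W1', V1, b1
        self.lin2 = nn.Linear(2 * n_hid + n_inp, n_hid)  # W2, W2', V2, b2

    def forward(self, u, y, z):
        h1 = torch.tanh(self.lin1(torch.cat([y, z, u], dim=-1)))
        # |.|^2 taken element-wise (an assumption about the notation in Eq. (1))
        h2 = torch.tanh(self.lin2(torch.cat([y.abs() ** 2,
                                             z.abs() ** 2,
                                             u.abs() ** 2], dim=-1)))
        z = z + self.dt * (h1 + h2)   # second line of Eq. (3)
        y = y + self.dt * z           # first line of Eq. (3)
        return y, z
```

The prediction \(\hat{B}=\mathcal{Q}\mathbf{y}_{n}\) would then be obtained by applying an additional linear readout to the hidden state at every step; the default sizes above mirror the hyperparameters reported later (input size 2, hidden dimension 32, \(\Delta t=0.05\)).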
Our work shares similarities with this study, as both works aim to _generalize scientific machine_ learning and seek to predict quantities of interest beyond the scope of the training data _without_ relying on retraining or \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Test case} & \multicolumn{3}{c|}{L2-norm (\(\downarrow\))} & \multicolumn{3}{c|}{Explained variance score (\(\uparrow\))} & \multicolumn{3}{c|}{Max error (\(\downarrow\))} & \multicolumn{3}{c|}{Mean absolute error (\(\downarrow\))} \\ \cline{2-13} & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN \\ \hline \(\mathcal{C}_{\mathrm{FORC}_{1}}\) & 5.0204 & 0.8525 & 0.7764 & **0.2198** & -0.0721 & 0.1081 & 0.2007 & **0.8252** & 5.3597 & 2.4089 & 2.0075 & **1.2030** & 2.9888 & 1.1967 & 1.5550 & **0.6149** \\ \hline \(\mathcal{C}_{\mathrm{FORC}_{2}}\) & 6.4877 & 0.5253 & 0.4701 & **0.3088** & -0.2545 & 0.1875 & 2.0395 & **0.8844** & 5.3448 & 1.8428 & 1.8177 & **1.2371** & 3.6083 & 0.9327 & 0.8723 & **0.7613** \\ \hline \(\mathcal{C}_{\mathrm{minvert}}\) & 5.3506 & 1.4382 & 1.8028 & **0.4038** & -0.1013 & 0.0298 & 0.0776 & **0.9839** & 2.7641 & 1.7098 & 1.8925 & **0.3108** & 1.4877 & 0.7142 & 0.7797 & **0.1258** \\ \hline \(\mathcal{C}_{\mathrm{minvert}}\) & 12.3671 & 1.5785 & 2.0563 & **0.0786** & -2.7046 & 0.0248 & 0.0673 & **0.9661** & 3.7491 & 1.5726 & 1.7486 & **0.3630** & 1.9703 & 0.6544 & 0.7341 & **0.1450** \\ \hline \end{tabular} \end{table} Table 1: The generalization performance assessed using the metrics: L2-norm relative error, explained variance error, maximum error, and mean absolute error for the first experiment, where \(\max(B)=1.7\,\mathrm{T}\). For these metrics, higher (respectively, lower) values are favored for (\(\uparrow\)) (respectively, (\(\downarrow\))). The implication of arrows remains consistent for all the following Tables. Figure 2: Variation of magnetization and demagnetization in a magnetic material over time Figure 3: Experimental vs predicted hysteresis trajectories for experiment 1, where \(\max(B)=1.7\,\mathrm{T}\). The blue curve represents the training loop \(\mathcal{C}_{\mathrm{major}}\). The red curve represents the ground truth for \(\mathcal{C}_{\mathrm{FORC/minor}}\) and the black curve represents the prediction of the model. Top two rows: predictions for \(\mathcal{C}_{\mathrm{FORC}}\), and \(\mathcal{C}_{\mathrm{FORC}_{2}}\) respectively. Bottom two rows: predictions for \(\mathcal{C}_{\mathrm{minor}_{1}}\) and \(\mathcal{C}_{\mathrm{minor}_{2}}\) respectively. The colors are used consistently for the following figures. transfer learning methodologies. However, our work aims to predict trajectories of the hysteresis dynamics, whereas [36] predicts the solutions of partial differential equations in a generalized domain. ### Motivation The hidden state update in HystRNN is _motivated_ by the _differential models of hysteresis_[8; 9; 28] that describe the phenomenon, incorporating an _absolute value function_ to model the hysteretic nonlinearity. Examples of such phenomenological models include, but are not limited to, the Bouc-Wen and Duhem models, presented in **SM \(\$\)F**. These absolute valued components play a crucial role in capturing hysteretic characteristics. These terms allow the models to account for the different responses during magnetization and demagnetization, as well as the _effects of history_ on the behavior of the system. 
The inclusion of absolute valued terms enhances the ability of the model to capture the intricate dynamics of hysteresis and provides a more realistic representation of the observed phenomena. ## 4 Numerical Experiments A series of numerical experiments encompassing _four distinct scenarios_ is conducted, in which we systematically vary the upper limit of the magnetic field \(B\). The selection of diverse maximum \(B\) field values corresponds to the specific usage context of the material. As a result, these experiments are geared towards demonstrating the viability of the proposed methodology across a spectrum of electrical machines, all constrained by their respective permissible maximum \(B\) values. Precisely, we opt for the maximum \(B\) values of \(1.7\,\mathrm{T}\), \(1.5\,\mathrm{T}\), \(1.3\,\mathrm{T}\), and \(1.25\,\mathrm{T}\). _In each of these instances, we execute a total of four experiments_. After training for \(\mathcal{C}_{\mathrm{major}}\) until the \(B\) field reaches its saturation point, the experiments involve predicting \(\mathcal{C}_{\mathrm{FORC}}\) and \(\mathcal{C}_{\mathrm{minor}}\). Notably, the data set is generated from the Preisach model, which characterizes the behavior of non-oriented NO27 steel. It is imperative to preprocess the data before feeding it into the HystRNN model. We employ a normalization step using the min-max scaling technique, as elucidated in **SM \(\$\)C**. For all the numerical experiments, the software and hardware environments used for performing the experiments are as follows: Ubuntu 20.04.6 LTS, Python 3.9.7, Numpy 1.20.3, Scipy 1.7.1, Matplotlib 3.4.3, PyTorch 1.12.1, CUDA 11.7, and NVIDIA Driver 515.105.01, i7 CPU, and NVIDIA GeForce RTX 3080. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Test case} & \multicolumn{3}{c|}{L2-norm (\(\downarrow\))} & \multicolumn{3}{c|}{Explanated variance score (\(\uparrow\))} & \multicolumn{3}{c|}{Max error (\(\downarrow\))} & \multicolumn{3}{c|}{Mean absolute error (\(\downarrow\))} \\ \cline{2-13} & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN \\ \hline \(\mathcal{C}_{\mathrm{FORC}}\) & 0.670 & 0.9672 & 0.6652 & **0.0432** & 0.0541 & 0.5200 & 0.5497 & **0.9765** & 5.2184 & 1.5661 & 1.2194 & **0.8378** & 1.9992 & 0.1705 & 0.6247 & **0.1296** \\ \hline \(\mathcal{C}_{\mathrm{FORC}}\) & 2.7661 & 0.6730 & 0.4775 & **0.4837** & 0.0295 & 0.3691 & 0.4183 & **0.9785** & 3.2126 & 1.2992 & 0.9456 & **0.4055** & 2.1600 & 0.5800 & 0.5249 & **0.1371** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 10.2305 & 1.6042 & 0.9009 & **0.0301** & -0.0216 & 0.2669 & 0.2090 & **0.9774** & 2.27703 & 1.3222 & 0.9947 & **0.1857** & 1.7520 & 0.5923 & 0.4619 & **0.0855** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 18.2528 & 2.3069 & 1.0498 & **0.1580** & -0.1629 & 0.2528 & 0.2785 & **0.8780** & 2.5696 & 1.1249 & 0.8045 & **0.9066** & 1.7901 & 0.5446 & 0.3673 & **0.1491** \\ \hline \end{tabular} \end{table} Table 3: The generalization performance assessed using the metrics: L2-norm relative error, explained variance error, maximum error, and mean absolute error for the third experiment, where \(\max(B)=1.3\,\mathrm{T}\). 
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Test case} & \multicolumn{3}{c|}{L2-norm (\(\downarrow\))} & \multicolumn{3}{c|}{Explanated variance score (\(\uparrow\))} & \multicolumn{3}{c|}{Max error (\(\downarrow\))} & \multicolumn{3}{c|}{Mean absolute error (\(\downarrow\))} \\ \cline{2-13} & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN \\ \hline \(\mathcal{C}_{\mathrm{FORC}}\) & 5.3234 & 1.2308 & 0.6863 & **0.0109** & 0.1625 & 0.3566 & 0.4230 & **0.3989** & 2.6134 & 1.4922 & 1.0994 & **0.2370** & 1.6253 & 0.6871 & 0.5433 & **0.0563** \\ \hline \(\mathcal{C}_{\mathrm{FORC}}\) & 6.4038 & 0.9569 & 0.5098 & **0.0115** & 0.1576 & 0.3843 & 0.4693 & **0.9924** & 2.6684 & 1.2985 & 0.9055 & **0.2520** & 1.7484 & 0.7596 & 0.4857 & **0.0583** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 1.75971 & 1.6295 & 0.8069 & **0.0320** & 0.1497 & 0.2977 & 0.3268 & **0.9841** & 2.3788 & 1.3344 & 0.9456 & **0.1882** & 1.4996 & 0.5974 & 0.4396 & **0.0916** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 11.8529 & 2.1787 & 0.8859 & **0.1283** & 0.0616 & 0.2769 & 0.3091 & **0.9267** & 2.2669 & 1.1784 & 0.7925 & **0.2923** & 1.5243 & 0.5639 & 0.3653 & **0.1443** \\ \hline \end{tabular} \end{table} Table 2: The generalization performance assessed using the metrics: L2-norm relative error, explained variance error, maximum error, and mean absolute error for the second experiment, where \(\max(B)=1.25\,\mathrm{T}\). \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Test case} & \multicolumn{3}{c|}{L2-norm (\(\downarrow\))} & \multicolumn{3}{c|}{Explanated variance score (\(\uparrow\))} & \multicolumn{3}{c|}{Max error (\(\downarrow\))} & \multicolumn{3}{c|}{Mean absolute error (\(\downarrow\))} \\ \cline{2-13} & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN & RNN & LSTM & GRU & HystRNN \\ \hline \(\mathcal{C}_{\mathrm{FORC}}\) & 6.0907 & 0.9672 & 0.6652 & **0.0432** & 0.0541 & 0.5200 & 0.5497 & **0.9765** & 5.2184 & 1.5661 & 1.2194 & **0.8378** & 1.992 & 0.1705 & 0.6247 & **0.1276** \\ \hline \(\mathcal{C}_{\mathrm{ProC}}\) & 2.7661 & 0.6730 & 0.4775 & **0.0377** & 0.0295 & 0.3691 & 0.4183 & **0.9785** & 3.2126 & 1.2992 & 0.9456 & **0.4055** & 2.160 & 0.5800 & 0.5249 & **0.1371** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 10.2305 & 1.6042 & 0.9009 & **0.0301** & -0.0216 & 0.2669 & 0.2090 & **0.9774** & 2.2703 & 1.3222 & 0.9947 & **0.1857** & 1.7520 & 0.5923 & 0.4619 & **0.0885** \\ \hline \(\mathcal{C}_{\mathrm{minor}}\) & 18.2528 & Figure 4: Experimental vs predicted hysteresis trajectories for experiment 2, where \(\max(B)=1.25\,\mathrm{T}\). Top two rows: predictions for \(\mathcal{C}_{\mathrm{FORC}_{1}}\) and \(\mathcal{C}_{\mathrm{FORC}_{2}}\) respectively. Bottom two rows: predictions for \(\mathcal{C}_{\mathrm{minor}_{1}}\) and \(\mathcal{C}_{\mathrm{minor}_{2}}\) respectively. ### Baselines We introduce the concept of employing neural oscillators constructed within the framework of recurrent networks for modeling hysteresis. This choice is driven by the _intrinsic sequentiality and memory dependence_ characteristics of hysteresis. Additionally, recurrent architectures have been successfully employed for interpolation-type hysteresis modeling tasks across various fields [4; 17]. 
Consequently, motivated by these rationales, we conduct a comparative analysis of HystRNN with traditional recurrent networks such as RNN, LSTM, and GRU. We subject these models to a comprehensive comparison, specifically focusing on their performance in the context of generalization. Our experiments involve delving into the potential of these models by predicting outcomes for untrained \(H\) sequences, thereby exploring their capabilities for _generalization beyond trained data_. ### Hyperparameters The selected hyperparameters consist of an input size of 2, a single hidden layer with a dimension of 32, and an output size of 1. The optimization process involves the utilization of the Adam optimizer, with a learning rate of \(0.01\). Training is conducted for 10000 epochs, with a batch size of 1. The hyperparameter \(\Delta t\) is chosen to be \(0.05\). A sequence length of \(595\) is chosen for all four experiments to train \(\mathcal{C}_{\mathrm{major}}\). The determination of sequence length depends upon the data generated by the Preisach model for \(\mathcal{C}_{\mathrm{major}}\). _Uniformity_ in hyperparameter settings is maintained across all experiments. Furthermore, to ensure _fair comparisons_ with RNN, LSTM, and GRU models, the hyperparameters are also held constant for these methods. ### Evaluation metrics We evaluate the proposed method using four metrics. The first is the _L2-norm_, measuring the Euclidean distance between predicted and actual values. The _explained variance score_ indicates prediction accuracy, capturing variance proportion. _Maximum error_ detects significant prediction discrepancies as potential outliers. The _mean absolute error_ assesses average differences between predictions and actual values for overall precision. Lower L2-norm, maximum error, and mean absolute error coupled with higher explained variance signify improved performance. Metric expressions are detailed in **SM SSD**. ### Train and test criteria The trained model is tested in two distinct scenarios involving the prediction of two FORCs and two minor loops. For FORC prediction, testing sequences of lengths \(199\) and \(399\) are utilized, respectively. The prediction of minor loops involves a testing sequence with a length of \(399\) each. As in the case of training the model, these testing sequence lengths depend on the data generated from the Preisach model for evaluating the model. The HystRNN model trained on \(\mathcal{C}_{\mathrm{major}}\) is evaluated on \(\mathcal{C}_{\mathrm{FORC}}\) and \(\mathcal{C}_{\mathrm{minor}}\). This testing sequence is initiated with an input \((H_{i},B_{i})\in\mathcal{C}_{\mathrm{FORC/minor}}\), where both \(H_{i}\), and \(B_{i}\) are provided and \(B_{i+1}\) is predicted. The output generated from this step, \(B_{i+1}\), becomes the subsequent input along with \(H_{i+1}\), the known magnetization for the following sequence. Such testing holds _paramount importance as practical scenarios lack prior knowledge_ about the \(B\) values on \(\mathcal{C}_{\mathrm{FORC}}\) or \(\mathcal{C}_{\mathrm{minor}}\). Thus, the sole available information for generalization stems from the predicted solution in \(\mathcal{C}_{\mathrm{FORC}}\) or \(\mathcal{C}_{\mathrm{minor}}\). ### Experimental Results Four experiments are carried out to evaluate the performance of HystRNN. The experiments differ by the _maximum permitted magnetic flux density_ of the electrical machine \(B_{\mathrm{max}}\). 
Exact \(B_{\mathrm{max}}\) values are indicated for the experiments in **SM §D**. Performing experiments and exploring the generalization capabilities of the model for varying \(B_{\mathrm{max}}\) values is _crucial_ for understanding and optimizing the performance and efficiency of diverse machines. For instance, machines requiring lower magnetic flux densities of \(B_{\mathrm{max}}=1.25\,\mathrm{T}\), such as high-efficiency induction motors [37], are typically used in industrial settings for tasks like driving conveyor belts, pumps, and compressors. Meanwhile, high-performance applications, for instance, particle accelerators [38] and nuclear magnetic resonance [39], demand higher magnetic flux densities of \(B_{\mathrm{max}}=1.7\,\mathrm{T}\) for their operation. We perform experiments for \(B_{\mathrm{max}}\) in this spectrum and examine the models' _performance in generalized scenarios_, which is relevant for tailoring designs to diverse needs, ensuring energy efficiency for everyday devices, and pushing technological boundaries for cutting-edge systems. For all the experiments, the data for the major loop, \(\mathcal{B}_{\mathrm{major}}\), is collected until \(\mathrm{max}(B)\) reaches the saturation value \(B_{\mathrm{max}}\). HystRNN and the compared methods LSTM, GRU, and RNN are trained on \(\mathcal{B}_{\mathrm{major}}\). Once trained, the models are tested on four distinct cases. The first and second test cases correspond to estimating FORCs; we denote them as \(\mathcal{C}_{\mathrm{FORC}_{1}}\) and \(\mathcal{C}_{\mathrm{FORC}_{2}}\), respectively. Two distinct FORCs are chosen to study the effect of the distance between the origin of the FORC and \(B_{\mathrm{max}}\). The third and fourth test cases are performed for predicting minor loops, which we denote by \(\mathcal{C}_{\mathrm{minor}_{1}}\) and \(\mathcal{C}_{\mathrm{minor}_{2}}\), respectively. These minor loops vary based on the maximum value of \(B\) to which they are subjected. The origins of \(\mathcal{C}_{\mathrm{FORC}_{1}}\) and \(\mathcal{C}_{\mathrm{FORC}_{2}}\) and the maximum \(B\) values of the minor loops are provided in **SM §G**. Detailed performance metrics for HystRNN are outlined in Tables 1 to 4, corresponding to experiments 1 through 4, respectively. The tables also facilitate a comprehensive comparative analysis with RNN, LSTM, and GRU. The metrics notably emphasize the _superior_ performance of our proposed method HystRNN across all experimental scenarios.

#### 4.5.1 Experiment 1

In Fig. 3, the top two rows display the predictions of \(\mathcal{C}_{\mathrm{FORC}_{1,2}}\), respectively, wherein training exclusively occurs on \(\mathcal{C}_{\mathrm{major}}\), indicated by the blue color. Predictions are represented in black, and the ground truth is represented in red. The colors are kept consistent for all the following experiments. The top two rows show that LSTM (Fig. 3(a), 3(d)) and GRU (Fig. 3(b), 3(e)) fail drastically to capture the shape of the FORC accurately. In contrast, HystRNN effectively captures the _structure_ and _symmetry_ of the reversal curves, as shown in Fig. 3(c) and 3(f). The last two rows show the predictions for the minor loops \(\mathcal{C}_{\mathrm{minor}_{1,2}}\). For this case, too, the predictions from LSTM (Fig. 3(g), 3(j)) and GRU (Fig. 3(h), 3(k)) are inaccurate.
Neither LSTM nor GRU could form a closed loop for the predicted trajectory, which poses a major challenge for computing the energy loss, since the loss depends on the area enclosed by the hysteresis loop. In contrast, our proposed method HystRNN predicts the _structure_ of the loop very well and efficiently models the minor loop (Fig. 3(i), 3(l)).

#### 4.5.2 Experiment 2

The top two rows of Fig. 4 show that LSTM (Fig. 4(a), 4(d)) and GRU (Fig. 4(b), 4(e)) fail to predict the FORC trajectory by a large margin. On the other hand, HystRNN shows close agreement with the ground truth for predicting \(\mathcal{C}_{\mathrm{FORC}_{1,2}}\) (Fig. 4(c), 4(f)). Also, the prediction of HystRNN for \(\mathcal{C}_{\mathrm{FORC}_{1}}\) is slightly better than for \(\mathcal{C}_{\mathrm{FORC}_{2}}\), which illustrates that the model performs better when the origin of the FORC is closer to \(\max(B)\). A possible reason for this behavior could be the resemblance between the trajectory of \(\mathcal{C}_{\mathrm{major}}\) and a FORC originating from a higher value. The last two rows of Fig. 4 present the predictions of \(\mathcal{C}_{\mathrm{minor}_{1,2}}\), respectively. For this case, LSTM (Fig. 4(g), 4(j)) and GRU (Fig. 4(h), 4(k)) almost form a loop-like shape; however, they deviate substantially from the ground truth. HystRNN, on the other hand, captures the loop shape efficiently, as presented in Fig. 4(i) and 4(l).

#### 4.5.3 Experiment 3

The predictions for the model, the ground truth, and the training data are presented in Fig. 5. As shown in Fig. 5(a), 5(b), 5(d), and 5(e), the predictions by the LSTM and GRU models lack accuracy. In contrast, the predictions of our model HystRNN for the reversal curve are notably precise, as evidenced in Fig. 5(c) and 5(f). The final two rows of Fig. 5 show that HystRNN accurately captures the characteristics of the minor loop, as showcased in Fig. 5(i) and 5(l). The GRU prediction captures a resemblance of the loop, although not entirely, as revealed in Fig. 5(h) and 5(k). On the other hand, LSTM performs poorly, failing to capture the intricate structure of the minor loop, as depicted in Fig. 5(g) and 5(j).

#### 4.5.4 Experiment 4

The predictions for the model, the ground truth, and the training data are presented in Fig. 6. Predictions of the reversal curve are consistent with the behavior observed in the previous experiments. In this case, \(\max(B)\), the origin of \(\mathcal{C}_{\mathrm{FORC}_{2}}\), and the maximum \(B\) value of \(\mathcal{C}_{\mathrm{minor}_{2}}\) vary significantly, posing a challenge for both LSTM and GRU. However, HystRNN outperforms them in each case, as shown in Fig. 6. The results underscore the performance of HystRNN, as in none of the cases is the accuracy of LSTM or GRU comparable to that of our proposed method. Additional visual results for all the RNN experiments are provided in **SM §E**.

## 5 Conclusions

We introduced a novel neural oscillator, _HystRNN_, aimed at _advancing_ magnetic hysteresis modeling within _extrapolated regions_. The proposed oscillator builds on coupled-oscillator recurrent neural networks and is _inspired by_ phenomenological hysteresis models. HystRNN was validated by predicting first-order reversal curves and minor loops after training the model _solely_ with major loop data.
The outcomes underscore the _superiority_ of HystRNN in adeptly capturing intricate nonlinear dynamics, _outperforming_ conventional recurrent neural architectures such as RNN, LSTM, and GRU on _various metrics_. This performance is attributed to its capacity to assimilate sequential information, history dependencies, and hysteretic features, ultimately achieving strong generalization. Access to the code and data will be provided upon publication.
2310.07123
Off-Policy Evaluation for Human Feedback
Off-policy evaluation (OPE) is important for closing the gap between offline training and evaluation of reinforcement learning (RL), by estimating performance and/or rank of target (evaluation) policies using offline trajectories only. It can improve the safety and efficiency of data collection and policy testing procedures in situations where online deployments are expensive, such as healthcare. However, existing OPE methods fall short in estimating human feedback (HF) signals, as HF may be conditioned over multiple underlying factors and is only sparsely available; as opposed to the agent-defined environmental rewards (used in policy optimization), which are usually determined over parametric functions or distributions. Consequently, the nature of HF signals makes extrapolating accurate OPE estimations to be challenging. To resolve this, we introduce an OPE for HF (OPEHF) framework that revives existing OPE methods in order to accurately evaluate the HF signals. Specifically, we develop an immediate human reward (IHR) reconstruction approach, regularized by environmental knowledge distilled in a latent space that captures the underlying dynamics of state transitions as well as issuing HF signals. Our approach has been tested over two real-world experiments, adaptive in-vivo neurostimulation and intelligent tutoring, as well as in a simulation environment (visual Q&A). Results show that our approach significantly improves the performance toward estimating HF signals accurately, compared to directly applying (variants of) existing OPE methods.
Qitong Gao, Ge Gao, Juncheng Dong, Vahid Tarokh, Min Chi, Miroslav Pajic
2023-10-11T01:52:42Z
http://arxiv.org/abs/2310.07123v2
# Off-Policy Evaluation for Human Feedback ###### Abstract Off-policy evaluation (OPE) is important for closing the gap between offline training and evaluation of reinforcement learning (RL), by estimating performance and/or rank of target (evaluation) policies using offline trajectories only. It can improve the safety and efficiency of data collection and policy testing procedures in situations where online deployments are expensive, such as healthcare. However, existing OPE methods fall short in estimating human feedback (HF) signals, as HF may be conditioned over multiple underlying factors and is only sparsely available; as opposed to the agent-defined environmental rewards (used in policy optimization), which are usually determined over parametric functions or distributions. Consequently, the nature of HF signals makes extrapolating accurate OPE estimations to be challenging. To resolve this, we introduce an OPE for HF (OPEHF) framework that revives existing OPE methods in order to accurately evaluate the HF signals. Specifically, we develop an immediate human reward (IHR) reconstruction approach, regularized by environmental knowledge distilled in a latent space that captures the underlying dynamics of state transitions as well as issuing HF signals. Our approach has been tested over _two real-world experiments_, adaptive _in-vivo_ neurostimulation and intelligent tutoring, as well as in a simulation environment (visual Q&A). Results show that our approach significantly improves the performance toward estimating HF signals accurately, compared to directly applying (variants of) existing OPE methods. ## 1 Introduction Off-policy evaluation (OPE) aims to estimate the performance of reinforcement learning (RL) policies using only a fixed set of offline trajectories [61], _i.e._, without online deployments. It is considered to be a critical step in closing the gap between offline RL training and evaluation, for environments and systems where online data collection is expensive or unsafe. Specifically, OPE facilitates not only offline evaluation of the safety and efficacy of the policies ahead of online deployment, but policy selection as well; this allows one to maximize the efficiency when online data collection is possible, by identifying and deploying the policies that are more likely to result in higher returns. OPE has been used in various application domains including healthcare [68; 53; 23; 22], robotics [15; 18; 24], intelligent tutoring [64; 45; 17], recommendation systems [50; 43]. The majority of existing OPE methods focus on evaluating the policies' performance defined over the _environmental_ reward functions which are mainly designed for use in policy optimization (training). However, as an increasing number of offline RL frameworks are developed for human-involved systems [64; 45; 1; 48; 16], existing OPE methods lack the ability to estimate how human users would evaluate the policies, _e.g._, ratings provided by patients (on a scale of 1-10) over the procedure facilitated by automated surgical robots; as human feedback (HF) can be noisy and conditioned over various confounders that could be difficult to be captured explicitly [53; 7; 44]. 
For example, patient satisfaction over a specific diabetes therapy may vary across the cohort, depending on many subjective factors, such as personal preferences and activity level of the day, while participating in the therapy, in addition to the physiological signals (_e.g._, blood sugar level, body weight) that are more commonly used as the sources for determining environmental rewards toward policy optimization [70; 33; 21; 19]. Moreover, the environmental rewards are sometimes discrete to ensure optimality of the learned policies [67], which further reduces its correlation against HF signals. In this work, we introduce the OPE for human feedback (OPEHF) framework that revives existing OPE approaches in the context of evaluating HF from offline data. Specifically, we consider the challenging scenario where the HF signal is only provided at the end of each episode - _i.e._, no per-step HF signals, referred to as _immediate human rewards_ (IHRs) below, are provided - benchmarking the common real-world situations where the participants are allowed to rate the procedures only at the end of the study. The goal is set to estimate the end-of-episode HF signals, also referred to as _human returns_, over the target (evaluation) policies, using a fixed set of offline trajectories collected over some behavioral policies. To facilitate OPEHF, we introduce an approach that first maps the human return back to the sequence of IHRs, over the horizon, for each trajectory. Specifically, this follows from optimizing over an objective that consists of a necessary condition where the cumulative discounted sum of IHRs should equal the human return, as well as a regularization term that limits the discrepancy of the reconstructed IHRs over state-action pairs that are determined similar over a latent representation space into which environmental transitions and rewards are encoded. At last, this allows for the use of any existing OPE methods to process the offline trajectories with reconstructed IHRs and estimate human returns under target policies. Our main contributions are tri-fold. **(i)** We introduce a novel OPEHF framework that revives existing OPE methods toward accurately estimating highly sparse HF signals (provided only at the end of each episode) from offline trajectories, through IHRs reconstruction. **(ii)** Our approach does not require the environmental rewards and the HF signals to be strongly correlated, benefiting from the design where both signals are encoded to a latent space regularizing the objective for reconstructions of IHRs, which is justified empirically over real-world experiments. **(iii)** Two _real-world experiments_, _i.e._, adaptive _in-vivo_ neurostimulation for the treatment of Parkinson's disease and intelligent tutoring for computer science students in colleges, as well as one simulation environment (_i.e._, visual Q&A), facilitated the thorough evaluation of our approach; various degrees of correlations between the environment rewards and HF signals existed across the environments, as well as the varied coverage of the state-action space provided by offline data over sub-optimal behavioral policies, imposing different levels of challenges for OPEHF. ## 2 Off-Policy Evaluation for Human Feedback (OPEHF) In this section, we introduce an OPEHF framework that allows for the use of existing OPE methods to estimate the _human returns_ that are available only at the end of each episode, with IHRs remaining unknown. 
This is in contrast to the goal of classic OPE that only estimates the _environmental_ returns following the user-defined reward function used in the policy optimization phase. A brief overview of existing OPE methods can be found in Appendix C. ### Problem Formulation We first formulate the human-involved MDP (HMDP), which is a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},R,R^{\mathcal{H}},s_{0},\gamma)\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) the set of actions, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the transition distribution usually captured by probabilities \(p(s_{t}|s_{t-1},a_{t-1})\), \(R:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the _environmental_ reward function, \(R^{\mathcal{H}}(r^{\mathcal{H}}|s,a)\) is the _human_ reward distribution from which the IHR \(r^{\mathcal{H}}_{t}\sim R^{\mathcal{H}}(\cdot|s_{t},a_{t})\) are sampled, \(s_{0}\) is the initial state sampled from the initial state distribution \(p(s_{0})\), and \(\gamma\in[0,1)\) is the discounting factor. Note that we set the IHRs to be determined probabilistically, as opposed to the environmental rewards \(r_{t}=R(s_{t},a_{t})\) that are deterministic; this is due to the fact that many underlying factors may affect the feedback provided by humans [53; 7; 44], as we have also observed while performing human-involved experiments (see Appendix D). Finally, the agent interacts with the MDP following some policy \(\pi(a|s)\) that defines the probabilities of taking action \(a\) at state \(s\). In this work, we make the following assumption over \(R\) and \(R^{\mathcal{H}}\). **Assumption 1** (Unknown IHRs).: _We assume that the immediate environmental reward function \(R\) is known and \(R(s,a)\) can be obtained for any state-action pairs in \(\mathcal{S}\times\mathcal{A}\). Moreover, the IHR distribution \(R^{\mathcal{H}}\) is assumed to be unknown, i.e., \(r^{\mathcal{H}}\sim R^{\mathcal{H}}(\cdot|s,a)\) are unobservable, for all \((s,a)\in\mathcal{S}\times\mathcal{A}\)._ _Instead, the cumulative human return \(G_{0:T}^{\mathcal{H}}\), defined over \(R^{\mathcal{H}}\), is given at the end of each trajectory, i.e., \(G_{0:T}^{\mathcal{H}}=\sum_{t=0}^{T}\gamma^{t}r_{t}^{\mathcal{H}}\), with \(T\) being the horizon and \(r_{t}^{\mathcal{H}}\sim R^{\mathcal{H}}(\cdot|s_{t},a_{t})\)._ The assumption above follows the fact that human feedback (HF) is not available until the end of each episode, as opposed to immediate rewards that can be defined over the environment and evaluated for any \((s_{t},a_{t})\) pairs at any time. This is especially true in environments such as healthcare where the clinical treatment outcome is not foreseeable until a therapeutic cycle is completed, or in intelligent tutoring where the overall gain from students over a semester is mostly reflected by the final grades. Note that although the setup can be generalized to the scenario where HF can be sparsely obtained over the horizon, we believe that issuing the HF only at the end of each trajectory leads to a more challenging setup for OPE. Consequently, the goal of OPEHF can be formulated as follows. 
**Problem 1** (Objective of OPEHF).: _Given offline trajectories collected by some behavioral policy \(\beta\), \(\rho^{\beta}=\{\tau^{(0)},\tau^{(1)},\ldots,\tau^{(N-1)}|\ a_{t}\sim\beta(a_{t}|s_{t})\}\), with \(\tau^{(i)}=[(s_{0}^{(i)},a_{0}^{(i)},r_{0}^{(i)},r_{0}^{\mathcal{H}(i)},s_{1}^{(i)}),\ldots,(s_{T-1}^{(i)},a_{T-1}^{(i)},r_{T-1}^{(i)},r_{T-1}^{\mathcal{H}(i)},s_{T}^{(i)}),G_{0:T}^{\mathcal{H}(i)}]\) being a single trajectory, \(N\) the total number of offline trajectories, and \(r_{t}^{\mathcal{H}}\)'s being unknown, the objective is to estimate the expected total human return over the unknown state-action visitation distribution \(\rho^{\pi}\) of the target (evaluation) policy \(\pi\), i.e., \(\mathbb{E}_{(s,a)\sim\rho^{\pi},r^{\mathcal{H}}\sim R^{\mathcal{H}}}\left[\sum_{t=0}^{T}\gamma^{t}r_{t}^{\mathcal{H}}\right]\)._

### Reconstruction of IHRs for OPEHF

We emphasize that the human returns are only issued at the end of each episode, with IHRs remaining unknown. One can set all IHRs from \(t=0\) to \(t=T-2\) to be zeros (_i.e._, \(r_{0:T-2}^{\mathcal{H}}=0\)), and _rescale_ the cumulative human return to be the IHR at the last step (_i.e._, \(r_{T-1}^{\mathcal{H}}=G_{0:T}^{\mathcal{H}}/\gamma^{T-1}\)), to allow the use of existing OPE methods toward OPEHF (Problem 1). However, the sparsity over the \(r^{\mathcal{H}}\)'s here may make it difficult for OPE to estimate the human returns accurately under the target policies. For OPEHF, we start by showing that for the per-decision importance sampling (PDIS) method, a variance-reduction variant of the importance sampling (IS) family of OPE methods [61], if IHRs _were to be available_, they could reduce the variance in the estimation compared to the rescale approach above. Recall that the PDIS estimator follows \(\hat{G}_{PDIS}^{\pi}=\frac{1}{N}\sum_{i=0}^{N-1}\sum_{t=0}^{T-1}\gamma^{t}\omega_{0:t}^{(i)}r_{t}^{\mathcal{H}(i)}\), where \(\omega_{0:t}^{(i)}=\prod_{k=0}^{t}\frac{\pi(a_{k}^{(i)}|s_{k}^{(i)})}{\beta(a_{k}^{(i)}|s_{k}^{(i)})}\) are the PDIS weights for offline trajectory \(\tau^{(i)}\). Moreover, the estimator of the rescale approach3 above is \(\hat{G}_{Rescale}^{\pi}=\frac{1}{N}\sum_{i=0}^{N-1}\omega_{0:T-1}^{(i)}G_{0:T}^{\mathcal{H}(i)}\), which is equivalent to the vanilla IS estimator [61, 72]. We now show the variance reduction property of \(\hat{G}_{PDIS}^{\pi}\) in the context of OPEHF.

Footnote 3: We call it the _rescale approach_ instead of vanilla IS as the idea behind it also generalizes to non-IS methods.

**Proposition 1**.: _Assume that (i) \(\mathbb{E}[r_{t}^{\mathcal{H}}]\geq 0\), and (ii) given the horizon \(T\), for any \(1\leq t+1\leq k\leq T\) within any offline trajectory \(\tau\), \(\omega_{0:k}\) and \(r_{t}^{\mathcal{H}}\omega_{0:k}\) are positively correlated. Then, \(\mathbb{V}(\hat{G}_{PDIS}^{\pi})\leq\mathbb{V}(\hat{G}_{Rescale}^{\pi})\), with \(\mathbb{V}(\cdot)\) representing the variance._

The proof can be found in Appendix A. Assumption (_i_) can be easily satisfied in the real world, as HF signals are usually quantified as positive values, _e.g._, ratings (1-10) provided by participants. Assumption (_ii_) is most likely to be satisfied when the target policies do not visit low-return regions substantially [46], which is a pre-requisite for testing RL policies in human-involved environments, as initial screenings are usually required to filter out the ones that could potentially pose risks to participants [57].
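To make the two estimators above concrete, they can be written in a few lines of NumPy. The trajectory container below (per-step target and behavioral action probabilities, per-step human rewards, which in practice would be the reconstructions introduced below, and the end-of-episode human return) is an assumed layout for illustration, not the format used in the paper's implementation.

```python
import numpy as np

def pdis_estimate(trajectories, gamma=0.99):
    """Per-decision IS over (reconstructed) immediate human rewards."""
    estimates = []
    for tau in trajectories:
        ratios = np.asarray(tau["pi_probs"]) / np.asarray(tau["beta_probs"])  # pi(a_t|s_t) / beta(a_t|s_t)
        w = np.cumprod(ratios)                                                 # omega_{0:t}
        t = np.arange(len(w))
        estimates.append(np.sum(gamma ** t * w * np.asarray(tau["r_hat_h"])))
    return float(np.mean(estimates))

def rescale_estimate(trajectories, gamma=0.99):
    """Vanilla-IS-style baseline that weights the whole episode human return G^H."""
    estimates = []
    for tau in trajectories:
        w_full = np.prod(np.asarray(tau["pi_probs"]) / np.asarray(tau["beta_probs"]))  # omega_{0:T-1}
        estimates.append(w_full * tau["G_h"])
    return float(np.mean(estimates))
```

Under Proposition 1, the first estimator is the one expected to exhibit lower variance whenever per-step IHRs (or good reconstructions of them) are available.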
Besides IS, doubly robust (DR) [71, 34, 69, 12] and fitted Q-evaluation (FQE) [40] methods require learning value functions. Sparsity of rewards (following the rescale approach above) in the offline dataset may lead to poorly learned value functions [74], considering that the offline data in OPE is usually fixed (_i.e._, no new samples can be added), and are often generated by behavioral policies that are sub-optimal, which results in limited coverage of the state-action space. Limited availabilities of environment-policy interactions (_e.g._, clinical trials) further reduce the scale of the exploration and therefore limit the information that can be leveraged toward obtaining accurate value function approximations. **Reconstruction of IHRs.** To address this challenge, our approach aims to project the end-of-episode human returns back to each environmental step, _i.e._, to learn a mapping \(f_{\theta}(\tau,G_{0:T}^{\mathcal{H}}):(\mathcal{S}\times\mathcal{A})^{T}\times\) \(\mathbb{R}\rightarrow\mathbb{R}^{T}\), parameterized by \(\theta\), that maximizes the sum of log-likelihood of the estimated IHRs, \([\hat{r}_{0}^{\mathcal{H}},\ldots,\hat{r}_{T-1}^{\mathcal{H}}]^{\intercal} \sim f_{\theta}(\tau,G_{0:T}^{\mathcal{H}})\), following \(\max_{\theta}\frac{1}{N}\sum_{i=0}^{N-1}\sum_{t=0}^{T-1}\log p(\hat{r}_{t}^{ \mathcal{H}}=r_{t}^{\mathcal{H}(i)}|\theta,\tau^{(i)},G_{0:T}^{\mathcal{H}(i)})\), where \(G_{0:T}^{\mathcal{H}(i)}\) and \(r_{t}^{\mathcal{H}(i)}\)'s are respectivelly the human return and IHRs (unknown) of the \(i\)-th trajectory in the offline dataset \(\rho^{\beta}\), and \(N\) is the total number of trajectories in \(\rho^{\beta}\). Given that the objective above is intractable due to unknown \(r_{t}^{\mathcal{H}(i)}\)'s, we introduce a surrogate objective \[\max_{\theta}\frac{1}{N}\sum_{i=0}^{N-1}\Big{[}\log p\Big{(}\sum_{t=0}^{T-1} \gamma^{t}\hat{r}_{t}^{\mathcal{H}}=G_{0:T}^{\mathcal{H}(i)}|\theta,\tau^{(i) },G_{0:T}^{\mathcal{H}(i)}\Big{)}-C\cdot\mathcal{L}_{regu}(\hat{r}_{0:T-1}^{ \mathcal{H}}|\theta,\tau^{(i)},G_{0:T}^{\mathcal{H}(i)})\Big{]}. \tag{1}\] Here, the _first term_ is a necessary condition for \(\hat{r}_{t}^{\mathcal{H}}\)'s to be valid for estimating \(r_{t}^{\mathcal{H}}\)'s, as they should sum to \(G_{0:T}^{\mathcal{H}}\). Since many solutions may exist if one only optimizes over the first term, the _second term_\(\mathcal{L}_{regu}\) serves as a regularization that imposes constraints on \(r_{t}^{\mathcal{H}}\)'s to follow the properties specific to their corresponding state-action pairs; _e.g._, \((s,a)\) pairs that are similar to each other in a representation space, defined over the state-action visitation space, tend to yield similar immediate rewards [18]. The detailed regularization technique is introduced in sub-section below. Practically, we choose \(f_{\theta}\) to be a bi-directional long-short term memory (LSTM) [32], since the reconstruction of IHRs can leverage information from both previous and subsequent steps as provided in the offline trajectories. ### Reconstruction of IHRs over Latent Representations (RILR) for OPEHF Now, we introduce the regularization technique for the reconstruction of IHRs, _i.e._, reconstructing IHRs over latent representations (RILR). Specifically, we leverage the representations captured by variational auto-encoders (VAEs) [35], learned over \(\rho^{\beta}\), to regularize the reconstructed IHRs, \(\hat{r}_{t}^{\mathcal{H}}\). 
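Before turning to the regularizer itself, a minimal PyTorch sketch of one possible instantiation of \(f_{\theta}\) and the return-consistency term of (1) is given below. The Gaussian log-likelihood is collapsed into a squared error, the regularization term \(\mathcal{L}_{regu}\) is passed in as a callable stub, and all layer sizes and names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class IHRReconstructor(nn.Module):
    """Bi-directional LSTM mapping a trajectory of (s_t, a_t) pairs to per-step IHR estimates."""
    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim + action_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, states, actions):                # (B, T, d_s), (B, T, d_a)
        h, _ = self.lstm(torch.cat([states, actions], dim=-1))
        return self.head(h).squeeze(-1)                # (B, T): estimated r_hat^H_t

def rilr_loss(r_hat, G_h, gamma=0.99, C=0.1, regu=None):
    """Squared-error surrogate of (1): the discounted sum of r_hat^H_t must match G^H."""
    T = r_hat.shape[1]
    discount = gamma ** torch.arange(T, dtype=r_hat.dtype, device=r_hat.device)
    consistency = ((r_hat * discount).sum(dim=1) - G_h).pow(2).mean()
    return consistency + (C * regu(r_hat) if regu is not None else 0.0)
```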
VAEs have been adapted toward learning a compact latent space over offline state-action visitations, facilitating both offline policy optimization [42; 81; 65; 27; 26; 28] and OPE [18]. In this work, we specifically consider building on the variational latent model (VLM) proposed in [18], since it was originally proposed to facilitate OPE, as opposed to others that mainly use knowledge captured in the latent space to improve sample efficiency for policy optimization. Moreover, the VLM has been shown to be effective for learning an expressive representation space, where the encoded state-action pairs are clustered well in the latent space, as measured by the difference over the returns of the policies from which the state-action pairs are sampled; see Figure 1 (mid), which uses \(t\)-SNE to visualize the encoded state-action pairs in trajectories collected from a visual Q&A environment (Appendix E). Note that the VLM originally does not account for HF signals (neither \(r_{t}^{\mathcal{H}}\)'s nor \(G_{0:T}^{\mathcal{H}}\)'s), so we introduce the variational latent model with human returns (VLM-H) below, building on the architecture introduced in [18]. VLM-H consists of a prior \(p(z)\) over the latent variables \(z\in\mathcal{Z}\subset\mathbb{R}^{L}\), with \(\mathcal{Z}\) representing the latent space and \(L\) the dimension, along with a variational encoder \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\), a decoder \(p_{\phi}(z_{t},s_{t},r_{t-1}|z_{t-1},a_{t-1})\) for generating per-step transitions (over both state-action and latent space), and a separate decoder \(p_{\phi}(G_{0:T}^{\mathcal{H}}|z_{T})\) for the reconstruction of the human returns at the end of each episode. Note that encoders and decoders are parameterized by \(\psi\) and \(\phi\), respectively. The overall architecture is illustrated in Figure 1 (left).

Figure 1: **(Left)** Architecture of the variational latent model with human returns (VLM-H). (**Mid**) Illustration of the clustering behavior in the latent space using \(t\)-SNE visualization [73], where the encoded state-action pairs (output by the encoder of VLM-H) are in general clustered together if they are generated by policies with similar human returns (shown in the legend at the top left). (**Right**) Diagram summarizing the pipeline of the OPEHF framework.

**Trajectory inference (encoding).** VLM-H's encoder approximates the intractable posterior \(p(z_{t}|z_{t-1},a_{t-1},s_{t})=\frac{p(z_{t-1},a_{t-1},z_{t},s_{t})}{\int_{z_{t}\in\mathcal{Z}}p(z_{t-1},a_{t-1},z_{t},s_{t})\,dz_{t}}\), thereby avoiding the integration over the unknown latent space _a priori_. The inference (or encoding) process can be decomposed as \(q_{\psi}(z_{0:T}|s_{0:T},a_{0:T-1})=q_{\psi}(z_{0}|s_{0})\prod_{t=1}^{T}q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\); here, \(q_{\psi}(z_{0}|s_{0})\) encodes initial states \(s_{0}\) into latent variables \(z_{0}\), and \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) captures all subsequent environmental transitions in the latent space over \(z_{t}\)'s. In general, both \(q_{\psi}\)'s are represented as diagonal Gaussian distributions4 with mean and variance determined by neural network \(\psi\), as in [18, 42, 27, 26, 28].

Footnote 4: This helps facilitate an orthogonal basis of the latent space, which would improve the expressiveness of the model.
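For concreteness, one simplified way to realize the per-step encoder \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) as a diagonal Gaussian with the reparameterization trick is sketched below; the actual VLM-H of [18] is recurrent and more involved, so this is only an assumed, stripped-down illustration.

```python
import torch
import torch.nn as nn

class StepEncoder(nn.Module):
    """Diagonal-Gaussian posterior q_psi(z_t | z_{t-1}, a_{t-1}, s_t)."""
    def __init__(self, z_dim, state_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + action_dim + state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * z_dim),          # outputs mean and log-variance
        )

    def forward(self, z_prev, a_prev, s_t):
        mu, logvar = self.net(torch.cat([z_prev, a_prev, s_t], dim=-1)).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
        return z_t, mu, logvar
```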
**Trajectory generation (decoding).** The generative (or decoding) process follows, _i.e._, \(p_{\phi}(z_{1:T},s_{0:T},r_{0:T-1},G_{0:T}^{\mathcal{H}}|z_{0},\pi)=p_{\phi}( G_{0:T}^{\mathcal{H}}|z_{T})\cdot\prod_{t=1}^{T}p_{\phi}(z_{t}|z_{t-1},a_{t-1})p_{ \phi}(r_{t-1}|z_{t})\cdot\prod_{t=0}^{T}p_{\phi}(s_{t}|z_{t})\); here, \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\) enforces the transition of latent variables \(z_{t}\) over time, \(p_{\phi}(s_{t}|z_{t})\) and \(p_{\phi}(r_{t-1}|z_{t})\) are used to sample the states and immediate _environmental_ rewards, while \(p_{\phi}(G_{0:T}^{\mathcal{H}}|z_{T})\) generates the _human return_ issued at the end of each episode. Note that here we still use the VLM-H to capture environmental rewards, allowing the VLM-H to formulate a latent space that captures as much information about the dynamics underlying the environment as possible. All \(p_{\phi}\)'s are represented as diagonal Gaussians5 with parameters determined by network \(\phi\). Footnote 5: If needed, one can project the states over to the orthogonal basis, to ensure that they follow a diagonal covariance. To train \(\phi\) and \(\psi\), one can maximize the evidence lower bound (ELBO) of the joint log-likelihood \(\log p_{\phi}(s_{0:T},r_{0:T-1},G_{0:T}^{\mathcal{H}}|\phi,\psi,\rho^{\beta})\), _i.e._, \[\max_{\psi,\phi} \mathbb{E}_{q_{\psi}}\Big{[}\log p_{\phi}(G_{0:T}^{\mathcal{H}}| z_{T})+\sum\nolimits_{t=0}^{T}\log p_{\phi}(s_{t}|z_{t})+\sum\nolimits_{t=1}^{T} \log p_{\phi}(r_{t-1}|z_{t})\] \[-KL\big{(}q_{\psi}(z_{0}|s_{0})||p(z_{0})\big{)}-\sum\nolimits_{t =1}^{T}KL\big{(}q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})||p_{\phi}(z_{t}|z_{t-1},a_{t-1})\big{)}\Big{]}; \tag{2}\] the first three terms are the log-likelihoods of reconstructing the human return, states, and environmental rewards, and the two terms that follow are Kullback-Leibler (KL) divergence [38] regularizing the inferred posterior \(q_{\psi}\). Derivation of the ELBO can be found in Appendix B. In practice, if \(\phi\) and \(\psi\) are chosen to be recurrent networks, one can also regularize the hidden states of \(\phi,\psi\) by including the additional regularization term introduced in [18]. **Regularizing the reconstruction of IHRs.** Existing works have shown that the latent space not only facilitates the generation of synthetic trajectories but demonstrated that the latent encodings of state-action pairs form clusters, over some measures in the latent space [73], if they are rolled out from policies that lead to similar returns [42, 18]. 
As a result, we regularize \(\hat{r}_{t}^{\mathcal{H}}\) following \[\min_{\theta}\mathcal{L}_{regu}(\hat{r}_{t}^{\mathcal{H}}|\theta,\psi,s_{0:t}^ {(i)},a_{0:t-1}^{(i)},G_{0:T}^{\mathcal{H}(i)})=\sum_{j\in\mathcal{J}}-\log p (\hat{r}_{t}^{\mathcal{H}}=(1-\gamma)G_{0:T}^{\mathcal{H}(j)}|\theta,\psi,s_{0: t^{\prime}}^{(j)},a_{0:t^{\prime}-1}^{(j)},G_{0:T}^{\mathcal{H}(j)}) \tag{3}\] for each step \(t\); here, \((s_{0:t}^{(i)},a_{0:t-1}^{(i)})\in\tau^{(i)}\sim\rho^{\beta}\), \(\mathcal{J}=\{j_{0},\ldots,j_{K-1}\}\) are the indices of offline trajectories that correspond to the latent encodings \(\{z_{t^{\prime}}^{(j_{k})}\sim q_{\psi}(\cdot|s_{0:t^{\prime}}^{(j_{k})},a_{0: t^{\prime}-1}^{(j_{k})})|j_{k}\in\mathcal{J},t^{\prime}\in[0,T-1]\}\) that are \(K\)-neighbours of the latent encoding \(z_{t}^{(i)}\) pertaining to \((s_{0:t}^{(i)},a_{0:t-1}^{(i)})\), defined over some similarity/distance function \(d(\cdot||\cdot)\), following, _i.e._, \[\min_{j_{k}\in\mathcal{J}}\sum_{k=0}^{K-1}d(z_{t}^{(i)}||z_{t^{\prime}}^{(j_{k} )}),\quad\text{s.t. }z_{t^{\prime}}^{(j_{k})}\text{'s corresponding }\big{(}s_{0:t^{\prime}}^{(j_{k})},a_{0:t^{\prime}-1}^{(j_{k})}\big{)}\in\tau^{( j_{k})}\sim\rho^{\beta}. \tag{4}\] In practice, we choose \(d(\cdot||\cdot)\) to follow stochastic neighbor embedding (SNE) similarities [73], as it has been shown effective for capturing Euclidean distances in high-dimensional space [75]. **Overall objective of RILR for OPEHF.** As a result, by following (1) and leveraging the \(\mathcal{L}_{regu}\) from (3) above, the objective for reconstructing the IHRs is set to be, _i.e._, \[\max_{\theta}\frac{1}{N}\sum_{i=0}^{N-1}\Big{[}\log p\Big{(}\sum_{t=0}^{T-1} \gamma^{t}\hat{r}_{t}^{\mathcal{H}}=G_{0:T}^{\mathcal{H}(i)}|\theta,\tau^{(i)},G_{0 :T}^{\mathcal{H}(i)}\Big{)}-C\cdot\sum_{t=0}^{T-1}\mathcal{L}_{regu}(\hat{r}_{t}^ {\mathcal{H}}|\theta,\psi,s_{0:t}^{(i)},a_{0:t-1}^{(i)},G_{0:T}^{\mathcal{H}(i)}) \Big{]}. \tag{5}\] **Move from RILR to OPEHF.** In what follows, one can leverage any existing OPE methods to take as inputs the offline trajectories, with the immediate environmental rewards \(r_{t}\)'s replaced by the reconstructed IHRs \(\hat{r}_{t}^{\mathcal{H}}\)'s, to achieve the OPEHF's objective (Problem 1). Moreover, our method does not require the IHRs to be correlated with the environmental rewards, as the VLM-H learns to reconstruct both by sampling from two independent distributions, \(p_{\phi}(r_{t-1}|z_{t})\) and \(p_{\phi}(G_{0:T}^{\mathcal{H}}|z_{T})\) respectively, following (2); this is also illustrated empirically over the experiments introduced below (Sections 3), where exceedingly low correlations are found in specific scenarios. The overall pipeline summarizing our method is shown in Figure 1 (right). ## 3 Real-World Experiments with Human Participants In this section, we validate the OPEHF framework introduced above over two real-world experiments, adaptive neurostimulation, and intelligent tutoring. Specifically, we consider four types of OPE methods to be used as the downstream estimator following the RILR step (Section 2.3), including per-decision importance sampling (IS) with behavioral policy estimation [30], doubly robust (DR) [71], distribution correction estimation (DICE) [78] and fitted Q-evaluation (FQE) [40]. A brief overview of these methods can be found in Appendix C, and the specific implementations we use are documented in Appendix D. 
In Appendix E, we have also tested our method within a visual Q&A environment [10; 66], which follows mechanisms similar to those in the two real-world experiments, _i.e._, two types of return signals are considered though no human participants are involved.

**Baselines and Ablations.** The baselines include two variants for each of the OPE methods above, _i.e._, (_i_) the _rescale_ approach discussed in Section 2.2, and (_ii_) another variant that sets all the IHRs to be equal to the environmental rewards at the corresponding steps, \(r_{t}^{\mathcal{H}}=r_{t}\ \forall t\in[0,T-2]\), and then lets \(r_{T-1}^{\mathcal{H}}=r_{T-1}+(G_{0:T}^{\mathcal{H}}-G_{0:T})/\gamma^{T-1}\) with \(G_{0:T}=\sum_{t}\gamma^{t}r_{t}\) being the environmental return, which is referred to as _fusion_ below; this baseline may perform better when strong correlations exist between environmental and human rewards, as it intrinsically decomposes the human returns into IHRs. Consequently, in each experiment below, we compare the performance of the OPEHF framework extending all four types of OPE methods above, <IS/DR/DICE/FQE>-OPEHF, against the corresponding baselines, <IS/DR/DICE/FQE>-<Fusion/Rescale>. We also include the VLM-H as an ablation baseline, used as a standalone model-based approach; this is achieved by sampling the estimated returns from the decoder, \(\hat{G}_{0:T}^{\mathcal{H}}\sim p_{\phi}(G_{0:T}^{\mathcal{H}}|z_{T})\).

**Metrics.** Following a recent OPE benchmark [15], three metrics are considered to validate the performance of each method, including mean absolute error (MAE), rank correlation, and regret@1. Mathematical definitions can be found in Appendix D. Also, following [15], each method is evaluated over 3 random seeds, and the mean performance (with standard errors) is reported.

### Adaptive Neurostimulation: Deep Brain Stimulation

Adaptive neurostimulation facilitates treatments for a variety of neurological disorders [4; 11; 13; 55]. Deep brain stimulation (DBS) is a type of neurostimulation used specifically toward Parkinson's disease (PD), where an internal pulse generator (IPG), implanted under the collarbone, sends electrical stimuli to the basal ganglia (BG) area of the brain through invasive electrodes; Figure 2 illustrates the setup.

Figure 2: Setup of the neurostimulation experiments, as well as the formulation of offline trajectories. Environmental rewards and human returns are captured in streams 1 and 2-3, respectively.

Adaptive DBS aims to adjust the strength (amplitude) of the stimulus in real time to respond to irregular neuronal activities caused by PD, leveraging the local field potentials (LFPs) as the immediate feedback signals, _i.e._, the environmental rewards. Existing works have leveraged RL for adaptive DBS over _computational_ BG models [25, 20, 52, 59], using rewards defined over a physiological signal, the beta-band power spectral density of the LFPs (_i.e._, the beta power), since physiologically PD can lead to increased beta power due to the irregular neuronal activations it causes [39]. However, in clinical practice, the correlation between beta power and the level of satisfaction reported by the patients varies depending on the specific characteristics of each person, as PD can cause different types of symptoms over a wide range of severity [56, 37, 5, 76]. Such findings further justify the significance of evaluating HF/human returns in the real world using OPEHF.
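Since the beta power serves as the key physiological quantity behind the environmental feedback here, a small sketch of one common way to compute it from an LFP segment is given below; the sampling rate, window length, and the 13-30 Hz band edges are assumptions chosen for illustration, not the exact settings used in the clinical experiments.

```python
import numpy as np
from scipy.signal import welch

def beta_band_power(lfp, fs=250.0, band=(13.0, 30.0)):
    """Average beta-band power of an LFP segment via Welch's PSD estimate."""
    freqs, psd = welch(lfp, fs=fs, nperseg=min(len(lfp), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])   # integrate the PSD over the beta band
```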
In this experiment, we leverage OPEHF to estimate the feedback provided by 4 _PD patients_ who participate in monthly clinical testing of RL policies trained to adapt the amplitudes of the stimulus toward reducing their PD symptoms, _i.e._, bradykinesia and tremor. A mixture of behavioral policies is used to collect the offline trajectories \(\rho^{\beta}\). Specifically, in every step, the state \(s_{t}\) is a historical sequence of LFPs capturing neuronal activities, and the action \(a_{t}\) updates the amplitude of the stimulus to be sent6. Then, an _environmental_ reward \(r_{t}=R(s_{t},a_{t})\) gives a penalty if the beta power computed from the latest LFPs is greater than some threshold (to promote treatment efficacy), as well as a penalty proportional to the amplitude of the stimulus being sent (to improve the battery life of the IPG). At the end of each episode, the _human returns_ \(G_{0:T}^{\mathcal{H}}\) are determined from three sources (weighted by 50%, 25%, 25%, respectively), _i.e._, (_i_) a satisfaction rating (between 1 and 10) provided by the patient, (_ii_) hand grasp speed as a result of the bradykinesia test [63], and (_iii_) level of tremor calculated over the data from a wearable accelerometer [60, 6]. Each session lasts more than 10 minutes, and each discrete step above corresponds to 2 seconds in the real world; thus, the horizon \(T\geq 300\) (more details are provided in Appendix D). Institutional Review Board (IRB) approval is obtained from Duke University Health System, as well as approval for the exceptional use of the DBS system from the US Food and Drug Administration (FDA).

Footnote 6: RL policies only adapt the stimulation amplitudes within a safe range as determined by neurologists/neurosurgeons, making sure they will not lead to negative effects on participants.

For each patient, OPEHF and the baselines are used to estimate the human returns of 6 target policies with varied performance. The ground-truth human return for each target policy is obtained as a result of extensive clinical testing following the same schema above, over more than 100 minutes. Table 1 shows the Pearson's and Spearman's correlation coefficients [14], measuring the linear and rank correlations between the environmental returns \(G_{0:T}\) and the human returns \(G_{0:T}^{\mathcal{H}}\) over all the target DBS policies considered for each patient.

\begin{table} \begin{tabular}{l l l l l} \hline Patient \# & \(0\) & \(1\) & \(2\) & \(3\) \\ \hline Pearson's & -0.396 & -0.477 & -0.599 & -0.275 \\ Spearman's & -0.2 & -0.6 & 0.086 & 0.086 \\ \hline \end{tabular} \end{table} Table 1: Correlations between the _environmental_ and _human_ returns of the 6 target DBS policies associated with each PD patient.

Figure 3: Results from the adaptive neurostimulation experiment, _i.e._, deep brain stimulation (DBS). Each method is evaluated over the data collected from each patient, toward the corresponding target policies, respectively. The performance shown above is averaged over all 4 human participants affected by Parkinson's disease (PD). Raw performance over each patient can be found in Appendix D.

Pearson's coefficients are all negative since the environmental reward function only issues penalties, while human returns are all captured by positive values. It can be observed that only weak-to-moderate degrees of linear correlation exist for all four patients, while ranks between \(G_{0:T}\)'s and \(G_{0:T}^{\mathcal{H}}\)'s are not preserved across patients; thus, it highlights
the need for leveraging OPEHF to estimate human returns, which is different than the classic OPE that focus on estimating environmental returns. The overall performance averaged across the 4-patient cohort, is reported in Fig. 3. Raw performance over every single patient can be found in Appendix D. It can be observed that our OPEHF framework significantly improves MAEs and ranks compared to the two baselines, for all 4 types of downstream OPE methods we considered (IS, DR, DICE, and FQE). Moreover, our method also significantly outperforms the ablation VLM-H in terms of these two metrics, as the VLM-H's performance is mainly determined by how well it could capture the underlying dynamics and returns. In contrast, our OPEHF framework not only leverages the latent representations learnt by the VLM-H (for regularizing RILR), it also inherits the advantages intrinsically associated with the downstream estimators; _e.g._, low-bias nature of IS, or low-variance provided by DR. Moreover, the fusion baseline in general performs worse than the rescale baseline as expected, since no strong correlations between environmental and human returns are found, as reported in Table 1. Note that the majority of the methods lead to similar (relatively low) regrets, as there exist a few policies that lead to human returns that are close over some patients (see the raw statistics in Appendix D). The reason is that all the policies to be extensively tested in clinics are subject to initial screening, where clinicians ensure they would not lead to undesired outcomes or pose significant risks to the patients; thus, the performance of some target policies tends to be close. Nonetheless, low MAEs and high ranks achieved by our method show that it can effectively capture the subtle differences in returns resulting from other HF signals, _i.e._, levels of bradykinesia and tremor. Moreover, Figure 4 visualizes the VLM-H encodings over the trajectories collected from the 6 target DBS policies for each participant and shows that encoded pairs associated with the policies that lead to similar returns are in general clustered together, which justifies the importance of leveraging the similarities over latent representations to regularize the reconstruction of IHRs as in the RILR objective (5). 
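For reference, the three metrics used to compare the methods in this section can be computed as below over a set of target policies; this follows the usual definitions (with regret@1 shown in an unnormalized form), while the exact expressions are deferred to Appendix D.

```python
import numpy as np
from scipy.stats import spearmanr

def ope_metrics(true_returns, est_returns):
    """MAE, Spearman rank correlation, and regret@1 over a set of target policies."""
    true_returns = np.asarray(true_returns, dtype=float)
    est_returns = np.asarray(est_returns, dtype=float)
    mae = float(np.mean(np.abs(true_returns - est_returns)))
    rank_corr, _ = spearmanr(true_returns, est_returns)
    # regret@1: true-return gap between the actually best policy and the policy
    # ranked first by the OPE estimates
    top_by_estimate = int(np.argmax(est_returns))
    regret_at_1 = float(np.max(true_returns) - true_returns[top_by_estimate])
    return {"mae": mae, "rank_corr": float(rank_corr), "regret@1": regret_at_1}
```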
### Intelligent Tutoring Intelligent tutoring refers to a system where students can actively interact with an autonomous tutoring agent that can customize the learning content, tests, etc., to improve engagement and learning \begin{table} \begin{tabular}{l|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{_IS_} & \multicolumn{3}{c|}{_DR_} & \multicolumn{1}{c}{_Ablation_} \\ & Fusion & Rescale & \begin{tabular}{c} **OPEHF** \\ (our) \\ \end{tabular} & Fusion & Rescale & \begin{tabular}{c} **OPEHF** \\ (our) \\ \end{tabular} & VLM-H \\ \hline MAE & 0.7\(\pm\)0.14 & 0.77\(\pm\)0.08 & **0.57\(\pm\)0.09** & 1.03\(\pm\)0.07 & 1.03\(\pm\)0.25 & **0.86\(\pm\)0.04** & 1.00\(\pm\)0.01 \\ Rank & 0.47\(\pm\)0.11 & 0.4\(\pm\)0.09 & **0.8\(\pm\)0.09** & 0.33\(\pm\)0.05 & 0.4\(\pm\)0.0 & **0.53\(\pm\)0.2** & 0.41\(\pm\)0.25 \\ Regret@1 & **0.36\(\pm\)0.16** & **0.36\(\pm\)0.16** & **0.41\(\pm\)0.04** & **0.41\(\pm\)0.0** & **0.41\(\pm\)0.0** & **0.41\(\pm\)0.0** & 0.28\(\pm\)0.19 \\ \hline & \multicolumn{3}{c|}{_DICE_} & \multicolumn{3}{c|}{_FQE_} \\ & Fusion & Rescale & \begin{tabular}{c} **OPEHF** \\ (our) \\ \end{tabular} & Fusion & Rescale & \begin{tabular}{c} **OPEHF** \\ (our) \\ \end{tabular} \\ \hline MAE & 3.19\(\pm\)0.57 & 2.33\(\pm\)0.59 & **1.01\(\pm\)0.01** & 0.74\(\pm\)0.07 & 0.98\(\pm\)0.1 & **0.59\(\pm\)0.1** \\ Rank & 0.47\(\pm\)0.2 & 0.33\(\pm\)0.2 & **0.53\(\pm\)0.22** & 0.27\(\pm\)0.14 & 0.4\(\pm\)0.0 & **0.47\(\pm\)0.05** \\ Regret@1 & 0.55\(\pm\)0.06 & 0.45\(\pm\)0.18 & **0.37\(\pm\)0.15** & **0.36\(\pm\)0.16** & **0.41\(\pm\)0.0** & **0.41\(\pm\)0.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Results from the intelligent tutoring experiment, _i.e._, performance achieved by our OPEHF framework compared to the ablation and baselines over all four types of downstream OPE estimators. Figure 4: \(t\)-SNE visualizing the VLM-H encodings of the state-action pairs rolled out over DBS policies with different human returns (shown in the legend). It can be observed that distances among the encoded pairs associated with the policies that lead to similar returns are in general smaller, justifying the RILR objective (5). outcomes [2; 64; 45]. OPEHF is important in such a setup for directly estimating the potential outcomes that could be obtained by students, as opposed to environmental rewards that are mostly discrete; see detailed setup below. Existing works have explored this topic over classic OPE setting in _simulations_[49; 54]. The system is deployed in an undergraduate-level introduction to probability and statistics course over 5 academic years at North Carolina State University, where the interaction logs obtained from 1,288 students who voluntarily opted-in for this experiment are recorded.7 Specifically, each episode refers to a student working on a set of 12 problems (_i.e._, horizon \(T=12\)), where the agent suggests the student approach each problem through _independent_ work, working with the _hints_ provided, or directly providing the _full solution_ (for studying purposes) - these options constitute the action space of the agent. The states are characterized by 140 features extracted from the logs, designed by domain experts; they include, for example, the time spent on each problem, and the correctness of the solution provided. In each step, an immediate _environmental_ reward of +1 is issued if the answer submitted by students, for the current problem, is at least 80% correct (auto-graded following pre-defined rubrics). 
A reward of 0 is issued if the grade is less than 80% or the agent chooses the action that directly displays the full solution. Moreover, students are instructed to complete two exams, one before working on any problems and another after finishing all the problems. The normalized difference between the grades of two exams constitutes the _human_ return for each episode. More details are provided in Appendix D. Footnote 7: An IRB approval is obtained from North Carolina State University. The use/test of the intelligent tutoring system is overseen by a departmental committee, ensuring it does not risk the academic performance and privacy of the participants. The intelligent tutoring agent follows different policies across academic years, where the data collected from the first 4 years (1148 students total) constitutes the offline trajectories \(\rho^{\beta}\) (as a result of a mixture of behavioral policies). The 4 policies deployed in the 5th year (140 students total) serve as the target policies, whose ground-truth performance is determined by averaging over the human returns of the episodes that are associated with each policy respectively. Table 3 documents the Pearson's and Spearman's correlation coefficients between the environmental and human returns from data collected over each academic year, showing weak linear and rank correlations across all 5 years. Such low correlations are due to the fact that the environmental rewards are discrete and do not distinguish among the agent's choices, _i.e._, a +1 reward can be obtained either if the student works out a solution independently or by following hints, and a 0 reward is issued every time the agent chooses to display the solution even if the student could have solved the problem. As a result, such a setup makes OPEHF to be more challenging; because human returns are only available at the end of each episode, and the immediate environmental rewards do not carry substantial information toward extrapolating IHRs. Table 2 documents the performance of OPEHF and the baselines toward estimating the human returns of the target policies. It can be observed that our OPEHF framework achieves state-of-the-art performance, over all types of downstream OPE estimators considered. This result echos the design of the VLM-H where both environmental information (state transitions and rewards) and human returns are encoded into the latent space, which helps formulate a compact and expressive latent space for regularizing the downstream RILR objective (5). Moreover, it is important to use the latent information to guide the reconstruction of IHRs (as regularizations in RILR), as opposed to using the VLM-H to predict human returns standalone; since limited convergence guarantees/error bounds can be provided for VAE-based latent models, which is illustrated in both Figure 3 and Table 2 where OPEHF largely outperforms the VLM-H ablation over MAE and rank. ## 4 Related Works **OPE.** Majority of existing model-free OPE methods can be categorized into one of the four types, _i.e._, IS, DR, DICE, and FQE. Recently, variants of IS and DR methods have been proposed for variance or bias reduction [34; 71; 12; 69], as well as adaptations toward unknown behavioral policies [30]. 
\begin{table} \begin{tabular}{l c c c c c} \hline Year \# & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) \\ \hline Pearson’s & 0.033 & 0.176 & 0.089 & 0.154 & 0.183 \\ Spearman’s & 0.082 & 0.156 & 0.130 & 0.161 & 0.103 \\ \hline \end{tabular} \end{table} Table 3: Correlations between the _environmental_ and _human_ returns from data collected over each academic year. DICE methods are intrinsically designed to work with offline trajectories rolled out from a mixture of behavioral policies, and existing works have introduced the DICE variants toward specific environmental setups [84; 83; 77; 78; 51; 9]. FQE extrapolates policy returns from the approximated Q-values [31; 40; 36]. There also exist model-based OPE methods [82; 18] that first captures the dynamics underlying the environment, and estimate policy performance by rolling out trajectories under the target policies. A more detailed review of existing OPE methods can be found in Appendix C. Note that these OPE methods have been designed for estimating the _environmental_ returns. In contrast, the objective for OPEHF is to estimate the _human_ returns which may not be strongly correlated with the environmental returns, as they are usually determined under different schemas. **VAEs for OPE and offline RL.** There exists a long line of research developing latent models to capture the dynamics underlying environments in the context of offline RL as well as OPE. Specifically, PlaNet [27] uses recurrent neural networks to capture the transitions of latent variables over time. Latent representations learned by such VAE architectures have been used to augment the state space in offline policy optimization to improve sample efficiency, _e.g._, in Dreamers [26; 28], SOLAR [81] and SLAC [42]. On the other hand, LatCo [65] attempts to improve sample efficiency by searching in the latent space which allows by-passing physical constraints. Also, MOPO [80], COMBO [79], and LOMPO [62] train latent models to quantify the confidence of the environmental transitions as learned from offline data and prevent the policies from following transitions over uncertain regions during policy training. Given that such models are mostly designed for improving sample efficiency in policy optimization/training, we choose to leverage the architecture from [18] for RILR as it is the first work that adapts latent models to the OPE setup. **Reinforcement learning from human feedback (RLHF).** Recently, the concept of RLHF has been widely used in guiding RL policy optimization with the HF signals deemed more informative than the environmental rewards [85; 47; 8]. Specifically, they leverage the _ranked preference_ provided by labelers to train a reward model, captured by feed-forward neural networks, that is fused with the environmental rewards to guide policy optimization. However, in this work, we focus on estimating the HF signals that serve as _direct evaluation_ of the RL policies used in human-involved experiments, such as the level of satisfaction (_e.g._, on a scale 1-10) and the treatment outcome. The reason is that in many scenarios the participants cannot revisit the same procedure multiple times, _e.g._, patients may not undergo the same surgeries several times and rank the experiences. More importantly, OPEHF's setup is critical when online testing of RL policies may be even prohibited, without sufficient justifications over safety and efficacy upfront, as illustrated by the experiments above. 
**Reward shaping.** Although reward shaping methods [3; 58; 29] pursue similar ideas of decomposing the delayed and/or sparse rewards (_e.g._, the human return) into immediate rewards, they fundamentally rely on transforming the MDP to such that the value functions can be smoothly captured and high-return state-action pairs can be quickly identified and frequently re-visited. For example, RUDDER [3] leverages the transformed MDP that has expected future rewards equal to zero. Though the optimization objective is consistent between pre- and post-transformed MDPs, this approach likely would not converge to an optimal policy in practice. On the other hand, the performance (_i.e._, returns) of sub-optimal policies is not preserved across the two MDPs. This significantly limits its use cases toward OPE which requires the returns resulted by sub-optimal policies to be estimated accurately. As a result, such methods are not directly applicable to the OPEHF problem we consider. ## 5 Conclusion and Future Works Existing OPE methods fall short in estimating HF signals, as HF can be dependent upon various confounders. Thus, in this work, we introduced the OPEHF framework that revived existing OPE methods for estimating human returns, through RILR. The framework was validated over two real-world experiments and one simulation environment, outperforming the baselines in all setups. Although in the future it could be possible to extend OPEHF to facilitate estimating the HF signals needed for updating the policies similar to RLHF, we focused on policy evaluation which helped to isolate the source of improvements; as policy optimization's performance may depend on multiple factors, such as the exploration techniques used as well as the objective/optimizer chosen for updating the policy. Moreover, this work mainly focuses on the scenarios where the human returns are directly provided by the participants. So under the condition where the HF signals are provided by 3-rd parties (_e.g,_ clinicians), non-trivial adaptations over this work may be needed to consider special cases such as conflicting HF signals provided by different sources. Acknowledgements This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, by the NIH UH3 NS103468 award, and by the NSF CNS-1652544, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562. Investigational Summit RC+S systems and technical support provided by Medtronic PLC. Apple Watches were provided by Rune Labs. We thank Stephen L. Schmidt and Jennifer J. Peters from Duke University Department of Biomedical Engineering, as well as Katherine Gentry from Duke University Department of Neurosurgery, for the efforts overseeing DBS experiments in the clinic.
2303.09036
Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D Imitation
Generating images with both photorealism and multiview 3D consistency is crucial for 3D-aware GANs, yet existing methods struggle to achieve them simultaneously. Improving the photorealism via CNN-based 2D super-resolution can break the strict 3D consistency, while keeping the 3D consistency by learning high-resolution 3D representations for direct rendering often compromises image quality. In this paper, we propose a novel learning strategy, namely 3D-to-2D imitation, which enables a 3D-aware GAN to generate high-quality images while maintaining their strict 3D consistency, by letting the images synthesized by the generator's 3D rendering branch mimic those generated by its 2D super-resolution branch. We also introduce 3D-aware convolutions into the generator for better 3D representation learning, which further improves the image generation quality. With the above strategies, our method reaches FID scores of 5.4 and 4.3 on FFHQ and AFHQ-v2 Cats, respectively, at 512x512 resolution, largely outperforming existing 3D-aware GANs using direct 3D rendering and coming very close to the previous state-of-the-art method that leverages 2D super-resolution. Project website: https://seanchenxy.github.io/Mimic3DWeb.
Xingyu Chen, Yu Deng, Baoyuan Wang
2023-03-16T02:18:41Z
http://arxiv.org/abs/2303.09036v2
# Mimic3D: Thriving 3D-Aware GANs via 3D-to-2D Imitation ###### Abstract Generating images with both photorealism and multi-view 3D consistency is crucial for 3D-aware GANs, yet existing methods struggle to achieve them simultaneously. Improving the photorealism via CNN-based 2D super-resolution can break the strict 3D consistency, while keeping the 3D consistency by learning high-resolution 3D representations for direct rendering often compromises image quality. In this paper, we propose a novel learning strategy, namely 3D-to-2D imitation, which enables a 3D-aware GAN to generate high-quality images while maintaining their strict 3D consistency, by letting the images synthesized by the generator's 3D rendering branch mimic those generated by its 2D super-resolution branch. We also introduce 3D-aware convolutions into the generator for better 3D representation learning, which further improves the image generation quality. With the above strategies, our method reaches FID scores of 5.4 and 4.3 on FFHQ and AFHQ-v2 Cats, respectively, at 512\(\times\)512 resolution, largely outperforming existing 3D-aware GANs using direct 3D rendering and coming very close to the previous state-of-the-art method that leverages 2D super-resolution. Project website: [https://seanchenxy.github.io/Mimic3DWeb](https://seanchenxy.github.io/Mimic3DWeb). ## 1 Introduction 3D-aware GANs [37, 9, 6, 3] have experienced rapid development in recent years and shown great potential for large-scale realistic 3D content creation. The core of 3D-aware GANs is to incorporate 3D representation learning and differentiable rendering into image-level adversarial learning [8]. In this way, the generated 3D representations are forced to mimic the real image distribution from arbitrary viewing angles, resulting in their faithful reconstruction of the underlying 3D structures of the subjects for free-view image synthesis. Among different 3D representations, the neural radiance field (NeRF) [24] has been proven to be effective in the 3D-aware GAN scenario [37, 4], as it guarantees strong 3D consistency when synthesizing multiview images via volume rendering [15]. However, NeRF's volumetric representation also brings high computation costs to GAN training. This hinders the generative models from synthesizing high-resolution images with fine details. Several attempts have been made to facilitate NeRF-based GAN training at high resolution, via sparse representations [38, 6, 47, 54] or patch-wise adversarial learning [43], yet the performance is still unsatisfactory and lags far behind state-of-the-art 2D GANs [19, 17]. Along another line, instead of using direct NeRF rendering, plenty of works [26, 9, 28, 3, 50] introduce a 2D super-resolution module to deal with 3D-aware GAN training at high resolution. A typical procedure is to first render a NeRF-like feature field into low-resolution feature maps, then apply a 2D CNN to generate high-resolution images from them. The representative work along this line, namely EG3D [3], utilizes a tri-plane representation to effectively model the low-resolution feature field and leverages a StyleGAN2-like [19] super-resolution block to achieve image synthesis at high quality. It sets a record for image quality among 3D-aware GANs and gets very close to that of state-of-the-art 2D GANs. Figure 1: Comparison between different 3D-aware GANs on image generation quality and multiview 3D consistency. The image generation quality is evaluated via FID between generated and real images. The 3D consistency is measured by conducting 3D reconstruction [45] on generated multiview images and calculating PSNR between them and the re-rendered reconstruction results. Our method inherits the high image quality of approaches leveraging 2D super-resolution meanwhile maintains strict 3D consistency taking advantage of direct 3D rendering. However, a fatal drawback of this line of works is a sacrifice of strict 3D consistency, due to leveraging a black-box 2D CNN for image synthesis. A question naturally arises -- _Is there any way to combine the above two lines to achieve strict 3D consistency and high-quality image generation simultaneously?_ The answer, as we will show in this paper, is arguably yes. The key intuition is to let the images synthesized by direct NeRF rendering mimic those generated by a 2D super-resolution module, which we name _3D-to-2D imitation_. Specifically, we start from an EG3D backbone that adopts 2D super-resolution to generate high-resolution images from a low-resolution feature field. Based on this architecture, we add another 3D super-resolution module to generate a high-resolution NeRF from the low-resolution feature field and force the images rendered by the former to imitate those generated by the 2D super-resolution branch. This process can be seen as a multiview reconstruction process -- images sharing the same latent code from different views produced by the 2D branch are pseudo multiview data, and the high-resolution NeRF branch represents the 3D scene to be reconstructed. Previous methods [29, 53, 33] have shown that this procedure can obtain reasonable 3D reconstruction, even if the multiview data are not strictly 3D consistent. We believe this is partially due to the inductive bias (_e.g_., continuity and sparsity) of the underlying 3D representation. With the above process, the high-resolution NeRF learns to depict fine details of the 2D-branch images, thus enabling high-quality image rendering. The 3D consistency across different views can also be preserved thanks to the intrinsic property of NeRF. Note that if the rendered images try to faithfully reconstruct every detail of the 2D-branch images across different views, it is likely to obtain blurry results due to detail-level 3D inconsistency of the latter. To avoid this problem, we only let the images produced by the two branches be perceptually similar (_i.e_. via the LPIPS loss [51]), and further enforce an adversarial loss between the rendered images from the high-resolution NeRF and real images to maintain high-frequency details. In addition, we only render small image patches to conduct the imitation learning to reduce memory costs. Apart from the above learning strategy, we introduce 3D-aware convolutions to the EG3D backbone to improve tri-plane learning, motivated by a recent 3D diffusion model [46]. The original EG3D generates tri-plane features to model the low-resolution feature field via a StyleGAN2-like generator. The generator is forced to learn 2D-unaligned features on the three orthogonal planes via 2D convolutions, which is inefficient. The 3D-aware convolution considers associated features in 3D space when performing 2D convolution, which improves feature communications and helps to produce more reasonable tri-planes. Nevertheless, directly applying 3D-aware convolution in all layers in the generator is unaffordable.
As a result, we only apply them after the output layers at each resolution in the tri-plane generator. This helps us to further improve the image generation quality with only a minor increase in the total memory consumption. With the above strategies, our generator is able to synthesize 3D-consistent images of virtual subjects with high image quality (Fig. 2). It reaches FID scores [13] of \(5.4\) and \(4.3\) on FFHQ [18] and AFHQ-v2 Cats [5], respectively, at \(512\times 512\) resolution, largely outperforming previous 3D-aware GANs with direct 3D rendering and even surpassing many leveraging 2D super-resolution (Fig. 1). A by-product of our method is a more powerful 2D-branch generator, which reaches an FID of \(4.1\) on FFHQ, exceeding the previous state-of-the-art EG3D. Though our method presented in this paper is mostly based on the EG3D backbone, its 3D-to-2D imitation strategy can be extended to learning other 3D-aware GANs as well. We believe this would largely close the quality gap between 3D-aware GANs and traditional 2D GANs, and pave a new way for realistic 3D generation. ## 2 Related Works 3D-aware GAN.3D-aware GANs [12, 25, 37, 4, 26, 9, 52, 6, 3, 43, 54] aim to generate multiview images of an object category, given only in-the-wild 2D images as training data. The key is to represent the generated scenes via a 3D representation and leverage corresponding rendering techniques to synthesize images at different viewpoints for image-level adversarial learning [8]. Figure 2: Our method enables high-quality image generation at \(512\times 512\) resolution without using a 2D super-resolution module. Initially, explicit representations such as voxels [25, 12] and meshes [44] were used to describe scenes. With the development of neural implicit fields [32, 23, 42, 41, 24, 45, 27], implicit scene representations, especially NeRF [24], gradually overtake explicit ones in 3D-aware GANs [4, 28, 3]. Nevertheless, one great hurdle of NeRF-based GANs is the high computation cost, which restricts earlier works [37, 4, 7, 48, 31] from synthesizing high-quality images. Consequently, a large number of follow-up works [26, 9, 55, 50, 28, 3, 49] avoid rendering NeRF at high resolution by conducting 2D super-resolution from a low-resolution image or feature map rendered by NeRF-like fields. This is only a stopgap, as the black-box 2D super-resolution module sacrifices the important 3D consistency brought by NeRF. To keep the strict 3D consistency, several works [6, 47, 38, 43, 54] turn to sparser 3D representations such as sparse voxels [38], radiance manifolds [6], and multi-plane images [54] to allow direct rendering at high resolution. Carefully designed training strategies such as two-stage training [47] or patch-wise optimization [43] are also introduced to facilitate the learning process. However, their image generation quality still lags behind that of methods with 2D super-resolution. Our method combines the advantages of both lines of works to achieve high-quality image generation and strict 3D consistency at once, by leveraging the proposed 3D-to-2D imitation. 3D generation by 3D-to-2D imitation.Recent studies [14, 39, 10] reveal that 2D generative models [2, 18] have the ability to generate pseudo multiview images of a subject. Based on this observation, several methods [29, 53, 40, 30] propose to distill the knowledge from a pre-trained 2D generative model for 3D generation by performing 3D reconstruction on the generated "multiview" images.
A standard procedure is to render the 3D representation of an object from multiple views, and compare them with the closest samples falling in the latent space of the pre-trained 2D generator for iterative optimization. The 2D generator ensures that the rendered results are photorealistic from different views, meanwhile the intrinsic property of the 3D representation guarantees reasonable 3D structure, thus leading to high-quality 3D generation. Some recent methods [33, 20] also combine this idea with text-to-image diffusion models [35, 36] to achieve text-driven 3D creation. Our method shares a similar spirit, which distills the knowledge from the generator's 2D super-resolution branch to its 3D rendering branch, thus achieving image generation with both photorealism and strict 3D consistency. ## 3 Approach Given a collection of 2D images, we aim to learn a 3D-aware generator \(G\) for free-view image synthesis. The generator takes a random code \(\mathbf{z}\in\mathbb{R}^{d_{z}}\) and an explicit camera pose \(\mathbf{\theta}\in\mathbb{R}^{d_{\mathbf{\theta}}}\) as input, and generates a 2D image \(I\): \[G:(\mathbf{z},\mathbf{\theta})\in\mathbb{R}^{d_{z}}\times\mathbb{R}^{d_{\mathbf{\theta}}}\to I\in\mathbb{R}^{H\times W\times 3}. \tag{1}\] To enable high-quality image synthesis, we adopt EG3D [3] as the backbone of the generator, which synthesizes low-resolution feature fields via the tri-plane representation [3], and leverages 2D super-resolution for high-resolution image generation (Sec. 3.1). Based on EG3D, we propose a 3D-to-2D imitation strategy to synthesize a high-resolution NeRF for 3D-consistent image rendering. We leverage a 3D super-resolution branch to predict high-resolution tri-planes from the low-resolution ones, and force the rendered images from the former to mimic the images generated by the 2D super-resolution branch (Sec. 3.2). In addition, we introduce 3D-aware convolutions [46] to the generator for better tri-plane learning via cross-plane communications, which helps to further improve the image generation quality (Sec. 3.3). The overview of our method is illustrated in Fig. 3. We describe each part in detail below. ### Preliminaries: EG3D EG3D adopts a StyleGAN2-based [19] generator \(\mathcal{E}\) to efficiently synthesize the low-resolution feature field of a subject. The feature field is represented by the tri-plane representation, which consists of three orthogonal 2D planes produced by reshaping the output feature map of \(\mathcal{E}\), given the latent code \(\mathbf{z}\) as input. For a point \(\mathbf{x}\in\mathbb{R}^{3}\) in the 3D space, its corresponding feature \(\mathbf{f}\) can be obtained by projecting it onto the three planes \(\mathbf{P}_{xy},\mathbf{P}_{yz},\mathbf{P}_{zx}\), and summing the retrieved features \(\mathbf{f}_{xy},\mathbf{f}_{yz},\mathbf{f}_{zx}\). A small MLP \(\mathcal{M}\) then maps this intermediate feature to volume density \(\sigma\in\mathbb{R}\) and color feature \(\mathbf{c}\in\mathbb{R}^{d_{c}}\) (the first three dimensions represent \(RGB\) color), forming the low-resolution feature field: \[\mathcal{M}:\mathbf{f}\in\mathbb{R}^{d_{f}}\rightarrow(\mathbf{c},\sigma)\in\mathbb{R}^{d_{c}}\times\mathbb{R}.
\tag{2}\] To generate high-resolution images, EG3D enforces volume rendering [15, 24] to render the above feature field to a low-resolution feature map \(C\), where each pixel value \(C(\mathbf{r})\) corresponding to a viewing ray \(\mathbf{r}\) can be obtained via \[C(\mathbf{r})=\sum_{i=1}^{N}T_{i}(1-\exp(-\sigma_{i}\delta_{i}))\mathbf{c}_{i},\quad T_{i}=\exp\Big(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j}\Big). \tag{3}\] Here, \(i\) is the index of points along ray \(\mathbf{r}\) sorted from near to far, and \(\delta\) is the distance between adjacent points. Then, the rendered feature map \(C\) is sent to a 2D super-resolution module \(\mathcal{S}^{2D}\) consisting of several StyleGAN2-modulated convolutional layers to generate the final image \(I^{2D}\). Although EG3D can generate free-view images of high quality, it cannot well maintain their 3D consistency across different views. This is inevitable due to incorporating the black-box CNN-based 2D super-resolution module, which breaks the physical rules of the volume rendering process. Although EG3D further proposes a dual-discrimination [3] strategy to force the high-resolution images to be consistent with their low-resolution counterparts, detail-level 3D inconsistency (texture flickering) still cannot be eliminated. During continuous camera variation, these artifacts can be easily captured by human eyes, distinguishing the synthesized results from a real video sequence. To maintain the 3D consistency while keeping the high-quality image generation to the maximum extent, we propose the 3D-to-2D imitation strategy described below. ### 3D-to-2D Imitation To keep the strict 3D consistency, a better way is to directly render the 3D representation instead of resorting to a 2D CNN for image synthesis. Noticing that the images generated by EG3D contain rich details, it is natural to use them as guidance for images synthesized by direct 3D rendering. If the directly-rendered images well mimic those fine details, their quality should get very close to that of EG3D. Meanwhile, since they are rendered from a continuous 3D representation, their 3D consistency across different views should be trivially maintained. This motivates us to design the 3D-to-2D imitation strategy, as depicted in Fig. 3. Specifically, we introduce a 3D super-resolution module \(\mathcal{S}^{3D}\) to generate residual tri-planes \(\mathbf{P}^{r}\) from the coarse tri-planes \(\mathbf{P}^{c}\) produced by the tri-plane generator \(\mathcal{E}\): \[\mathcal{S}^{3D}:\mathbf{P}^{c}\in\mathbb{R}^{3\times H^{c}\times W^{c}\times d_{f}}\rightarrow\mathbf{P}^{r}\in\mathbb{R}^{3\times H^{r}\times W^{r}\times d_{f}}. \tag{4}\] The \(\mathcal{S}^{3D}\) adopts several StyleGAN2-modulated convolutional layers conditioned on a latent code \(\mathbf{w}\) mapped from the random code \(\mathbf{z}\), similar to the 2D super-resolution module \(\mathcal{S}^{2D}\) in EG3D. The difference is that \(\mathcal{S}^{3D}\) conducts super-resolution on the triplane-based 3D representation instead of the rendered 2D feature map. In this way, we can generate a high-resolution 3D field for direct 3D rendering. Given the coarse and residual tri-planes (\(\mathbf{P}^{c}\) and \(\mathbf{P}^{r}\)), we obtain a more detailed intermediate feature \(\mathbf{f}=\mathbf{f}^{c}+\mathbf{f}^{r}\) for a 3D point \(\mathbf{x}\), and further obtain the high-resolution feature field by sending the intermediate feature into the MLP-based decoder \(\mathcal{M}\) following Eq. (2).
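To connect Eqs. (2)-(3) with an implementation, the sketch below shows a schematic tri-plane query followed by the alpha compositing used in volume rendering. It assumes PyTorch, dense plane tensors, and bilinear lookup via `grid_sample`; tensor shapes and helper names are illustrative assumptions and do not come from the released EG3D or Mimic3D code.

```python
# Schematic tri-plane query (Eq. 2) and volume rendering (Eq. 3), assuming PyTorch.
# Plane layout, shapes, and helper names are illustrative; this is not the released code.
import torch
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """planes: [3, C, H, W] for (xy, yz, zx); xyz: [N, 3] normalized to [-1, 1]. Returns [N, C]."""
    coords = torch.stack([xyz[:, [0, 1]], xyz[:, [1, 2]], xyz[:, [2, 0]]])      # [3, N, 2]
    feats = F.grid_sample(planes, coords.unsqueeze(2), mode="bilinear",
                          align_corners=False)                                   # [3, C, N, 1]
    return feats.squeeze(-1).sum(dim=0).transpose(0, 1)                          # sum of the three planes

def volume_render(colors, sigmas, deltas):
    """colors: [R, S, C]; sigmas, deltas: [R, S] for R rays with S samples each (Eq. 3)."""
    alpha = 1.0 - torch.exp(-sigmas * deltas)                                    # per-segment opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=1)                            # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)     # T_i
    weights = (trans * alpha).unsqueeze(-1)                                      # T_i * (1 - exp(-sigma_i * delta_i))
    return (weights * colors).sum(dim=1)                                         # composited feature/color per ray
```

In the full pipeline, the summed tri-plane feature is first passed through the small MLP \(\mathcal{M}\) of Eq. (2) to obtain \((\mathbf{c},\sigma)\) before compositing; the cumulative-product form of the transmittance used here is mathematically equivalent to the exponential-of-sums form in Eq. (3).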
The first three feature dimensions of the field derive the high-resolution NeRF for rendering the 3D-consistent fine image \(I^{3D}\) via Eq. (3). To ensure that \(I^{3D}\) contains reasonable geometry structure with rich texture details, we let it mimic the contents of \(I^{2D}\) generated by the 2D branch \(\mathcal{S}^{2D}\). For a pair of \(I^{3D}\) and \(I^{2D}\) synthesized with the same latent code \(\mathbf{z}\) and camera pose \(\mathbf{\theta}\), we enforce an imitation loss between them to guarantee their perceptual similarity: \[\mathcal{L}_{imitation}=\mathrm{LPIPS}(I^{3D},\mathrm{sg}(I^{2D})), \tag{5}\] where \(\mathrm{LPIPS}(\cdot,\cdot)\) is the perceptual loss defined in [51], and \(\mathrm{sg}\) denotes stopping gradient to avoid undesired influence of \(I^{3D}\) on the 2D branch. This process is very similar to a standard multiview reconstruction process. During training, \(I^{2D}\) sharing the same code \(\mathbf{z}\) are generated under different camera views from a statistical aspect, forming the multi-view supervision. The high-resolution NeRF from the 3D branch renders \(I^{3D}\) under the same camera views to compare with the multiview data for 3D reconstruction. Considering that \(I^{2D}\) are nearly 3D-consistent, they should help to learn a reasonable NeRF for 3D-consistent image rendering. Nevertheless, since \(I^{2D}\) are not strictly 3D-consistent, faithfully reconstructing their image contents leads to blurry results where the texture details across different views are averaged out. Therefore, we further introduce the non-saturating GAN loss with R1 regularization [22] between \(I^{3D}\) and real images \(\hat{I}\) to maintain the high-frequency details: \[\begin{split}\mathcal{L}_{adv}^{3D}&=\mathbb{E}_{\mathbf{z}\sim p_{z},\mathbf{\theta}\sim p_{\mathbf{\theta}}}[f(D^{3D}(G^{3D}(\mathbf{z},\mathbf{\theta})))]\\ &+\mathbb{E}_{\hat{I}\sim p_{real}}[f(-D^{3D}(\hat{I}))+\lambda\|\nabla D^{3D}(\hat{I})\|^{2}],\end{split} \tag{6}\] where \(f(u)=\log(1+\exp{(u)})\) is the Softplus function, \(G^{3D}\), including \(\{\mathcal{E},\mathcal{M},\mathcal{S}^{3D}\}\), is the 3D rendering branch of the generator, and \(D^{3D}\) is the corresponding discriminator. Figure 3: Overview of our framework. The 3D-to-2D imitation strategy is enforced to let the generator’s 3D branch mimic the results of its 2D branch, thus leading to image generation of high quality and strict 3D consistency. 3D-aware convolutions are also introduced to the tri-plane generator to enhance 3D representation learning, which further improves the image generation quality. An advantage of the above imitation learning is that we can render small patches (_i.e_. \(64\times 64\)) to compute Eq. (5) and Eq. (6), as shown in Fig. 3, with only minor influence on the final image quality. This largely reduces the memory cost during training and enables learning the 3D branch at high resolution (_e.g_. \(512\times 512\)). By contrast, solely applying an adversarial loss at patch level often leads to large quality drops, as shown in previous methods [37, 43] and Tab. 2.
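A condensed view of how Eqs. (5)-(6) act on a rendered patch is sketched below; the `lpips` package call, the placeholder discriminator, and equal loss weighting are our own assumptions, and only the generator-side adversarial term is shown.

```python
# Sketch of the patch-wise 3D-to-2D imitation objective (Eqs. 5-6), assuming PyTorch and
# the `lpips` package; images are assumed to be in [-1, 1], and the discriminator and loss
# weighting are placeholders rather than the paper's exact configuration.
import torch.nn.functional as F
import lpips

lpips_fn = lpips.LPIPS(net="vgg")   # perceptual distance; the backbone choice is an assumption

def imitation_step(img3d_patch, img2d_patch, disc3d):
    """img3d_patch: rendered by the 3D branch; img2d_patch: the same crop from the 2D branch."""
    # Eq. (5): perceptual imitation with a stop-gradient (detach) on the 2D-branch target.
    loss_imitation = lpips_fn(img3d_patch, img2d_patch.detach()).mean()
    # Generator-side non-saturating term paired with Eq. (6); the discriminator update
    # (softplus(D(fake)) + softplus(-D(real)) + R1 penalty) is performed separately.
    loss_adv = F.softplus(-disc3d(img3d_patch)).mean()
    return loss_imitation + loss_adv
```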
Finally, we apply an image-level adversarial loss to the 2D branch following EG3D to ensure that \(I^{2D}\), as the supervision for the 3D branch, are of high quality: \[\begin{split}\mathcal{L}^{2D}_{adv}&=\mathbb{E}_{\boldsymbol{z}\sim p_{z},\boldsymbol{\theta}\sim p_{\boldsymbol{\theta}}}[f(D^{2D}(G^{2D}(\boldsymbol{z},\boldsymbol{\theta})))]\\ &+\mathbb{E}_{\hat{I}\sim p_{real}}[f(-D^{2D}(\hat{I}))+\lambda\|\nabla D^{2D}(\hat{I})\|^{2}],\end{split} \tag{7}\] where \(G^{2D}\) is the 2D branch generator consisting of \(\{\mathcal{E},\mathcal{M},\mathcal{S}^{2D}\}\), and \(D^{2D}\) is the corresponding discriminator. The same dual discrimination is adopted as done in EG3D. Overall, the training objective is \[\mathcal{L}_{total}=\mathcal{L}_{imitation}+\mathcal{L}^{3D}_{adv}+\mathcal{L}^{2D}_{adv}. \tag{8}\] In practice, we first learn the 2D branch via \(\mathcal{L}^{2D}_{adv}\) to obtain reasonable synthesized images \(I^{2D}\), then leverage \(\mathcal{L}_{total}\) to simultaneously learn the 2D and 3D branches for high-quality and 3D-consistent image synthesis. ### 3D-Aware Tri-plane Generator As depicted in Sec. 3.2, the tri-plane generator \(\mathcal{E}\) is responsible for synthesizing the coarse tri-planes \(\mathbf{P}^{c}\) shared by both the 2D and 3D branches, which is an important component that affects the final image generation quality. However, in EG3D, \(\mathcal{E}\) takes a StyleGAN2 architecture originally designed for 2D generative tasks. As shown in Fig. 4(a), the original tri-plane generator only contains the main stream and the output stream. The tri-planes are obtained from latent feature maps in the main stream via 2D convolutions (_i.e_. \(toRGB\) layers), and the latent feature maps are themselves produced by a series of 2D synthesis blocks. Consequently, the latent feature maps are forced to learn 3D-unaligned features of the three orthogonal planes, and the latter also lack feature communications with each other. Inspired by a recent 3D diffusion model [46], we introduce 3D-aware convolutions into our tri-plane generator \(\mathcal{E}\) to enhance feature communications between 3D-associated positions across different planes, for better tri-plane generation. Specifically, as illustrated in Fig. 4(a), we add an extra 3D-aware stream upon the original output stream after each \(toRGB\) layer at different resolutions. At each resolution level \(k\), the corresponding tri-planes \(\mathbf{P}_{k}=[\mathbf{P}_{k,xy},\mathbf{P}_{k,yz},\mathbf{P}_{k,zx}]\) are summed with the tri-planes produced by the original output stream, and further sent into a 3D-aware block to produce tri-plane features for the next level. The 3D-aware block conducts similar operations on each of the three planes. For brevity, we omit the subscript \(k\) here and take \(\mathbf{P}_{xy}\) as an example to illustrate the operation process. As shown in Fig. 4(b), to align \(\mathbf{P}_{yz}\) and \(\mathbf{P}_{zx}\) towards \(\mathbf{P}_{xy}\), we first perform global pooling along the \(z\) axis of the former two to obtain \(z\)-squeezed feature vectors. These vectors are then repeated along the \(z\) dimension to restore the original spatial size, denoted as \(\mathbf{P}_{yr}\) and \(\mathbf{P}_{rx}\).
In this manner, the obtained \(\mathbf{P}_{yr}\) and \(\mathbf{P}_{rx}\) are aligned with \(\mathbf{P}_{xy}\) from a 3D perspective, _i.e_., a 2D position \(uv\) on \(\mathbf{P}_{xy}\) is responsible for features in the region \(uvz,z\in[z_{min},z_{max}]\) in the 3D space, meanwhile the same \(uv\) position on \(\mathbf{P}_{yr}\) and \(\mathbf{P}_{rx}\) also associates with the features in this 3D region. As a result, we can simply concatenate them along the channel dimension as \([\mathbf{P}_{xy},\mathbf{P}_{yr},\mathbf{P}_{rx}]\), and perform modulated 2D convolution [19] on it. The 2D convolution aggregates the 3D-associated features to produce the next-level \(\mathbf{P}_{xy}\), leading to better feature communications across the planes. \(\mathbf{P}_{yz}\) and \(\mathbf{P}_{zx}\) can be processed similarly. Note that in [46], the 3D-aware convolution is applied in all layers in a U-Net structure. However, in our scenario, leveraging 3D-aware convolution for all layers, especially the main stream, introduces unaffordable memory cost during training, as it would produce multiple auxiliary tensors and triple the channel dimension for each processed latent feature map, as shown in Fig. 4(b). Compared to the latent feature maps in the main stream, the tri-planes after each output layer contain far fewer channels and are thus more memory-friendly for adopting the 3D-aware convolution. Figure 4: (a) Structure of our 3D-aware tri-plane generator. (b) Operations of the 3D-aware block on the \(xy\) plane. Empirically, our proposed 3D-aware stream helps to learn more reasonable tri-planes and improves the final image generation quality, with only a minor increase in the total memory consumption (see Sec. 4.3). ## 4 Experiments Implementation details.We train our method on two real-world datasets: FFHQ [18] and AFHQ-v2 Cats [5], which consist of 70K human face images of \(1024^{2}\) resolution and 5.5K cat face images of \(512^{2}\) resolution, respectively. We follow the data pre-processing of EG3D [3] to crop and resize the images to \(256^{2}\) or \(512^{2}\) resolution. Experiments are conducted on 8 NVIDIA Tesla A100 GPUs with 40GB memory, following the training configuration of EG3D. For FFHQ, the training process takes around 8 days, where learning the 2D branch takes 5 days and jointly training the whole framework takes an additional 3 days. For AFHQ-v2, we finetune the 2D branch initially trained on FFHQ for 1 day, then jointly train the whole framework for an extra 3 days. Adaptive data augmentation [16] is applied to AFHQ-v2 to facilitate training with limited data. See the _suppl. material_ for more details. ### Visual Results Figure 2 shows the multiview images generated by our 3D branch generator. It can produce high-quality images with fine details at a resolution of \(512^{2}\). Moreover, the images are strictly 3D-consistent across different views, since they are obtained by directly rendering the generated high-resolution NeRF. More results are in Fig. 5, 6, and the _suppl. material_. ### Comparison with Prior Arts Baselines.We compare our method with existing 3D-aware GANs, including methods leveraging 2D super-resolution: StyleSDF [28], VolumeGAN [49], StyleNeRF [9], and EG3D [3]; and methods with direct 3D rendering: GRAM [6], GRAM-HD [47], GMPI [54], EpiGRAF [43], and VoxGRAF [38]. Qualitative comparison.Figure 5 shows the visual comparison between our method and EG3D. Our generated images via direct rendering have comparable quality with those generated by EG3D via 2D super-resolution.
We further visualize the 3D geometry and the spatiotemporal texture images [47] of the two methods. The geometry is extracted via Marching Cubes [21] on the density field at \(512^{3}\) resolution. The spatiotemporal textures are obtained by stacking the pixels of a fixed line segment under continuous camera change, very similar to the Epipolar Line Images [1], where smoothly tilted strips indicate better 3D consistency. As shown, our geometries contain finer details, since we directly learn the NeRF of a subject at high resolution. Our spatiotemporal textures are also more reasonable with fewer twisted patterns, thanks to the direct 3D rendering for image synthesis instead of using a black-box 2D super-resolution module. Figure 5: Comparison between our method and EG3D on FFHQ at \(512\times 512\) resolution. Our method generates images with comparable quality to those of EG3D, while producing 3D geometries with finer details and multiview sequences with better 3D-consistency. Figure 6: Comparison between our method and other 3D rendering baselines on FFHQ at \(512\times 512\) resolution. **Best viewed with zoom-in.** Figure 6 compares our method with other 3D baselines on FFHQ at \(512^{2}\) resolution. Upon visual inspection, our 3D branch produces images of higher fidelity compared to existing methods leveraging direct 3D rendering. Video results can be found in the _suppl. material_. Quantitative comparison.Table 1 and Fig. 1 show the quantitative results of different methods in terms of image generation quality and 3D consistency. For image generation quality, we calculate the Frechet Inception Distance (FID) [13] between 50K generated images and all available real images in the training set. For 3D consistency, we follow GRAM-HD [47] to generate multiview images of 50 random subjects and train the multiview reconstruction method NeuS [45] on each of them. We report the average PSNR and SSIM scores between our generated multiview images and the re-rendered images of NeuS (denoted as PSNR\({}_{mv}\) and SSIM\({}_{mv}\)). Theoretically, better 3D consistency facilitates the 3D reconstruction process of NeuS, thus leading to higher PSNR and SSIM. As shown, our 2D branch generator demonstrates better results compared to EG3D in all metrics across different datasets, thanks to our 3D-aware stream in the tri-plane generator. Moreover, with the 3D-to-2D imitation strategy, our 3D branch generator largely improves the image generation quality among methods using direct 3D rendering, while maintaining competitive 3D consistency. Its image quality even surpasses most of the methods with 2D super-resolution and comes very close to that of EG3D. ### Ablation Study We conduct ablation studies to validate the efficacy of our proposed 3D-to-2D imitation and the 3D-aware tri-plane generator. For efficiency, all experiments are conducted on the FFHQ dataset at \(256^{2}\) resolution. 3D-to-2D imitation strategy.As shown in Tab. 2 and Fig. 7, we start from a generator without using the 3D-to-2D imitation and the 3D super-resolution module \(\mathcal{S}^{3D}\) (setting A), by directly rendering the coarse tri-planes \(\mathbf{P}^{c}\) for image synthesis. The images rendered in this way are blurry and lack fine details, leading to a high FID score of \(30.6\). Naively introducing the imitation loss (setting B) to improve the rendered images of \(\mathbf{P}^{c}\) has minor influence, as the capacity of the coarse tri-planes is limited.
Further incorporating the 3D super-resolution module (setting C) effectively releases the potential of the imitation loss and largely improves the image generation quality in terms of FID. However, the rendered images still lack rich details, limited by the 3D-inconsistent 2D-branch supervision. Then, if the imitation loss is replaced with the adversarial loss (setting D), the image quality decreases significantly. This is because we only render small image patches to compute the corresponding losses for memory reasons. Under this circumstance, the adversarial loss is less stable compared to the imitation loss, which is a perceptual-level reconstruction loss. This reveals the advantage of our imitation strategy, which could be extended to higher resolution via patch-wise optimization while maintaining a good image generation quality. Finally, leveraging all three components (setting E) yields the best result, where the imitation loss keeps the overall structure reasonable and the adversarial loss helps with fine-detail learning. \begin{table} \begin{tabular}{c c|c c c|c c c|c|c} \hline \hline \multirow{2}{*}{} & \multirow{2}{*}{Method} & \multicolumn{3}{c|}{FFHQ256} & \multicolumn{3}{c|}{FFHQ512} & \multicolumn{1}{c|}{CATS256} & \multicolumn{1}{c}{CATS512} \\ & & FID \(\downarrow\) & PSNR\({}_{mv}\uparrow\) & SSIM\({}_{mv}\uparrow\) & FID \(\downarrow\) & PSNR\({}_{mv}\uparrow\) & SSIM\({}_{mv}\uparrow\) & FID \(\downarrow\) & FID \(\downarrow\) \\ \hline \multirow{4}{*}{\begin{tabular}{c} 2D SR \\ \end{tabular} } & StyleSDF [28] & 11.5 & - & - & 11.2 & - & - & - & 7.91 \\ & VolumeGAN [49] & 9.10 & 33.6 & 0.926 & - & - & - & - & - \\ & StyleNeRF [9] & 8.00 & 31.9 & 0.915 & 7.80 & 30.9 & 0.843 & - & - \\ & EG3D [3] & 4.80 & 34.0 & 0.928 & 4.70 & 32.4 & 0.861 & 3.88 & 2.77 \\ & Ours (2D branch) & **3.91** & **35.7** & **0.938** & **4.14** & **33.3** & **0.891** & **3.41** & **2.72** \\ \hline \multirow{4}{*}{ \begin{tabular}{c} 3D render \\ \end{tabular} } & GRAM [6] & 13.8 & 38.0 & 0.966 & - & - & - & 13.4 & - \\ & GRAM-HD [47] & 10.4 & 36.5 & 0.955 & - & - & - & - & 7.67 \\ & GMPI [54] & 11.4 & **39.8** & **0.977** & 8.29 & **39.0** & **0.961** & - & 7.79 \\ & EpiGRAF [43] & 9.71 & - & - & 9.92 & 37.3 & 0.949 & 6.93 & - \\ & VoxGRAF [38] & 9.60 & 37.2 & 0.960 & - & - & - & 9.60 & - \\ & Ours (3D branch) & **5.14** & 39.3 & 0.974 & **5.37** & 37.8 & 0.955 & **4.14** & **4.29** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison on image generation quality and 3D consistency among different 3D-aware GANs. \begin{table} \begin{tabular}{c|c c c|c} \hline \hline Label & \(\mathcal{L}_{imitation}\) & \(\mathcal{S}^{3D}\) & \(\mathcal{L}_{adv}^{3D}\) & FID (3D branch) \\ \hline (A) & & & & 30.6 \\ (B) & ✓ & & & 29.9 \\ (C) & ✓ & ✓ & & 9.29 \\ (D) & & ✓ & ✓ & 22.8 \\ (E) & ✓ & ✓ & ✓ & **5.14** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on 3D-to-2D imitation strategy. Figure 7: Generated images under different learning strategies. The labels are consistent with Tab. 2. 3D-aware tri-plane generator.Table 3 shows the ablation study on the 3D-aware tri-plane generator. We compare our design with two alternatives, as well as with the original EG3D structure without 3D-aware convolutions. We report the parameter size of the tri-plane generators, the inference memory cost to generate the coarse tri-planes, as well as the final image generation quality in terms of FID.
In the first alternative, we remove our 3D-aware stream and leverage 3D-aware convolutions for the latent feature maps in the main stream, namely _3D-aware latent_. Since the main-stream feature maps have relatively more feature channels, and the 3D-aware convolution requires concatenating two additional tensors with the same size as the input tensor, this design increases the parameter size and memory consumption significantly, and raises the out-of-memory issue during training. In the second alternative, namely _3D-aware tri-plane_, we directly apply 3D-aware convolutions in the output stream, by inserting them after the upsampling operations at each resolution, instead of using the additional 3D-aware stream. This strategy leads to an improvement of the image generation quality of the 2D branch, and largely reduces the parameter size and memory cost compared to the first design. Finally, our 3D-aware stream design further improves the image generation quality without introducing extra parameters or memory cost compared to the second alternative. Therefore, we adopt it as our final 3D-aware tri-plane generator for 3D-to-2D imitation. It effectively lowers the FID score of both the 2D and 3D branches compared to the original structure without 3D-aware convolutions, with only a minor increase of the parameter size and memory cost. Figure 8 further shows the synthesized tri-planes, where we visualize the L2 norm of each spatial location on the three orthogonal planes. Our method leveraging the 3D-aware stream produces more informative tri-planes. The generated planes of the side views better depict the characters of different instances (_e.g_., see the difference of the profiles on the \(yz\) planes). Our frontal planes (\(xy\) planes) also demonstrate clearer head silhouettes compared to those without using the 3D-aware convolutions. ## 5 Conclusions We presented a novel learning strategy for 3D-aware GANs to achieve image synthesis of high quality and strict 3D consistency. The core idea is to make the images synthesized by the generator's 3D rendering branch mimic those generated by its 2D super-resolution branch. We also introduced 3D-aware convolutions to the generator to further improve the image generation quality. With the above strategies, our method largely improves the image quality among methods using direct 3D rendering, which we believe enables a new way for more realistic 3D generation. Limitation and future works.Our method has several limitations. The image generation quality of its 3D branch still lags behind that of the 2D branch. Certain generated 3D structures such as hairs and cat whiskers are stuck to the geometry surfaces instead of correctly floating in the volumetric space. The 3D-to-2D imitation strategy also introduces extra training time and memory costs compared to only learning the 2D branch. We expect more effective learning strategies and more advanced 3D representations to alleviate these problems. Ethics consideration.The goal of this paper is to generate images of virtual subjects. It is not intended for creating misleading or deceptive content of real people, and we do not condone any such harmful behavior. Figure 8: Visualization of the generated tri-planes with or w/o 3D-aware convolutions. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Method & FID (2D) & FID (3D) & \#Param & Mem.
\\ \hline w/o 3D-aware & 4.80 & 6.71 & 29.0M & 2.3G \\ \hline 3D-aware latent & OOM & OOM & 111.7M & 11.6G \\ 3D-aware tri-plane & 4.14 & - & 32.6M & 2.4G \\ 3D-aware stream (Ours) & **3.91** & **5.14** & 32.6M & 2.4G \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on designs of 3D-aware tri-plane generator. The FID scores are from 2D or 3D branch; #Param only considers the tri-plane generator \(\mathcal{E}\) and Mem. indicates the GPU memory cost for generating the coarse tri-planes.
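As a code-level companion to the generator ablation in Table 3, the sketch below outlines the 3D-aware block of Sec. 3.3 for the \(xy\) plane. It assumes PyTorch, uses mean pooling for the global pooling step, substitutes a plain convolution for the StyleGAN2-modulated one, and adopts our own axis conventions; it is not the released implementation.

```python
# Sketch of the 3D-aware block for the xy plane (Sec. 3.3), assuming PyTorch.
# Mean pooling stands in for "global pooling", a plain Conv2d replaces the
# StyleGAN2-modulated convolution, and the axis conventions below are our own.
import torch
import torch.nn as nn

class TriplaneAwareBlockXY(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1)

    def forward(self, p_xy, p_yz, p_zx):
        # Assumed layouts: p_xy=[B,C,X,Y], p_yz=[B,C,Y,Z], p_zx=[B,C,Z,X].
        X, Y = p_xy.shape[2], p_xy.shape[3]
        # Squeeze the z axis of the side planes, then broadcast back so that position (x, y)
        # on every tensor refers to the same 3D column of points {(x, y, z)}.
        p_yr = p_yz.mean(dim=3, keepdim=True)                      # [B,C,Y,1], depends only on y
        p_yr = p_yr.permute(0, 1, 3, 2).expand(-1, -1, X, -1)      # -> [B,C,X,Y]
        p_rx = p_zx.mean(dim=2, keepdim=True)                      # [B,C,1,X], depends only on x
        p_rx = p_rx.permute(0, 1, 3, 2).expand(-1, -1, -1, Y)      # -> [B,C,X,Y]
        fused = torch.cat([p_xy, p_yr, p_rx], dim=1)               # channel-wise concatenation
        return self.conv(fused)                                    # aggregate 3D-associated features
```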
2305.15791
Residual Dynamics Learning for Trajectory Tracking for Multi-rotor Aerial Vehicles
This paper presents a technique to cope with the gap between high-level planning, e.g., reference trajectory tracking, and low-level controlling using a learning-based method in the plan-based control paradigm. The technique improves the smoothness of maneuvering through cluttered environments, especially targeting low-speed velocity profiles. In such a profile, external aerodynamic effects that are applied on the quadrotor can be neglected. Hence, we used a simplified motion model to represent the motion of the quadrotor when formulating the Nonlinear Model Predictive Control (NMPC)-based local planner. However, the simplified motion model causes residual dynamics between the high-level planner and the low-level controller. The Sparse Gaussian Process Regression-based technique is proposed to reduce these residual dynamics. The proposed technique is compared with Data-Driven MPC. The comparison results yield that an augmented residual dynamics model-based planner helps to reduce the nominal model error by a factor of 2 on average. Further, we compared the proposed complete framework with four other approaches. The proposed approach outperformed the others in terms of tracking the reference trajectory without colliding with obstacles with less flight time without losing computational efficiency.
Geesara Kulathunga, Hany Hamed, Alexandr Klimchik
2023-05-25T07:14:59Z
http://arxiv.org/abs/2305.15791v2
# Residual Dynamics Learning for Trajectory Tracking for Multi-rotor Aerial Vehicles ###### Abstract This paper presents a technique to cope with the gap between high-level planning, e.g., reference trajectory tracking, and low-level controlling using a learning-based method in the plan-based control paradigm. The technique improves the smoothness of maneuvering through cluttered environments, especially targeting low-speed velocity profiles. In such a profile, external aerodynamic effects that are applied on the quadrotor can be neglected. Hence, we used a simplified motion model to represent the motion of the quadrotor when formulating the Nonlinear Model Predictive Control (NMPC)-based local planner. However, the simplified motion model causes residual dynamics between the high-level planner and the low-level controller. The Sparse Gaussian Process Regression-based technique is proposed to reduce these residual dynamics. The proposed technique is compared with Data-Driven MPC. The comparison results yield that an augmented residual dynamics model-based planner helps to reduce the nominal model error by a factor of 2 on average. Further, we compared the proposed complete framework with four other approaches. The proposed approach outperformed the others in terms of tracking the reference trajectory without colliding with obstacles with less flight time without losing computational efficiency. ## 1 Introduction Accurate reference trajectory tracking in cluttered environments (Kulathunga et al., 2022b) is still a challenging research problem. Sensing capabilities, e.g., how far the sensor can observe the environment, and computational and power capabilities are the main constraints faced when solving such research problems. Therefore, the quadrotor's full maneuverability cannot be exploited due to those constraints. On the contrary, when the problem formulation uses a simplified motion model instead of a complex dynamical model, a residual dynamics error arises between the actual and the desired control commands. For controlling a quadrotor through high-speed agile maneuvers, it is necessary to consider external aerodynamic effects, e.g., wind, that are applied on the quadrotor in addition to other constraints: dynamics and obstacle constraints. However, several studies related to agile maneuvers (Neunert et al., 2016), (Zhou et al., 2021), (Rojas-Perez and Martinez-Carranza, 2021), (Song and Scaramuzza, 2020), (Honig et al., 2018) do not consider such effects, which are very difficult to incorporate when modelling system dynamics, except by approximating the quadrotor dynamics with a simplified motion model (Torrente et al., 2021). Even if those effects are incorporated, the necessary external aerodynamic effects are difficult to obtain due to high computational demands that hinder real-time performance. In other words, model complexity is constrained by the computational capabilities of the onboard controller. Nonetheless, such aerodynamic effects have a negligible impact for low-speed maneuvers, since dynamic effects can be neglected and only kinematic modelling considered, especially in the plan-based control paradigm. Such a paradigm consists of
2305.04844
SR+Codec: a Benchmark of Super-Resolution for Video Compression Bitrate Reduction
In recent years, there has been significant interest in Super-Resolution (SR), which focuses on generating a high-resolution image from a low-resolution input. Deep learning-based methods for super-resolution have been particularly popular and have shown impressive results on various benchmarks. However, research indicates that these methods may not perform as well on strongly compressed videos. We developed a super-resolution benchmark to analyze SR's capacity to upscale compressed videos. Our dataset employed video codecs based on five widely-used compression standards: H.264, H.265, H.266, AV1, and AVS3. We assessed 19 popular SR models using our benchmark and evaluated their ability to restore details and their susceptibility to compression artifacts. To get an accurate perceptual ranking of SR models, we conducted a crowd-sourced side-by-side comparison of their outputs. We found that some SR models, combined with compression, allow us to reduce the video bitrate without significant loss of quality. We also compared a range of image and video quality metrics with subjective scores to evaluate their accuracy on super-resolved compressed videos. The benchmark is publicly available at https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html
Evgeney Bogatyrev, Ivan Molodetskikh, Dmitriy Vatolin
2023-05-08T16:42:55Z
http://arxiv.org/abs/2305.04844v2
# Compressed Video Quality Assessment for Super-Resolution: a Benchmark and a Quality Metric ###### Abstract We developed a super-resolution (SR) benchmark to analyze SR's capacity to upscale compressed videos. Our dataset employed video codecs based on five compression standards: H.264, H.265, H.266, AV1, and AVS3. We assessed 17 state-of-the-art SR models using our benchmark and evaluated their ability to preserve scene context and their susceptibility to compression artifacts. To get an accurate perceptual ranking of SR models, we conducted a crowd-sourced side-by-side comparison of their outputs. The benchmark is publicly available at [https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html](https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html). We also analyzed benchmark results and developed an objective-quality-assessment metric based on the current best-performing objective metrics. Our metric outperforms others, according to Spearman correlation with subjective scores for compressed video upscaling. It is publicly available at [https://github.com/EvgeneyBogatyrev/super-resolution-metric](https://github.com/EvgeneyBogatyrev/super-resolution-metric). video processing, super-resolution, dataset, video quality metric ## I Introduction Super-resolution (SR) has garnered extensive research in recent years, with new articles appearing monthly. Some state-of-the-art SR methods can restore details that are unclear in the original (lower-resolution) clip while working in real time [30]. Neighboring frames can help fill gaps when upscaling, because small movements caused by camera tremor may provide enough information to accurately increase the resolution, as demonstrated using a Google Pixel 3 camera [29]. Video compression that can reduce bandwidth consumption with minimal changes to visual quality is more critical than ever. Some recent codec [7, 12] downscale a video before compression to cut the bitrate and then upscale it to its original resolution using SR. This approach decreases bandwidth consumption and preserves the video's perceptual quality. Not all SR methods are suitable for downsample-based video compression, however, since few real-time SR models can generate acceptable-quality video. To analyze which SR models work better with each compression standard, and to help researchers find the best model for their codec, we present our Super-Resolution for Video Compression benchmark. To develop it we selected 17 SR models with different architectures and assessed their compressed-video-restoration capabilities on our dataset, which includes videos compressed using five codecs. Our effort employed objective metrics and subjective evaluation to assess model quality. In addition, analysis of the correlation between objective-quality metrics and subjective scores aided us in developing our objective metric for analyzing SR's capacity to upscale compressed videos. Our main contributions are as follows: 1. We developed a comprehensive SR benchmark to test the ability of SR models to upscale and restore videos compressed by video codecs of different standards. The benchmark is publicly available at [https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html](https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html). 2. We provided an analysis of 6 video quality metrics by their correlation with subjective scores on our dataset. 3. 
We developed a new objective quality metric for assessing compressed video quality that has the highest correlation with subjective scores among the evaluated metrics. The metric is publicly available at [https://github.com/EvgeneyBogatyrev/super-resolution-metric](https://github.com/EvgeneyBogatyrev/super-resolution-metric). ## II Related Work In this section we provide an overview of existing SR methods and video-quality metrics. ### _Super-Resolution Methods_ SR has received extensive attention since neural networks were first applied to this area, resulting in many approaches. Some SR algorithms rely on the temporal redundancy of video frames, allowing them to restore a single high-resolution frame from a series of low-resolution ones. Fig. 1: Example videos from the dataset. The dataset includes real-world sequences, animation, and clips from games. RBPN [8] integrates spatial and temporal contexts from a video using a recurrent encoder-decoder module. COMISR [16] upscales compressed videos; it employs bidirectional recurrent warping for detail-preserving flow estimation, and it applies Laplacian enhancement. BasicVSR++ [6] also adopts bidirectional propagation and spatial alignment. VRT [17] extracts video features, upscales them, and then reconstructs HQ frames on the basis of these features using a transformer network. Generative adversarial networks (GANs) serve widely in deep learning and especially in SR. ESRGAN [24] modifies the SRGAN architecture by adding a residual-in-residual dense block as well as improved adversarial and perceptual losses. Real-ESRGAN [23] enhances this approach by incorporating high-order degradation modeling to simulate real-world degradation. Given the limited number of SR models designed to work with compressed video, assessing the performance of existing SR models on compressed video remains a critical task. ### _Super-Resolution Quality Metrics_ Peak signal-to-noise ratio (PSNR) uses mean squared error and maximum pixel values. Structural similarity index measure (SSIM) [26] calculates average, variance, and covariance pixel values for image windows. Both still commonly appear in super-resolution papers despite their simple techniques and poor correlation with subjective scores [35]. Deep-learning approaches to video-quality metrics are growing in popularity. VMAF [2] is an advanced, objective full-reference video quality assessment metric developed by Netflix that utilizes SVM-based regression to accurately and consistently predict the perceived video quality score based on various video features. LPIPS [32] is a metric that uses deep features of various neural networks to compare images. It additionally serves as a loss function for some SR models [3]. The ERQA metric [13] assesses images in terms of detail restoration. To do so, it detects object edges and matches them with their counterparts in the reference image. ## III Benchmark In this section we present our Super-Resolution for Video Compression benchmark, including a description of its method and an overview of our evaluation results for SR models. Our benchmark is publicly available at https://videoprocessing.ai/benchmarks/super-resolution-for-video-compression.html. We welcome SR researchers to contribute to it by submitting SR models. ### _Dataset Preparation_ To ensure the benchmark dataset is diverse enough to test various aspects of SR models, we collected videos from multiple sources: * **Vimeo**: We gathered 50 FullHD sequences, including both real-world footage and animation.
We split them into scenes using the Scene Change Detector plugin for VQMT1. Footnote 1: [https://www.compression.ru/video/quality_measure/video_measurement_tool.html](https://www.compression.ru/video/quality_measure/video_measurement_tool.html) * **Camera**: We shot several videos using a Canon EOS 7D. The settings aimed to minimize blur and achieve the appropriate brightness -- the ISO was 4000 and the aperture 400. Those settings provided clear ground-truth (GT) videos without blur or noise. We shot 20 indoor videos and 30 outdoor videos. The indoor ones consist of synthetically crafted scenes containing objects from everyday life. Each scene includes either moving objects or parallel camera motion. * **Games**: We recorded 20 clips from various 2D and 3D videogames. We then obtained the following features for each video: Google SI/TI features [25], frames per second (FPS), colorfulness [9], and the maximal number of faces2 throughout the video. On the basis of these features we separated all videos into nine clusters using K-Means clustering and selected one video from each cluster. We refer to these nine selections as _source videos_. A preview of them appears in Fig. 1. Footnote 2: [https://github.com/ageitgey/face_recognition](https://github.com/ageitgey/face_recognition) ### _Super-Resolution Models_ To select SR models for evaluation we used SR benchmarks that target two tasks: detail restoration3 and perceptual-quality improvement4. We excluded SR methods that were similar to one another, as indicated by similar metric values. Footnote 3: [https://videoprocessing.ai/benchmarks/video-super-resolution.html](https://videoprocessing.ai/benchmarks/video-super-resolution.html) Footnote 4: [https://videoprocessing.ai/benchmarks/video-upscalers.html](https://videoprocessing.ai/benchmarks/video-upscalers.html) We selected the following 17 methods: BasicVSR++ [6], COMISR [16], DBVSR [34], EGVSR [5], LGFN [21], RBPN [8], Real-ESRGAN [23], RealSR [11], RSDN [10], SOF-VSR-BD [22], SOF-VSR-BI [22], SwinIR [18], TMNet [31], VRT [17], ahq-11 [1], amq-12 [1], and bicubic interpolation. ### _Objective-Quality Estimation_ First, we downscaled the source video to \(480\times 270\) resolution using FFmpeg with the flags=bicubic option. We then compressed the low-resolution video using each of five video codecs with six target bitrates: 0.1, 0.3, 0.6, 1.0, 2.0, and 4.0 Mbps. Fig. 2: Distribution of Google SI/TI [25] features for videos we considered when creating our dataset. Chosen videos appear in orange, others in blue. We chose these bitrates to be relatively low and to form a logarithmic curve. Our selection of video codecs included x264, x265, aomenc, vvenc [28], and uavs3e5. All employed the _medium_ preset during compression. Compressed videos underwent transcoding to PNG sequences using FFmpeg and served as inputs to an SR model. We applied image SR models to each frame individually; video SR models received the path to the directory containing frames in the correct order. We tested a 4x upscale using our benchmark, but some SR models can only handle 2x. In this latter case, we applied the model twice. Footnote 5: [https://github.com/uavs3/uavs3e](https://github.com/uavs3/uavs3e) After super-resolving videos, we calculated the following objective-video-quality metrics on the results: PSNR, MS-SSIM [26], VMAF [2], LPIPS [32], and ERQA [13].
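The per-sequence processing described in this subsection can be driven by a small script such as the sketch below; libx264 stands in for all five codecs, and the exact flags and file naming are illustrative assumptions rather than the benchmark's published configuration.

```python
# Sketch of the downscale -> compress -> decode-to-PNG preparation described above.
# libx264 stands in for all five codecs; flags, bitrate handling, and file naming are
# illustrative assumptions, not the benchmark's published configuration.
import subprocess
from pathlib import Path

BITRATES_KBPS = [100, 300, 600, 1000, 2000, 4000]   # the 0.1-4.0 Mbps targets

def prepare_sequence(src: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    low_res = out_dir / "lr.mp4"
    # Bicubic downscale to 480x270, as in the protocol above.
    subprocess.run(["ffmpeg", "-y", "-i", str(src),
                    "-vf", "scale=480:270:flags=bicubic", str(low_res)], check=True)
    for kbps in BITRATES_KBPS:
        encoded = out_dir / f"lr_{kbps}k.mp4"
        frames = out_dir / f"frames_{kbps}k"
        frames.mkdir(exist_ok=True)
        # Compress at the target bitrate with the medium preset (H.264 shown here).
        subprocess.run(["ffmpeg", "-y", "-i", str(low_res), "-c:v", "libx264",
                        "-b:v", f"{kbps}k", "-preset", "medium", str(encoded)], check=True)
        # Transcode the compressed clip to a PNG sequence for the SR models.
        subprocess.run(["ffmpeg", "-y", "-i", str(encoded),
                        str(frames / "%05d.png")], check=True)
```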
We ranked the SR methods by calculating BSQ-rate (bitrate-for-the-same-quality rate) [33] for each SR+codec pair relative to base codec performance, where the base codec is the one we used to compress low-resolution video. ### _Subjective Comparison_ To subjectively rank SR models, we conducted a crowdsourced comparison through Subjectify.us service. Because detail loss and compression artifacts can be difficult to notice in a full frame, the subjective evaluation employed crops. First, we generated saliency maps for each source video using a method proposed in [14]. Second, we averaged the saliency maps over all frames and applied a Gaussian-blur kernel to the result in order to determine the video's most salient region. Third, we took distorted videos from the benchmark dataset, which were compressed to three target bitrates (0.6, 1.0, and 2.0 Mbps), and cut one \(480\times 270\) crop from each one, with the most salient area at the center of the crop. We chose these three bitrates because they represent the logarithmic structure of an RD curve. Our crop resolution is guaranteed to fit the screens of subjective-comparison participants. We evaluated objective metrics on these crops to determine the correlation with the subjective scores. We split our comparison into five sections by codec, using only the 10 best SR models as determined by the LPIPS value. During the experiment we showed each participant a pair of videos from two random SR models and asked them to choose the video that looks more realistic and has fewer compression artifacts ("indistinguishable" is also an option). Every video pair was viewed by 15 participants. Each participant compared 25 pairs total. Among the 25 questions were three verification ones, which had obvious predefined answers. We excluded the results from any participant who failed to correctly answer one or more of the verification questions. A total of 5,662 people participated in our subjective evaluation. We excluded the results from 265 of them because they failed to correctly answer verification questions. Our calculation of the final subjective scores, using the Bradley-Terry model [4], employed the remaining 120,316 responses. ### _Benchmark Results_ The results of our subjective evaluation of each SR+codec pair appears in Tab. I. As the table shows, the best SR model differs by codec, proving that no single SR model can handle distortion from all compression standards with equal effectiveness. The "No SR" method applies the video codec to the source video without downscaling or super-resolving. This method serves as a reference during the BSQ-rate calculation. We see that it exhibits the best results for the aomenc codec, because AV1-based codecs have a mode that encodes frames at low resolution and applies an upsampler when decoding. This mode is normally used at low bitrate, so it is nearly identical to our evaluation method. Tab. II lists the ranking of SR+codec pairs for "Restuarant" video. It shows the top 10 on the basis of subjective scores. The Pearson (PLCC) and Spearman (SRCC) correlations of objective metrics with subjective score appear in Tab. III. ## IV Quality Metric Judging by the subjective benchmark comparison (Tab. III), existing video-quality metrics correlate poorly with subjective scores and are therefore unsuitable for assessing the results of downscale-based video coding. ### _Dataset_ We expanded our benchmark dataset so we could train the metric. When selecting videos we used the same method described in Sec. 
III-A, but we collected 20 new ones. Fig. 2 shows the distribution of SI/TI features in the dataset. We selected four SR methods (RealSR, Real-ESRGAN, COMISR, and BasicVSR++) as well as three video codecs (x264, x265, and aomenc) to generate distorted videos. We then downscaled the source videos and compressed them using each codec at two target bitrates: 0.5 Mbps and 2.0 Mbps. The next step was to apply each SR method to each video, thereby producing the dataset of distorted videos. We also compressed the source videos without downscaling and added the results to our dataset. We conducted a subjective comparison like the one in Sec. III-D. Because subjective scores differ in range from one video to the next, we rescaled them to range from 0 to 1 in each video. ### _Proposed Approach_ Although individual metrics perform poorly on this task, MDTVSFA, ERQA, and LPIPS have the highest correlation among them. We decided to combine these metrics to boost the correlation. We calculated MDTVSFA, ERQA, and LPIPS values over our distorted dataset and then added ERQAxMDTVSFA and ERQAxLPIPS features, which result from multiplying the ERQA value by the MDTVSFA and LPIPS values, respectively. We tried other metrics such as NIQE [36] and SSIM [26] in combination with ERQA, but they showed worse results. To generalize the information about the video, we also calculated other features: Google SI/TI [25], colorfulness [9], and bitrate. Our approach used min-max normalization to normalize the data before training. We then divided all videos in the distorted dataset using threefold cross-validation. To train the metric we selected the SVR model with a linear kernel from Scikit-Learn 1.2.1. Our choice of this model was because it is easy to train and has produced better results than other regression models. To optimize the metric's run time we reduced the number of features. We iterated through each pair of features and trained the metric without them. Our approach used the same three splits, and for each pair it involved calculating the worst SRCC value for those splits. The metric showed the highest correlation without MDTVSFA and bitrate features. Removing other features caused much higher correlation losses. The metric's final version uses the features ERQA, LPIPS, ERQAxLPIPS, ERQAxMDTVSFA, SI, TI, and colorfulness. ### _Experiments_ We tested the proposed metric on our benchmark, where it outperformed the other metrics in both Pearson and Spearman correlation, as Tab. III shows. We also evaluated the metric on the Live Video Quality Database [19, 20]. Tab. IV shows the Spearman correlation of our metric with DMOS values for each of the four test cases, as well as the metric's overall Spearman correlation. The results demonstrate that our metric exhibits the highest correlation on H.264 compression and the second-highest correlation overall. Correlation in the case of MPEG-2 compression is lower, probably because the training set lacked any videos encoded using that standard. metrics and subjective evaluation. We proposed a new quality metric that has better correlation with subjective scores than do other metrics on our benchmark's dataset, and its results more closely resemble ranking based on human judgment. Our research shows that SR models, such as RealSR [11] and SwinIR [18], can serve in downscale-based codecs on the decoder side to enhance the subjectively perceived quality of videos with low bitrates.
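As a rough sketch of the metric described in Sec. IV-B, the final feature set can be fed to a linear-kernel SVR with min-max normalization and threefold cross-validation roughly as follows; the feature files, the random seed, and the per-fold evaluation are illustrative assumptions, not the exact training script.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

# X holds one row per distorted video with the final feature set
# [ERQA, LPIPS, ERQA*LPIPS, ERQA*MDTVSFA, SI, TI, colorfulness];
# y holds the per-video subjective scores rescaled to [0, 1].
X = np.load("features.npy")            # placeholder file names
y = np.load("subjective_scores.npy")

model = make_pipeline(MinMaxScaler(), SVR(kernel="linear"))

fold_srcc = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rho, _ = spearmanr(pred, y[test_idx])
    fold_srcc.append(rho)

print("SRCC per fold:", fold_srcc)
```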
2308.04660
Efficient Bayesian Optimization with Deep Kernel Learning and Transformer Pre-trained on Multiple Heterogeneous Datasets
Bayesian optimization (BO) is widely adopted in black-box optimization problems and it relies on a surrogate model to approximate the black-box response function. With the increasing number of black-box optimization tasks solved and even more to solve, the ability to learn from multiple prior tasks to jointly pre-train a surrogate model is long-awaited to further boost optimization efficiency. In this paper, we propose a simple approach to pre-train a surrogate, which is a Gaussian process (GP) with a kernel defined on deep features learned from a Transformer-based encoder, using datasets from prior tasks with possibly heterogeneous input spaces. In addition, we provide a simple yet effective mix-up initialization strategy for input tokens corresponding to unseen input variables and therefore accelerate new tasks' convergence. Experiments on both synthetic and real benchmark problems demonstrate the effectiveness of our proposed pre-training and transfer BO strategy over existing methods.
Wenlong Lyu, Shoubo Hu, Jie Chuai, Zhitang Chen
2023-08-09T01:56:10Z
http://arxiv.org/abs/2308.04660v1
Efficient Bayesian Optimization with Deep Kernel Learning and Transformer Pre-trained on Multiple Heterogeneous Datasets ###### Abstract Bayesian optimization (BO) is widely adopted in black-box optimization problems and it relies on a surrogate model to approximate the black-box response function. With the increasing number of black-box optimization tasks solved and even more to solve, the ability to learn from multiple prior tasks to jointly pre-train a surrogate model is long-awaited to further boost optimization efficiency. In this paper, we propose a simple approach to pre-train a surrogate, which is a Gaussian process (GP) with a kernel defined on deep features learned from a Transformer-based encoder, using datasets from prior tasks with possibly heterogeneous input spaces. In addition, we provide a simple yet effective mix-up initialization strategy for input tokens corresponding to unseen input variables and therefore accelerate new tasks' convergence. Experiments on both synthetic and real benchmark problems demonstrate the effectiveness of our proposed pre-training and transfer BO strategy over existing methods. ## 1 Introduction In black-box optimization problems, one could only observe outputs of the function being optimized based on some given inputs, and can hardly access the explicit form of the function. These kinds of optimization problems are ubiquitous in practice (e.g., (Mahapatra et al., 2015; Korovina et al., 2020; Griffiths & Lobato, 2020)). Among black-box optimization problems, some are particularly challenging since their function evaluations are expensive, in the sense that the evaluation either takes a substantial amount of time or requires a considerable monetary cost. To this end, Bayesian Optimization (BO; Shahriari et al. (2016)) was proposed as a sample-efficient and derivative-free solution for finding an optimal input value of black-box functions. BO algorithms are typically equipped with two core components: a surrogate and an acquisition function. The surrogate is to model the objective function from historical interactions, and the acquisition function measures the utility of gathering new input points by trading off exploration and exploitation. Traditional BO algorithms adopt Gaussian process (GP; Rasmussen & Williams (2009)) as the surrogates, and different tasks are usually optimized respectively in a cold-start manner. In recent years, as model pre-training showed significant improvements in both convergence speed and prediction accuracy (Szegedy et al., 2016; Devlin et al., 2019), pre-training surrogate(s) in BO becomes a promising research direction to boost its optimization efficiency. Most existing work on surrogate pre-training (Bardenet et al., 2013; Swersky et al., 2013; Yogatama & Mann, 2014; Springenberg et al., 2016; Wistuba et al., 2017; Perrone et al., 2018; Feurer et al., 2018; Wistuba & Grabocka, 2021) assumes that the target task shares the same input search space with prior tasks generating historical datasets. If this assumption is violated, the pre-trained surrogate cannot be directly applied and one has to conduct a cold-start BO. Such an assumption largely restricts the scope of application of a pre-trained surrogate, and also prevents it from learning useful information by training on a large number of similar datasets. To overcome these limitations, a text-based method was proposed recently. 
It formulates the optimization task as a sequence modeling problem and pre-trains a single surrogate using various optimization trajectories (Chen et al., 2022). In this work, we focus on surrogate pre-training that transfers knowledge from prior tasks to new ones with possibly different input search spaces, for further improving the optimization efficiency of BO. We adopt a combination of Transformer (Vaswani et al., 2017) and deep kernel Gaussian process (Wilson et al., 2016) for the surrogate, which enables joint training on prior datasets with variable input dimensions. For a target task, only the feature tokenizer of the pre-trained model needs to be modularized and reconstructed according to its input space. Other modules of the pre-trained model remain unchanged when applied to new tasks, which allows the new task to make the most of prior knowledge. Our contributions can be summarized as follows: * To the best of our knowledge, this is the first transfer BO method that is able to jointly pre-train on tabular data from tasks with heterogeneous input spaces. * We provide a simple yet effective strategy of transferring the pre-trained model to new tasks with previously unseen input variables to improve optimization efficiency. * Our transfer BO method shows clear advantage on both synthetic and real problems from different domains, and also achieves the new state-of-the-art results on the HPO-B (Pineda-Arango et al., 2021) public datasets. ## 2 Background Gaussian processA Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution (Rasmussen and Williams, 2009). Formally, a GP is represented as \(f(\mathbf{x})\sim\mathcal{GP}(m(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{\prime}))\), where \(m(\mathbf{x})\) and \(k(\mathbf{x},\mathbf{x}^{\prime})\) denotes mean and covariance function, respectively. Given a dataset \(\mathcal{D}=\{(\mathbf{x}^{(i)},y^{(i)})\}_{i=1}^{n}\) with \(n\) examples, any collection of function values has a joint Gaussian distribution \(\mathbf{f}=[f(\mathbf{x}^{(1)}),\dots,f(\mathbf{x}^{(n)})]^{\top}\sim\mathcal{N}(\mathbf{\mu},\mathbf{K}_{\mathbf{x},\mathbf{x}})\), where the mean vector \(\mathbf{\mu}_{i}=m(\mathbf{x}^{(i)})\), \([\mathbf{K}_{\mathbf{x},\mathbf{x}}]_{ij}=k(\mathbf{x}^{(i)},\mathbf{x}^{(j)})\). A nice property of GP is that its distributions of various derived quantities can be obtained explicitly. Specifically, under the additive Gaussian noise assumption, the predictive distribution of the GP evaluated at a new test example \(\mathbf{x}^{(*)}\) can be derived as \[p(\mathbf{f}^{(*)}|\mathbf{x}^{(*)},\mathcal{D})\sim\mathcal{N}(\mathbb{E}[\mathbf{f}^{(* )}],\mathrm{cov}(\mathbf{f}^{(*)})), \tag{1}\] where \(\mathbb{E}[\mathbf{f}^{(*)}]=m(\mathbf{x}^{(*)})+\mathbf{K}_{\mathbf{x}^{(*)},\mathbf{x}}[\mathbf{ K}_{\mathbf{x},\mathbf{x}}+\sigma^{2}\mathbf{I}]^{-1}\mathbf{y}\), \(\mathrm{cov}(\mathbf{f}^{(*)})=k(\mathbf{x}^{(*)},\mathbf{x}^{(*)})-\mathbf{K}_{\mathbf{x}^{(*)},\mathbf{x}}[\mathbf{K}_{\mathbf{x},\mathbf{x}}+\sigma^{2}\mathbf{I}]^{-1}\mathbf{K}_{ \mathbf{x},\mathbf{x}^{(*)}}\), \(\mathbf{K}_{\mathbf{x}^{(*)},\mathbf{x}}\) denotes the vector of covariances between the test example \(\mathbf{x}^{(i)}\) and the \(n\) training examples, and \(\mathbf{y}\) is the vector consisting of all response values. Bayesian OptimizationBayesian optimization (Shahriari et al., 2016) uses a probabilistic surrogate model for data-efficient black-box optimization. 
It is suited for _expensive black-box optimization_, where objective evaluation can be time-consuming or of high cost. Given the previously gathered dataset \(\mathcal{D}\), BO uses surrogate models like GP to fit the dataset. For a new input \(\mathbf{x}^{(*)}\), the surrogate model gives predictive distribution in equation 1, then an acquisition function is constructed with both the prediction and uncertainty information to balance exploitation and exploration. The acquisition is optimized by third-party optimizer like evolutionary algorithm to generate BO recommendation. Throughout this paper, we use the lower confidence bound (LCB) Srinivas et al. (2009) as the acquisition function. \(\mathrm{LCB}(\mathbf{x})=m(\mathbf{x})-\kappa\times\sigma(\mathbf{x})\), where \(\sigma(\mathbf{x})\) denotes the standard deviation and \(\kappa\) (set to 3 in experiments) is a constant for tuning the exploitation and exploration trade-off. FT-TransformerFT-Transformer (Gorishniy et al., 2021) is a recently proposed attention-based model for tabular data modeling. It consists of a Feature-Tokenizer layer, multiple Transformer layers, and a prediction layer. The Feature-Tokenizer layer enables its ability of handling tabular data. For \(d\) numerical input features \(\mathbf{x}=[x_{1},\dots,x_{d}]^{\top}\), the Feature-Tokenizer layer initializes a value-dependent embedding table \(\mathbf{W}\in\mathbb{R}^{d\times d_{e}}\) and a column-dependent embedding table \(\mathbf{B}\in\mathbb{R}^{d\times d_{e}}\), where \(d_{e}\) is the dimension of embedding vector. During forward-pass of the Feature-Tokenizer layer, the \(i\)-th feature \(x_{i}\) would be transformed to \(x_{i}\times\mathbf{w}_{i}+\mathbf{b}_{i}\), where \(\mathbf{w}_{i}\) and \(\mathbf{b}_{i}\) are the \(i\)-th row in \(\mathbf{W}\) and \(\mathbf{B}\). In this way, an \(n\times d\) matrix is transformed into a \(n\times d\times d_{e}\) tensor. Then, a [CLS] token embedding is appended to the tensor and the tensor is passed to the stacked transformer layers to extract output embedding vectors. The output embedding vector corresponding to the [CLS] token is used as the output representation. The output representation is then passed into the prediction layer for final model prediction. The tokenization process for categorical data is implemented by a look-up table, in which each categorical variable corresponds to a \(\mathbf{b}_{i}\) and each unique value of a variable corresponds to a \(\mathbf{w}_{i}\). We use FT-Transformer as the backbone of our method. Throughout this paper, we only consider numerical features, however as FT-Transformer can also handle categorical features, our method can be easily extended to mixed search space with both numerical and categorical features. ## 3 Methodology ### Problem Setting Given a target function \(f_{T}(\mathbf{x}):\mathbb{R}^{d_{T}}\rightarrow\mathbb{R}\) where \(d_{T}\) is the dimension of \(f_{T}\), we would like to apply Bayesian optimization to find its minimizer: \[\mathbf{x}_{*}=\operatorname{argmin}_{\mathbf{x}}f_{T}(x)\] Assume we have \(N\) source dataset \(\{\mathcal{D}_{1}^{S},\dots,\mathcal{D}_{N}^{S}\}\) where \(\mathcal{D}_{i}^{S}=\{X_{i}^{S},\mathbf{y}_{i}^{S}\}\), \(X_{i}^{S}\in\mathbb{R}^{N_{i}\times d_{i}}\) and \(y_{i}^{S}\in\mathbb{R}^{N_{i}}\), we want to pre-train a surrogate model on all these \(N\) historical datasets to accelerate the BO convergence on target task. 
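For concreteness, the GP predictive distribution of equation 1 and the LCB acquisition introduced above can be sketched as below; a zero prior mean and an RBF kernel are assumptions made only for this illustration (the surrogate used in this paper replaces the RBF kernel with a deep kernel).

```python
import numpy as np


def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; the paper's surrogate uses a deep kernel instead.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)


def gp_posterior(X, y, X_star, noise=1e-3):
    """Zero-mean GP posterior mean and standard deviation (equation 1)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X_star, X)
    K_ss = rbf_kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - (v ** 2).sum(0)
    return mean, np.sqrt(np.maximum(var, 1e-12))


def lcb(X, y, X_star, kappa=3.0):
    # Lower confidence bound acquisition with kappa = 3, as used in the experiments.
    mean, std = gp_posterior(X, y, X_star)
    return mean - kappa * std
```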
If \(d_{1}=d_{2}=\dots d_{N}=d_{T}\), and the feature names are aligned, pre-training methods like (Wistuba & Grabocka, 2021; Wistuba et al., 2016; Feurer et al., 2018) can be applied. However, if either the dimension or parameter names are unaligned, pre-training and fine-tuning become non-trivial. Taking hyper-parameter optimization (HPO) for AutoML as an example. Suppose we have the following two historical HPO records: * HPO for Random forest on dataset A, where max_features, max_depth are tuned, * HPO for LightGBM on dataset B, where learning_rate and reg_alpha are tuned, now we want to perform HPO for XGBoost model on a new dataset C, the hyper-parameters to be tuned are learning_rate, max_depth and col_sample_by_level, and we want to use the two source datasets to pre-train the surrogate model for BO. ### Multi-Source Deep Kernel Learning with FT-Transformer The basic idea of our method is to use FT-Transformer (Gorishniy et al., 2021) as the feature extractor of deep kernel Gaussian processes (FT-DKL) and pre-train the FT-DKL on similar source tasks. Given that transformer can handle variable-length input, we can use the FT-DKL to jointly pre-train on multiple heterogeneous datasets with unaligned parameter spaces, making the FT-Transformer a _multi-source_ FT-Transformer. The first step of our algorithm is data normalization. When there are multiple unaligned source datasets, we independently normalize the objective of each source dataset. As for the normalization of input features, if two source datasets share common features, the shared common features are merged and jointly normalized. After the source datasets are normalized, we initialize a multi-source FT-Transformer, where the embedding table in the Feature-tokenizer layer corresponds to the union of features of all source datasets. Following the common procedures of training DKL models, we firstly pre-train the multi-source FT-Transformer with linear output layer and MSE loss, and then replace the linear output layer with a sparse variational Gaussian process (SVGP) layer and pre-train the model with ELBO loss, in this stage, the weight of FT-Transformers, the GP hyper-parameters and variational parameters are jointly updated. Details about SVGP and sparse varational deep kernel learning can be seen in (Hensman et al., 2015; Wilson et al., 2016). After pre-training, we can transfer the model to downstream target optimization task, where the transformer layers, sparse Gaussian process layer and the [CLS] embedding of Feature-Tokenizer layer are copied to initialize the target FT-DKL model. For the target task parameters already seen in source tasks, the corresponding source embedding vectors are also copied to the target FT-DKL model; for the unseen target task parameters, we use mix-up initialization: we randomly select two embedding vectors \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) from source embeddings and a random number \(\alpha\sim\mathcal{U}(0,1)\), the new embedding vector is initialized as \[\mathbf{e}=\alpha\times\mathbf{e}_{1}+(1-\alpha)\times\mathbf{e}_{2}.\] After the target FT-DKL is initialized, we directly fine-tune the target FT-DKL model with target data using ELBO loss at the start of each BO iteration. The fine-tuned FT-DKL model can be used as a regular Gaussian process to construct acquisition functions and give recommendations. We summarize our algorithm in Algorithm 1. In Section A.2, we provide an illustrative example to demonstrate the proposed model architecture. 
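A minimal sketch of this transfer step, with PyTorch assumed, is given below; mixing both the value-dependent and column-dependent embedding rows for unseen parameters is an illustrative choice, and all tensor and function names are hypothetical.

```python
import torch


def init_target_embeddings(source_names, source_W, source_B, target_names):
    """Build target Feature-Tokenizer embeddings from pre-trained source ones.

    source_W, source_B : (n_source_features, d_e) tensors learned during pre-training.
    Parameters seen in the source tasks copy their rows; unseen parameters
    receive a mix-up of two randomly chosen source embeddings.
    """
    d_e = source_W.shape[1]
    name_to_row = {name: i for i, name in enumerate(source_names)}
    W = torch.empty(len(target_names), d_e)
    B = torch.empty(len(target_names), d_e)
    for j, name in enumerate(target_names):
        if name in name_to_row:                       # parameter seen before: copy
            i = name_to_row[name]
            W[j], B[j] = source_W[i], source_B[i]
        else:                                         # unseen parameter: mix-up init
            i1, i2 = torch.randint(len(source_names), (2,))
            alpha = torch.rand(())
            W[j] = alpha * source_W[i1] + (1 - alpha) * source_W[i2]
            B[j] = alpha * source_B[i1] + (1 - alpha) * source_B[i2]
    return W, B
```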
``` source datasets \(\{D_{S}^{1},\dots,D_{S}^{N}\}\) Jointly normalize the source datasets Construct multi-source FT-Transformer with the union of all source features for the Feature-Tokenizer layer Joint training of multi-source datasets with MSE loss Joint training of multi-source with GP layer and ELBO loss while target task not finished do Transfer the source FT-DKL to target FT-DKL with embedding mix-up Fine-tune the target FT-KL with ELBO loss Use the predictive distribution of FT-DKL to construct acquisition function Optimize the acquisition function for BO recommendation endwhile ``` **Algorithm 1** Pre-training and fine-tuning of multi-source FT-DKL ## 4 Related Work Warm-starting a new BO task by knowledge transfer from prior tasks can significantly boost optimization efficiency. One of the most common transfer approaches is to pre-train a surrogate model on the entire prior data. Earlier methods considered learning a joint GP-based surrogate model on combined prior data (Bardenet et al., 2013; Swersky et al., 2013; Yogatama and Mann, 2014), which may easily suffer from high computational complexity when applied to large data sets. As a result, some later works tried to alleviate this issue by using multi-task learning (Perrone et al., 2018) or ensembles of GP, where a GP was learned for each task (Wistuba et al., 2017; Feurer et al., 2018). Neural network models have also been adopted to either learn a task embedding for each task inside a Bayesian neural network (Springenberg et al., 2016), or pre-train a deep kernel model and fine-tune on target tasks (Wistuba and Grabocka, 2021). Besides the surrogate model, learning new acquisition functions is another valid approach for knowledge transfer (Volpp et al., 2020). However, all of these existing works considered a fixed search space. In other words, a new task with a different search space would fail in directly borrowing strength from prior data sets. Most recently, a text-based pre-trained model called OptFormer (Chen et al., 2022) was introduced to address this issue by adopting a Transformer model as the surrogate. Although both OptFormer and our work are able to pre-train using heterogeneous datasets, our work directly conducts learning on tabular data whereas OptFormer requires an additional tokenization step for the optimization trajectories. Transformer (Vaswani et al., 2017) was initially proposed for machine translation. It was later adopted, beyond natural language processing, in computer vision (Dosovitskiy et al., 2021; Parmar et al., 2018), reinforcement learning (Chen et al., 2021; Zheng et al., 2022), etc. Because of its success in various fields, Transformer was also restructured for tabular data (Huang et al., 2020; Gorishniy et al., 2021) as another strong (neural network) baseline besides multi-layer perceptron. ## 5 Experiments In this section, we demonstrate the high sample efficiency of our pre-trained FT-DKL with three experiments, including a high-dimensional synthetic function, HPO on the HPO-B (Pineda-Arango et al., 2021) benchmark, and a real-world wireless network optimization (WNO) problem. For the synthetic function and wireless network optimization problem, we transferred knowledge from a single low-dimensional source problem to a high-dimensional target task, while for the HPO-B benchmarks, 758 source tasks with dimensions ranging from 2 to 18 were merged and pre-trained, the pre-trained model was fine-tuned for 86 different target HPO problems. 
The statistics of these experiments can be seen in Table 1. All experiments were conducted on a server with eight-core Intel(R) Xeon(R) Gold 6134 CPU and one Quadro RTX 5000 GPU. ### Scaled and Shifted High Dimensional Ackley Function Optimization In this section, we use the Ackley function with input scaling and offset as a demonstration. Ackley function is a popular benchmark function for black-box optimization algorithms, To make the function harder to optimize and transfer, we introduced random offset and scaling to each dimension. To do that, we firstly scaled the original search space to \([-1,1]^{D}\) where \(D\) was the input dimension; then we modified the original Ackley function as shown in Eq 2. In our experiment, we set \(D=30\) for the target optimization task. \[\begin{array}{lcl}f_{D}(\mathbf{x})&=&\text{Ackley}(s_{1}\times(x_{1}-o_{1}) \ldots s_{D}\times(x_{D}-o_{D}))\\ s_{i}&\sim&\mathcal{U}(0.01,2),i=1,2,\ldots,D\\ o_{i}&\sim&\mathcal{U}(-0.8,0.8),i=1,2,\ldots,D\\ \end{array} \tag{2}\] Firstly, we compared GP-UCB on the original Ackley function and the modified Ackley function. As shown in Figure 0(a), we see that although GP-UCB was able to reach a near-optimal solution for the original Ackley function, it failed to optimize the Ackley function with input scaling and offset. We then compared different surrogate models on the modified Ackley function. For our pre-trained FT-DKL, we used the same modified Ackley function but with \(D=20\) as the source task, leaving 10 target task parameters unseen in the source task. We ran evolutionary algorithm on the 20-dimensional source task, and sampled 800 data points from the optimization trajectory to pre-train the FT-DKL model. The following models were compared. For all surrogate models, lower confidence bound (LCB) was used as the acquisition function. We randomly sampled 5 points to initialize BO, except for GP-50, where 50 initial random points were used. For each model, BO was repeated five times to average out random fluctuations. * GP, where Gaussian process with Matern3/2 kernel was used for BO, * FT-DKL, where the FT-DKL model was used as the surrogate model, _without_ any pre-training and model transfer, * GP-50, Gaussian process as surrogate model, but with 50 points as random initialization, * Pretrained-FT-DKL, FT-DKL pre-trained on 20-D data used as surrogate model. As shown in Figure 0(b), we can see that with only five points as random initialization, both GP and FT-DKL failed to optimize the AckleyOffsetScale function. With 50 points as random initialization, \begin{table} \begin{tabular}{l l l l l l} \hline \hline Task & Source tasks & Source dimension & Source evaluations & Target tasks & Target dimension \\ \hline Synthetic & 1 & 20 & 800 & 1 & 30 \\ HPO-B & 758 & 2-18 & 3,279,050 & 86 & 2-18 \\ WNO & 1 & 48 & 1800 & 1 & 69 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of experimental datasets GP was able to find some low-regret solutions, but the result was far from optimal. On the other hand, when we were able to pre-train the FT-DKL on the 20-D source function and fine-tune the model on the 30-D target function, the pre-trained FT-DKL outperformed all other models significantly. In this paper, we only consider how model performance is improved by multi-source pre-training. 
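For reference, the scaled and shifted objective of equation 2 is straightforward to reproduce; the sketch below assumes the standard Ackley constants (a = 20, b = 0.2, c = 2π) and a fixed random seed, which are not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_ackley_offset_scale(dim):
    """Eq. 2: Ackley with per-dimension random scaling and offset on [-1, 1]^D."""
    s = rng.uniform(0.01, 2.0, size=dim)
    o = rng.uniform(-0.8, 0.8, size=dim)

    def ackley(z, a=20.0, b=0.2, c=2.0 * np.pi):
        d = len(z)
        return (-a * np.exp(-b * np.sqrt(np.sum(z ** 2) / d))
                - np.exp(np.sum(np.cos(c * z)) / d) + a + np.e)

    def f(x):
        return ackley(s * (np.asarray(x) - o))

    return f


f_target = make_ackley_offset_scale(30)   # 30-D target task
print(f_target(np.zeros(30)))             # regret of the all-zeros point
```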
However, from a practical point of view, a more effective approach for transferring from a 20-D function to a 30-D function would be to directly copy the optimal values of the 20 dimensions, and only optimize the rest 10 dimensions. In Appendix A.4, we show that our proposed approach still outperformed Gaussian processes and FT-DKL when fixing the first 20 dimensions. ### Hyper-parameter Optimization Using HPO-B Benchmark In this section, we demonstrate our method on HPO problems with the HPO-B benchmark (Pineda-Arango et al., 2021). HPO-B is the largest public benchmark for HPO, containing more than 1900 HPO-B tasks, with 176 different search spaces. We used the HPO-B-v3 version in our experiment, where tasks related to the most frequent 16 search spaces were extracted and splitted into meta-train-dataset, meta-validation-dataset, and meta-test-dataset. After the train-validation-test splitting, there were 758 HPO tasks with 3,279,050 hyper-parameter configuration evaluations with 16 search spaces in the meta-train-dataset, and 86 tasks in the meta-test-dataset. We jointly pre-trained the FT-DKL model on all the 758 meta-train tasks and then transferred the model to the target task during the optimization of meta-test tasks. We didn't use the meta validation dataset in this experiment. We pre-trained the model for 300 epochs with MSE loss and 50 epochs with ELBO loss. Unlike FSBO (Wistuba & Grabocka, 2021) where a distinct model was pre-trained for each search space, we used one common model with shared transformer layers for all the 758 source tasks. During pre-training, the meta-features like space-ID and dataset-ID were treated as categorical features to augment the dataset. We followed the evaluation protocol of HPO-B, where five different sets of initialization points were provided by the benchmark for repeated experiments. We compared our algorithm against the results provided by the HPO-B benchmark, the following transfer and non-transfer BO methods were compared: * Random: Random search * GP: Gaussian processes * DNGO (Snoek et al., 2015): Bayesian linear regression with feature extracted by neural networks, * DGP (Wilson et al., 2016): Gaussian process with deep kernel, * BOHAMIANN (Springenberg et al., 2016): BO with BNN surrogate, trained with adaptive SGDHMC Figure 1: (a) GP-UCB on Ackley and the Ackley with scale and offset, it can bee seen that the Ackley with input scaling and offset is much harder to optimize; (b) Comparison of different algorithms on the Ackley with scale and offset. * TST (Wistuba et al., 2016), RGPE (Feurer et al., 2018b), TAF (Wistuba et al., 2017): Transfer learning by weighted combination of Gaussian processes trained on source tasks with same design spaces, * FSBO (Wistuba and Grabocka, 2021): Few-shot Bayesian optimization where deep kernel GP was pre-trained on source tasks that share the same design space with target task. The result of normalized regret and average rank is shown in Figure 2 and Table 2. As can be seen, our pre-trained model outperformed all other reported results by a large margin, both with regard to average rank and average regret, throughout all the iterations, the immediate regret remained 34%-65% of the second best algorithm FSBO. Among all algorithms, our method was the only one that achieved a regret of less than 0.1 after only one iteration and less than 0.01 after 100 iterations. 
The per-space comparison of normalized regret and average rank is put in Appendix A.5, where pre-trained FT-DKL showed the best performance on most of the design spaces. Does big model generalize better?We have witnessed a lot of news reporting large language models showing better zero/few-shot performance, then one interesting question to ask is how transformer model size affects the sample efficiency of our pre-trained model. To answer that question, we increased the model size to the level of BERT (Devlin et al., 2019) with 768-dimensional embedding, 12-heads attention, and 12 transformer layers, the model now has more than **30 million** trainable parameters. Given only one GPU to use, the large model is very slow to train. It took us about 50 minutes to train for one epoch, so we didn't follow the standard BO procedure as in previous sections. Firstly, GP was not used, we only pre-trained the model with MSE loss for 250 epochs, the model with linear output layer was directly used as the surrogate model; secondly, there's **no fine-tuning** during target task optimization, instead, we only used the pre-trained multi-source FT-Transformer for _zero-shot prediction_ on the meta-test dataset. We sorted the zero-shot prediction result and the hyper-parameter configurations with the top 100 predictions was recommended one by one for the 100 \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Algorithms & Regret@1 & Regret@5 & Regret@15 & Regret@30 & Regret@50 & Regret@100 \\ \hline Random & 0.189 & 0.142 & 0.102 & 0.082 & 0.072 & 0.0540 \\ GP & 0.181 & 0.106 & 0.064 & 0.044 & 0.035 & 0.0258 \\ TST & 0.164 & 0.104 & 0.064 & 0.047 & 0.039 & 0.0278 \\ DNGO & 0.165 & 0.099 & 0.057 & 0.041 & 0.028 & 0.0224 \\ TAF & 0.164 & 0.101 & 0.060 & 0.043 & 0.036 & 0.0254 \\ DGP & 0.183 & 0.099 & 0.061 & 0.041 & 0.028 & 0.0173 \\ BOHAMIANN & 0.173 & 0.118 & 0.061 & 0.041 & 0.028 & 0.0158 \\ RGPE & 0.140 & 0.104 & 0.064 & 0.053 & 0.046 & 0.0278 \\ FSBO & 0.152 & 0.087 & 0.049 & 0.032 & 0.021 & 0.0105 \\ Pretrained-FT-DKL & **0.066** & **0.030** & **0.017** & **0.011** & **0.009** & **0.0065** \\ \hline \hline \end{tabular} \end{table} Table 2: Normalized regret comparison at different iterations Figure 2: Comparisons of normalized regret and average ranks across all search spaces. target task iterations. With this _batched zero-shot optimization_, the target task can be done much faster than BO with smaller FT-DKL. We compared the batched zero-shot optimization of our previously reported pre-trained FT-DKL and FSBO, and we also performed batched zero-shot optimization for the smaller FT-Transformer used for FT-DKL. The result of average rank and regret can be seen in Figure 3. As can be seen in Figure 3, with equal-sized FT-Transformer, running BO with pre-trained FD-DKL showed better performance than only running batched zero-shot optimization; however, with a much larger multi-source FT-Transformer, the performance of zero-shot optimization would be comparable to the result of BO. ### Wireless Network Optimization In this section, we evaluate our pre-training strategy in real-world applications. The task is to optimize the parameters of a cellular network. A cellular network consists of many wireless cells and provides the infrastructure for modern mobile communications. Each wireless cell has many parameters (e.g., the antenna angle, the transmission power) that need to be optimized to adapt to its surrounding wireless environment. 
By optimizing the parameters, the performance (e.g., data throughput) of the network can be improved to provide better user experience. Due to the heterogeneous wireless environment of a cellular network, different cells have different optimal parameter configurations. Moreover, tuning parameters of one cell affects the performance of its neighbors due to the inter-cell interference. Therefore, parameters of different cells within a network should be jointly optimized. **Source and Target Task.** In our experiment, the network consisted of 23 cells, where 69 parameters need to be optimized jointly to increase the overall network throughput. In the source task, part of the network (i.e., 16 cells with 48 parameters) had been optimized historically, and we used the data from the source task to pre-train the surrogate model, and transferred to the 69-parameter optimization task to boost the optimization efficiency. The experiments were conducted on an industrial-level simulator that imitated the behavior of a real cellular network. **Search Space.** The search range of each parameter was an ordered set of integers with step size of 1. Among the 69 parameters, 46 parameters had a search range of size 41, and 23 parameters had a search range of size 26. **Surrogate Models.** We compared the optimization performance of the following strategy/surrogate models on the 69-parameter optimization task. * **Random:** A uniformly random sampling was performed within the 69-parameter search space. * **Random Forest:** A random forest regressor with 100 estimators was used as the surrogate model _without_ pre-training, and was initialized by 70 random samples. * **Gaussian Process:** A Gaussian process model with combined linear and Matern kernel was used _without_ pre-training, and initialized by 70 random samples. Figure 3: Comparisons of normalized regret and average ranks across all search spaces. * **Pretrained-FT-DKL:** The proposed model was pre-trained with 1800 samples collected from previous 48-parameter BO experiments. Among the 1800 samples, 1000 samples were obtained by uniformly random sampling from the 48-parameter search space, and 800 samples were the BO traces with RF and GP as surrogate models and 400 samples were collected for each model. The optimization on the target task was initialized by 5 random samples. **Setup.** The exploration budget of each run was 400 iterations, and 5 runs were performed for each surrogate model. The mean and variance over different runs were recorded. The performance of the surrogate models over different runs are presented in Figure 4, and the numerical values are reported in Table 5. The regret is the negative of the 23-cell-network throughput and the value should be minimized as small as possible. It is clear from the plot that our pre-training strategy boosted the optimization efficiency dramatically compared to the other baselines. After only a few initial random samples, the Pretrained-FT-DKL model quickly located its search to a region with low regret. The achieved performance of Pretrained-FT-DKL by 80 iterations already surpassed RF and GP's final performance with 400 iterations, which indicated that our strategy boosted the optimization efficiency by around five times. ## 6 Limitation and Discussion Overall, we believe that our proposed method represents a new paradigm of transfer learning in BO, however, there are still several limitations of our work to be overcome. The main issue is about the training overhead of Transformers. 
Even with GPUs, Transformer training is still time-consuming compared to GP and traditional deep kernel GP. Although the excellent zero/few-shot performance alleviates the requirement for long BO runs, we still need hardware-friendly Transformers and Transformer-friendly hardware for wider application of our method. Secondly, as shown in (Ober et al., 2021), it is even easier to over-fit a deep kernel model than to over-fit a traditional neural network. The over-fitting can be addressed by introducing Lipschitz constraints in neural networks (Liu et al., 2020); however, it is still unclear how Lipschitz constraints can be efficiently introduced into Transformers. Finally, we believe that the quality of the source data is very important. Currently, we use a shared embedding vector for common features, and we determine whether two source datasets have common features by matching their parameter names, so our system can be misguided if a malicious user uploads a dataset with the same parameter names but completely irrelevant or adversarial parameter values. ## 7 Conclusion In this paper, we introduced a simple deep kernel model with a multi-source FT-Transformer. The model can be jointly pre-trained on multiple heterogeneous source datasets and can be efficiently fine-tuned for downstream target tasks with unaligned parameter spaces. We tested the proposed FT-DKL model on three synthetic and real-world benchmarks and found the model to be highly sample efficient. We also found that a bigger surrogate model that matched the size of BERT showed even better zero-shot optimization performance. We believe that our research paves the way toward a more unified surrogate model for Bayesian optimization.
Figure 4: Regret vs. Number of iterations
Figure 5: Regret by iteration 80 and 400
2305.05582
Thermoelectric phenomena in an antiferromagnetic helix: Role of electric field
The charge and spin-dependent thermoelectric responses are investigated on a single-helical molecule possessing a collinear antiferromagnetic spin arrangement with zero net magnetization in the presence of a transverse electric field. Both the short and long-range hopping scenarios are considered, which mimic biological systems like single-stranded DNA and $\alpha$-protein molecules. A non-equilibrium Green's function formalism is employed following the Landauer-Buttiker prescription to study the thermoelectric phenomena. The detailed dependence of the basic thermoelectric quantities on helicity, electric field, temperature etc., are elaborated on, and the underlying physics is explained accordingly. The charge and spin \textit{figure of merits} are computed and compared critically. For a more accurate estimation, the phononic contribution towards thermal conductance is also included. The present proposition shows a favorable spin-dependent thermoelectric response compared to the charge counterpart.
Kallol Mondal, Sudin Ganguly, Santanu K. Maiti
2023-05-09T16:15:45Z
http://arxiv.org/abs/2305.05582v1
# Thermoelectric phenomena in an antiferromagnetic helix: Role of electric field ###### Abstract The charge and spin-dependent thermoelectric responses are investigated on a single-helical molecule possessing a collinear antiferromagnetic spin arrangement with zero net magnetization in the presence of a transverse electric field. Both the short and long-range hopping scenarios are considered, which mimic biological systems like single-stranded DNA and \(\alpha\)-protein molecules. A non-equilibrium Green's function formalism is employed following the Landauer-Buttiker prescription to study the thermoelectric phenomena. The detailed dependence of the basic thermoelectric quantities on helicity, electric field, temperature etc., are elaborated on, and the underlying physics is explained accordingly. The charge and spin _figure of merits_ are computed and compared critically. For a more accurate estimation, the phononic contribution towards thermal conductance is also included. The present proposition shows a favorable spin-dependent thermoelectric response compared to the charge counterpart. ## I Introduction Achieving a favorable thermoelectric (TE) response is a long-sought goal in the material science community to overcome the dilemma of the global energy crisis. This is due to the fact that heat-to-energy conversion potentially can be an effective mechanism for scavenging waste heat [1; 2] by developing efficient devices. Even after persistent efforts and investments, designing efficient thermoelectrics is reaching a plateau. The obtained efficiency is not up to the mark and hence, is far from commercialization. The efficiency of TE material is characterized by a dimensionless parameter, namely _figure of merit_ (FOM), denoted by \(ZT\), which explicitly depends on the Seebeck coefficient, electrical conductance, temperature, and total thermal conductance [3]. For bulk systems, electrical and thermal conductances are correlated by the Wiedemann-Franz (W-F) law [4], which essentially restricts to have an efficient energy conversion. However, it is possible to achieve better TE performance in the nanoscale regime than the bulk ones, overshadowing the W-F law [5; 6; 3]. Extensive efforts have been made to study the thermoelectric phenomena exploring the charge degrees of freedom in the nanoscale regime with systems like quantum dots [7; 8; 9; 10; 11], nanowires [12; 13; 14; 15], topological insulators [16; 17], and also organic molecular junctions [18; 19; 20; 21] including DNAs, proteins [22; 23; 24; 25; 26], etc. On the other hand, compared to charge-based devices, spintronic devices are usually faster, more efficient, and have smaller dimensions where the electron's spin allows us to perform more work providing much less effort [27; 28; 29; 30; 31]. A recent development in the field of thermoelectric has been the entry of spin degrees of freedom, and magnetic order provides a 'green' strategy to enhance the thermoelectric figure of merit [32]. This is due to the fact that the TE efficiency is directly proportional to the square of the Seebeck coefficient [33], and for a spin TE, it is defined as the difference between the contributions from the up and down spins. Interestingly, for the spin TE case, it is possible to achieve different signs of the spin-Seebeck coefficient, which can add up to produce a favorable TE response. 
Precisely, the spin-Seebeck effect(SSE) [34; 35] is the charge analog of the Seebeck effect, where one can generate a net spin current from the temperature gradient and can potentially reduce the thermal dissipation induced by the total charge current [36; 37; 38]. One of the remarkable features of the spin-Seebeck device is that it possesses a scalability different from that of usual charge-based Seebeck devices, where the output power is proportional to the length perpendicular to the temperature gradient. Not only that, the heat current and charge current follow separate paths in the spin-based Seebeck device compared to the charge-based Seebeck device, which prompts us to think about that the spin Seebeck device as a possible route to enhance the thermoelectric FOM [39]. These salient features have invigorated spintronic research to develop spin-based TE devices [40; 41; 42]. The primary requirement of a spintronic device is to look for an efficient mechanism that sets apart the charge carriers based on their spin quantum number, which essentially means achieving polarized spin current from a completely unpolarized electron beam. Among several propositions [43; 44; 45], the most studied one is the use of ferromagnetic material as a functional element [46]. However, there are several limitations to overcome in that case, like a large resistivity mismatch is induced across the junction formed by ferromagnetic and non-magnetic materials, which act against the flow of the injected electrons [46; 47]. Another major issue is the tuning of spin-selective junction currents under the application of external magnetic fields. Experimentally, it is hard to achieve such strong confinement of magnetic fields within the quantum regime. Due to the above-mentioned limitations, in the recent past, the focus is shifted towards spin-orbit (SO) coupled systems instead of ferromagnetic materials. The investigation along the line is dominated by Rashba SO coupled system over the Dresselhaus one, as the strength of the former one can be tuned externally by suitable setups [48; 49]. Extensive efforts have been made in this regard to explore a range of different geometries using inorganic and organic molecules [50; 51; 52]. But it turns out that, especially in molecular systems, the strength of the SO coupling is significantly weak compared to the hopping strength, differing by order of magnitude [53]. In addition to that, the tuning of SO coupling strength is also restricted by external means. As a result, it is difficult to obtain a high degree of spin separation and its possible tuning in a wide range in those spin-orbit coupled systems. Due to the aforementioned issues with ferromagnetic systems, there is a growing inclination towards antiferromagnetic materials for future spintronic applications [54; 55; 56]. Antiferromagnets are magnetically ordered, with the nearest-neighbor spins aligning in the opposite direction resulting in net zero magnetic moments. Thus, these types of magnetic structures are robust against external perturbations like magnetic fields, produce no stray fields, display ultrafast dynamics, and are capable of generating large magnetotransport effects [57]. Intensive efforts have been made to unravel the spin transport properties in antiferromagnetic materials, and antiferromagnetic spintronics remains an active area of cutting-edge research [58; 59; 60; 61]. 
Recent experiments have made significant progress along the line, including biological systems, finding that double-stranded DNA (dsDNA) molecules are highly efficient spin filters [62]. The results are remarkable in the sense that the DNA molecules are nonmagnetic, and the present spin-orbit couplings (SOCs) are too small to host the chiral-induced spin selectivity (CISS) effect. Interestingly, this CISS effect led us to think about exploring chiral molecules in spintronic applications and may shed light on the spin effects in biological systems [63; 64; 65; 66; 67; 68]. Due to the above reasons, the biological systems like, double-stranded DNA, single-stranded DNA, \(\alpha\)-protein with helical geometries are of particular interest. In the present communication, we propose a new prescription for efficient thermoelectric response, considering an antiferromagnetic helix as a functional element in the presence of the transverse electric field. To the best of our knowledge, no effort has been made to understand thermoelectric physics in such system, and this is precisely the driving force behind the present work. We extensively study the charge and spin-dependent thermoelectric responses on a single-stranded antiferromagnetic helix system connected by two one-dimensional (1D) nonmagnetic, reflectionless, semi-infinite leads in the presence of a transverse electric field(see Fig. 1). We simulate the whole system using the tight-binding framework. We employ non-equilibrium Green's function (NEGF) formalism following the Landauer-Buttiker prescription to study the thermoelectric phenomena [69; 70; 71; 72]. It is a well-known fact that no spin-separation is possible for antiferromagnetic systems with zero net magnetzation, _but it is possible to generate spin-filtration under the application of a transverse electric field_. The physics of spin-filtration gyrates around the interplay between the helicity of the antiferromagnetic helix (AFH) and the applied electric field. For a realistic estimation of the TE response, we also include the phonon contribution for the present case. To make this study complete, we also explore the thermoelectric responses across different temperatures. Our prescription shows a favorable spin-dependent thermoelectric response as compared to the charge counterpart at room temperature. The rest part of the present communication is organized as follows. In Sec. II, we discuss the system along with the relevant interaction considered in the model and present the theoretical framework. All the results, considering the short-range and long-range interactions in the presence of an electric field, are critically investigated in Sec. III. Finally, in Sec. IV, we conclude our essential findings. Figure 1: (Color online). Schematic diagram of an antiferromagnetic right-handed helix. Each red ball corresponds to a magnetic site and the arrow on the ball represents the direction of magnetic moment. Perpendicular to the helix axis, an external electric field is applied, which plays a central role in our investigation. Theoretical Formulation ### Description of the system Let us first introduce the system to study the thermoelectric phenomena. Figure 1 depicts the schematic diagram of our proposed setup where a single-stranded antiferromagnetic helix possessing \(N\) magnetic sites is attached to two 1D non-magnetic, reflectionless, semi-infinite leads, namely source (\(S\)) and drain (\(D\)) (not shown in the figure) at site 1 and site \(N\) respectively. 
These two leads are operating at two different temperatures, \(T+\Delta T\) and \(T-\Delta T\), where \(T\) is the equilibrium temperature and \(\Delta T\) is infinitesimally small. Thus, we restrict ourselves within the linear response regime throughout the analysis. We use helical system as a functional element to study the TE response. In general, a helical system is described by two important parameters like stacking distance and twisting angle, denoted by \(\Delta z\) and \(\Delta\phi\) respectively [63; 64]. These two parameters play a crucial role in determining whether the hopping is short-range or long-range and also determine the structure of the magnetic helix. When \(\Delta z\) is very small, the atomic sites are closely spaced, and the electrons can hop to higher-order neighbor sites, yielding a long-range hopping (LRH) helix. On the other hand, when \(\Delta z\) is quite large, the hopping of the electrons is restricted mostly to a few neighboring sites, and we have short-range hopping (SRH) helix. Here, we present the parameter values for a typical case of SRH and LRH in a tabular form in Table. 1. (For the details of the helical geometry and relevant parameters, one may look at some previous pioneering efforts [67; 68].) These values mimic biological systems like single-stranded DNA and \(\alpha\)-protein molecules, and they are the most suitable examples where respectively, the short-range hopping and long-range hopping can be explored. In our chosen antiferromagnetic helix system, the successive magnetic moments are aligned along \(\pm z\) directions, and thus the resultant magnetization becomes zero. Each magnetic site \(i\) is associated with a net spin \(\left\langle\mathbf{S}_{i}\right\rangle\). The general orientation of any such spin vector can be described by the usual polar angle \(\theta_{i}\) and the azimuthal angle \(\phi_{i}\). Now, the incoming electron will interact with these local magnetic moments through the usual spin-moment exchange interaction \(J\). To include this interaction, we introduce a spin-dependent scattering (SDS) parameter at each site \(i\) as \(\mathbf{h}_{i}=J\langle\mathbf{S}_{i}\rangle\)[53]. The strength of the SDS parameter \(\left|\mathbf{h}\right|\) is assumed to be isotropic, i.e., \(\mathbf{h}_{i}=h\;\;\forall\;i\). For the present investigation, the interaction between neighboring magnetic moments is ignored, and it is a subject of future study. The central region i.e., the AFH, is exposed to an electric field, having strength \(E_{g}\), perpendicular to the helix axis (\(\hat{z}\)) as shown in Fig. 1. The incorporation of electric field in our theoretical formalism is described in the forthcoming sub-section. ### Model Hamiltonian The tight-binding Hamiltonian representing the total system comprises four parts, which are given by [73; 74; 75; 76] \[\mathcal{H}=\mathcal{H}_{\mathrm{AFH}}+\mathcal{H}_{\mathrm{S}}+\mathcal{H}_{ \mathrm{D}}+\mathcal{H}_{\mathrm{C}}, \tag{1}\] where, \(\mathcal{H}_{\mathrm{AFH}},\mathcal{H}_{\mathrm{S}},\mathcal{H}_{\mathrm{D}}\), and \(\mathcal{H}_{\mathrm{C}}\) represent the sub-parts of the Hamiltonian, associated with the AFH, source, drain, and the coupling between the leads and the AFH, respectively. 
The Hamiltonian for the AFH is given by [68; 77] \[\mathcal{H}_{\mathrm{AFH}} = \sum_{n}\mathbf{c}_{n}^{\dagger}\left(\mathbf{\epsilon}_{n}-\mathbf{ h}_{n}\cdot\mathbf{\sigma}\right)\mathbf{c}_{n} \tag{2}\] \[+ \sum_{n}^{N-1}\sum_{m}^{N-n}\left(\mathbf{c}_{n}^{\dagger}\mathbf{t} _{n}\mathbf{c}_{n+m}+h.c.\right),\] where \(\mathbf{c}_{n}\) denotes the two-component fermionic operator at site \(n\), given by \(\mathbf{c}_{n}=\begin{pmatrix}c_{n\uparrow}\\ c_{n\downarrow}\end{pmatrix}\) and its hermitian counterpart \(\mathbf{c}_{n}^{\dagger}\) is defined accordingly. \(\mathbf{\sigma}\) is the well-known Pauli matrices, \(\mathbf{t}_{n}\) and \(\mathbf{\epsilon}_{n}\) are the \(2\times 2\) diagonal matrices given by \[\mathbf{t}_{n}=\begin{pmatrix}t_{n}&0\\ 0&t_{n}\end{pmatrix}\quad\text{ and }\quad\mathbf{\epsilon}_{n}=\begin{pmatrix} \epsilon_{n}&0\\ 0&\epsilon_{n}\end{pmatrix}, \tag{3}\] where \(\epsilon_{n}\) is the on-site energy in the absence of any spin-dependent scattering and \(t_{n}\) represents the hopping amplitude from the site \(n\) to \(n+m\). The inclusion of SDS leads to the effective site energy matrix \(\left(\mathbf{\epsilon}_{n}-\mathbf{h}_{n}\cdot\mathbf{\sigma}\right)\). Now, the presence of an external electric field \(E_{g}\), perpendicular to the helix axis, modifies the on-site energy in the following way [78; 77] \[\epsilon_{n}^{\mathrm{eff}}=\epsilon_{n}+ev_{g}\cos(n\Delta\phi-\beta), \tag{4}\] where \(e\) is the electronic charge, \(v_{g}\left(=2E_{g}R\right)\) is the applied gate voltage, and \(\beta\) is the angle between the incident electric field and the positive \(\hat{x}\) axis, \(R\) is the radius of the helix. Due to the helical shape of the physical system, the hopping term becomes quite tricky, unlike the usual nearest-neighbor hopping (NNH) case. The summations over the site indices are to be taken carefully. The expression for the hopping integral \(t_{n}\) is given by \[t_{n}=t_{1}\exp\left[-(l_{n}-l_{1})/l_{c}\right], \tag{5}\] \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline System & Radius & Stacking & Twisting & Decay \\ type & (R) & distance (\(\Delta z\)) & angle (\(\Delta\phi\)) & constant (\(l_{c}\)) \\ \hline SRH & 7 Å & 3.4 Å & \(\pi/5\) rad & 0.9 Å \\ \hline LRH & 2.5 Å & 1.5 Å & \(5\pi/9\) rad & 0.9 Å \\ \hline \end{tabular} \end{table} Table 1: Geometrical parameters for the helical sytem. where \(t_{1}\) and \(l_{1}\) are the nearest-neighbor hopping amplitude and the distance among the nearest-neighbor sites, respectively. \(l_{c}\) is the decay constant and \(l_{n}\) is the spatial separation between the sites \(n\) and \(n+m\). The expression of \(l_{n}\) is given by \[l_{n}=\left[\left(2R\sin\left(n\Delta\phi/2\right)\right)^{2}+\left(n\Delta z \right)^{2}\right]^{1/2}, \tag{6}\] where \(\Delta z\) and \(\Delta\phi\) are the stacking distance and twisting angle, respectively. 
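As a quick numerical illustration of Eqs. (5) and (6), the distance-dependent hopping integrals for the two parameter sets of Table 1 can be generated as follows; this is only a sketch with \(t_1\) set to unity, not the production code behind the results.

```python
import numpy as np

# Geometrical parameters from Table 1 (lengths in Angstrom, angles in rad).
SRH = dict(R=7.0, dz=3.4, dphi=np.pi / 5, lc=0.9)
LRH = dict(R=2.5, dz=1.5, dphi=5 * np.pi / 9, lc=0.9)


def hopping_integrals(R, dz, dphi, lc, N=20, t1=1.0):
    """Return t_n for n = 1, ..., N-1 following Eqs. (5) and (6)."""
    n = np.arange(1, N)
    ln = np.sqrt((2 * R * np.sin(n * dphi / 2)) ** 2 + (n * dz) ** 2)
    return t1 * np.exp(-(ln - ln[0]) / lc)


print("SRH:", hopping_integrals(**SRH)[:5])   # decays rapidly: short-range hopping
print("LRH:", hopping_integrals(**LRH)[:5])   # several neighbours contribute
```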
The contributions from the leads and the coupling between the leads and the central region to the total Hamiltonian read as \[\mathcal{H}_{\mathrm{S}} =\sum_{m<1}\mathbf{a}_{m}^{\dagger}\mathbf{\epsilon}_{0}\mathbf{a}_{m}+\sum_{ m<1}\left(\mathbf{a}_{m}^{\dagger}\mathbf{t}_{0}\mathbf{a}_{m-1}+h.c.\right) \tag{7a}\] \[\mathcal{H}_{\mathrm{D}} =\sum_{m>N}\mathbf{b}_{m}^{\dagger}\mathbf{\epsilon}_{0}\mathbf{b}_{m}+\sum_ {m>N}\left(\mathbf{b}_{m}^{\dagger}\mathbf{t}_{0}\mathbf{b}_{m+1}+h.c.\right)\] (7b) \[\mathcal{H}_{\mathrm{C}} =\mathbf{a}_{0}^{\dagger}\mathbf{\tau}_{\mathrm{S}}\mathbf{c}_{1}+\mathbf{c}_{N} ^{\dagger}\mathbf{\tau}_{\mathrm{D}}\mathbf{b}_{N+1}+h.c. \tag{7c}\] Here, \(\mathbf{a}_{n},\mathbf{b}_{n}\) are used for the source and the drain in the same way like the \(\mathbf{c}_{n}\) operator. \(\mathbf{\epsilon}_{0}\) and \(\mathbf{t}_{0}\) are \(2\times 2\) diagonal matrices where the on-site potential \(\epsilon_{0}\) and hopping amplitude \(t_{0}\) are taken to be the same for both the leads. The coupling between the source (drain) and the AFH is denoted by \(\mathbf{\tau}_{S}\left(\mathbf{\tau}_{D}\right)\), defined in the same footing as \(\mathbf{t}_{0}\). ### Two-terminal transmission probability We employ NEGF formalism to evaluate the two-terminal transmission probability through the helix system. The standard way to put up the retarded Green's function for the present case is as follows, \[\mathcal{G}^{r}=\left[(E+i\ 0^{+})\mathbb{I}-\mathcal{H}_{\mathrm{AFH}}-\Sigma_{ \sigma S}-\Sigma_{\sigma D}\right]^{-1}, \tag{8}\] where \(\sigma,\sigma^{\prime}\) are the spin indices, \(\Sigma_{\sigma S}\) and \(\Sigma_{\sigma D}\) represent the contact self-energies of the source and drain, respectively, \(\mathbb{I}\) is the identity matrix with dimension \(2N\times 2N\). The rest of the other symbols have the usual meaning. Now, the transmission probability can be expressed in terms of retarded (\(\mathcal{G}^{r}\)) and advanced \(\left(\mathcal{G}^{a}\left(=\mathcal{G}^{r}\right)^{\dagger}\right)\) Green's functions as \[\mathcal{T}_{\sigma\sigma^{\prime}}=\mathrm{Tr}\left[\Gamma_{\sigma S}\ \mathcal{G}^{r}\ \Gamma_{\sigma^{\prime}D}\ \mathcal{G}^{a}\right], \tag{9}\] where \(\Gamma_{\sigma S}\) and \(\Gamma_{\sigma D}\) are the coupling matrices that describe the rate at which particles scatter between the leads and the AFH. \(\mathcal{T}_{\sigma\sigma^{\prime}}\) indicates the probability of a transmitted electron with spin \(\sigma^{\prime}\) injected with spin \(\sigma\). We must mention that if \(\sigma=\sigma^{\prime}\), then we get pure spin transmission, otherwise we get a spin-flip transmission. We define the net up and down spin transmission probabilities as \[\mathcal{T}_{\sigma}=\sum_{\sigma^{\prime}}\mathcal{T}_{\sigma^{\prime}\sigma}, \tag{10}\] where \(\sigma,\sigma^{\prime}\) can be either \(\uparrow\) or \(\downarrow\). These are fundamental entities to calculate different thermoelectric quantities as described in the next sub-section. 
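Before turning to the thermoelectric quantities, a compact numerical sketch of Eqs. (8)-(10) may be helpful; note that the lead self-energies are replaced here by a wide-band approximation purely to keep the example short, whereas the actual calculation uses the self-energies of the semi-infinite 1D leads.

```python
import numpy as np


def transmission_up_down(E, eps_eff, h, t_mat, gamma=0.5):
    """Spin-resolved transmission of an N-site collinear AFH chain, Eqs. (8)-(10).

    eps_eff : effective site energies of Eq. (4), shape (N,)
    h       : SDS strengths with alternating sign (+h, -h, ...) for the AFM order
    t_mat   : symmetric (N, N) matrix of hopping integrals t_n (Eqs. (5)-(6)),
              with zeros on the diagonal
    gamma   : lead coupling in a wide-band approximation (assumption of this sketch)
    """
    N = len(eps_eff)
    out = []
    for sign in (-1.0, +1.0):        # -1: spin up (eps - h), +1: spin down (eps + h)
        # For collinear moments along +/- z the two spin sectors decouple.
        H = t_mat + np.diag(eps_eff + sign * np.asarray(h))
        Sigma = np.zeros((N, N), dtype=complex)
        Sigma[0, 0] = Sigma[-1, -1] = -0.5j * gamma      # source / drain self-energies
        Gr = np.linalg.inv((E + 1e-12j) * np.eye(N) - H - Sigma)
        GammaS = np.zeros((N, N)); GammaS[0, 0] = gamma
        GammaD = np.zeros((N, N)); GammaD[-1, -1] = gamma
        out.append(np.real(np.trace(GammaS @ Gr @ GammaD @ Gr.conj().T)))
    return tuple(out)                # (T_up, T_down)


# Toy usage: 6 sites, nearest-neighbour hopping only, staggered moments.
N = 6
t = np.zeros((N, N)); t[np.arange(N - 1), np.arange(1, N)] = 1.0; t += t.T
T_up, T_dn = transmission_up_down(E=0.3, eps_eff=np.zeros(N),
                                  h=0.5 * (-1) ** np.arange(N), t_mat=t)
# Without the electric field the two spin channels coincide, as discussed below.
```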
### Thermoelectric quantities
In the linear response regime, all the spin-resolved thermoelectric quantities, namely \(G_{\sigma}\), \(S_{\sigma}\), and the electronic thermal conductance \(k_{\sigma}^{\mathrm{el}}\), can be extracted using Landauer's integrals as [78; 79] \[G_{\sigma} =\frac{e^{2}}{h}L_{0\sigma}, \tag{11a}\] \[S_{\sigma} =-\frac{1}{eT}\frac{L_{1\sigma}}{L_{0\sigma}}, \tag{11b}\] \[k_{\sigma}^{\mathrm{el}} =\frac{1}{hT}\left(L_{2\sigma}-\frac{L_{1\sigma}^{2}}{L_{0\sigma}}\right), \tag{11c}\] where the spin-resolved Landauer integral is given by \[L_{n\sigma}=-\int\mathcal{T}_{\sigma}(E)(E-E_{F})^{n}\frac{\partial f_{\mathrm{FD}}}{\partial E}\ dE, \tag{12}\] where \(h\), \(f_{\mathrm{FD}}\), and \(E_{F}\) denote Planck's constant, the equilibrium Fermi-Dirac occupation probability, and the Fermi energy, respectively. Here, \(\mathcal{T}_{\sigma}(E)\) is the spin-resolved two-terminal transmission probability as defined earlier. Now, we define the charge (\(c\)) and spin (\(s\)) electrical conductances in the following way [80] \[G_{c}=G_{\uparrow}+G_{\downarrow}\quad\text{and}\quad G_{s}=G_{\uparrow}-G_{\downarrow}. \tag{13}\] The charge and spin Seebeck coefficients (thermopowers) are defined by [80; 81] \[S_{c}=\frac{1}{2}\left(S_{\uparrow}+S_{\downarrow}\right)\quad\text{and}\quad S_{s}=\left(S_{\uparrow}-S_{\downarrow}\right). \tag{14}\] Similarly, the charge and spin electronic thermal conductances are given by [80] \[k_{c}^{\mathrm{el}}=k_{s}^{\mathrm{el}}=k_{\uparrow}^{\mathrm{el}}+k_{\downarrow}^{\mathrm{el}}. \tag{15}\] The charge and spin _figures of merit_ can be expressed in a compact form as [80] \[Z_{\alpha}T=\frac{|G_{\alpha}|S_{\alpha}^{2}\,T}{k_{\alpha}},\qquad k_{\alpha}=k_{\alpha}^{\mathrm{el}}+k_{\mathrm{ph}}, \tag{16}\] where \(\alpha\,(=\mathrm{c},\mathrm{s})\) stands for the charge and spin degrees of freedom, \(k_{\mathrm{ph}}\) is the phonon contribution to the total thermal conductance, and \(T\) is the equilibrium temperature. A figure of merit of the order of unity is typically regarded as a favorable TE response; for an economically competitive response, \(Z_{\alpha}T\sim 3\) is often prescribed [82]. For a precise estimation of \(Z_{\alpha}T\), one needs to include the contribution of \(k_{\mathrm{ph}}\) to the thermal conductance. The method for calculating \(k_{\mathrm{ph}}\) is given in the following subsection.
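Once \(\mathcal{T}_{\sigma}(E)\) is available on an energy grid, Eqs. (11)-(16) reduce to one-dimensional quadratures. A possible implementation is sketched below (our naming; physical constants are taken from scipy so that \(G\), \(S\), and \(k^{\mathrm{el}}\) come out in S, V/K, and W/K, respectively).

```python
import numpy as np
from scipy.constants import e, h, k as kB

def landauer_integrals(E, T_E, EF, temp):
    """Spin-resolved Landauer integrals L_n of Eq. (12).
    E and EF in eV; returns (L0, L1, L2) with L_n in eV**n."""
    kT = kB * temp / e                                    # k_B T in eV
    x = np.clip((E - EF) / kT, -40.0, 40.0)
    minus_dfdE = 1.0 / (4.0 * kT * np.cosh(x / 2.0)**2)   # -df_FD/dE in 1/eV
    return [np.trapz(T_E * (E - EF)**n * minus_dfdE, E) for n in range(3)]

def spin_resolved_coefficients(E, T_E, EF, temp):
    """G_sigma (S), S_sigma (V/K), k_sigma^el (W/K) of Eqs. (11a)-(11c)."""
    L0, L1, L2 = landauer_integrals(E, T_E, EF, temp)
    G = (e**2 / h) * L0
    S = -(1.0 / temp) * (L1 / L0)            # (E - E_F) kept in eV, i.e. volts
    k_el = (e**2 / (h * temp)) * (L2 - L1**2 / L0)
    return G, S, k_el

def figures_of_merit(E, T_up, T_dn, EF, temp, k_ph=0.0):
    """Charge and spin ZT of Eq. (16) from the two spin channels."""
    Gu, Su, ku = spin_resolved_coefficients(E, T_up, EF, temp)
    Gd, Sd, kd = spin_resolved_coefficients(E, T_dn, EF, temp)
    k_alpha = ku + kd + k_ph                              # Eqs. (15)-(16)
    Zc = abs(Gu + Gd) * (0.5*(Su + Sd))**2 * temp / k_alpha
    Zs = abs(Gu - Gd) * (Su - Sd)**2 * temp / k_alpha
    return Zc, Zs
```

Feeding the spin-resolved transmissions from the previous sketch into `figures_of_merit` yields the kind of Fermi-energy scans discussed in Sec. III.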
### Calculation of phonon thermal conductance
When the temperature difference between the two contact electrodes is infinitesimally small, the phonon thermal conductance in the NEGF formalism can be evaluated from the expression [83; 84; 85; 86] \[k_{\rm ph}=\frac{\hbar}{2\pi}\int_{0}^{\omega_{c}}\mathcal{T}_{\rm ph}\frac{\partial f_{BE}}{\partial T}\omega\,d\omega. \tag{17}\] Here, \(\omega\) is the phonon frequency and \(\omega_{c}\) is the phonon cutoff frequency. We consider only elastic scattering in the present case. \(f_{BE}\) denotes the Bose-Einstein distribution function. \(\mathcal{T}_{\rm ph}\) is the phonon transmission probability across the central region, evaluated through the NEGF formalism as \[\mathcal{T}_{\rm ph}=\rm Tr\left[\Gamma_{S}^{\rm ph}\mathcal{G}_{\rm ph}\Gamma_{D}^{\rm ph}\left(\mathcal{G}_{\rm ph}\right)^{\dagger}\right], \tag{18}\] where \(\Gamma_{S/D}^{\rm ph}=i\left[\widetilde{\Sigma}_{S/D}-\widetilde{\Sigma}_{S/D}^{\dagger}\right]\) is known as the thermal broadening and \(\widetilde{\Sigma}_{S/D}\) is the self-energy matrix for the source/drain electrode. The phononic Green's function for the AFH reads as \[\mathcal{G}_{\rm ph}=\left[\mathbb{M}\omega^{2}-\mathbb{K}-\widetilde{\Sigma}_{S}-\widetilde{\Sigma}_{D}\right]^{-1}, \tag{19}\] where \(\mathbb{M}\) is a diagonal matrix describing the masses of the helix: each element \(\mathbb{M}_{nn}\) denotes the mass of the \(n\)-th atom in the helical system. \(\mathbb{K}\) is the matrix of spring constants. The diagonal element \(\mathbb{K}_{nn}\) denotes the restoring force of the \(n\)-th atom due to its neighboring atoms, while the element \(\mathbb{K}_{nm}\) represents the effective spring constant between the \(n\)-th and \(m\)-th neighboring atoms. The self-energy matrices \(\widetilde{\Sigma}_{S}\) and \(\widetilde{\Sigma}_{D}\) have the same dimension as \(\mathbb{M}\) and \(\mathbb{K}\) and can be computed by evaluating the self-energy term \(\Sigma_{S/D}=-K_{S/D}\exp\left[2i\sin^{-1}\left(\frac{\omega}{\omega_{c}}\right)\right]\), where \(K_{S/D}\) is the spring constant at the electrode-helix contact interface. The spring constants are determined from the second derivative of Harrison's interatomic potential [87]. Since a 1D system does not allow any transverse interaction [88], the spring constant for the 1D electrode is given by \(K=3dc_{11}/16\). For a 3D system like the helix, the spring constant is \(K=3d\left(c_{11}+2c_{12}\right)/16\). Here \(d\) denotes the interatomic spacing, and \(c_{11}\) and \(c_{12}\) are the elastic constants. The cut-off frequency for the 1D electrode is determined from the relation \(\omega_{c}=2\sqrt{K/M}\), in terms of the mass and spring constant.
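Equation (17) is likewise a single quadrature once \(\mathcal{T}_{\rm ph}(\omega)\) is known. The following sketch (our naming) evaluates it for an arbitrary transmission array; as a sanity check, a fully transparent single channel (\(\mathcal{T}_{\rm ph}=1\) up to \(\omega_c\)) saturates at roughly \(k_B\omega_c/2\pi\approx 30\) pW/K for the Au-electrode cutoff quoted in Sec. III, which is consistent with the saturation value of about 29 pW/K reported there for the helix.

```python
import numpy as np
from scipy.constants import hbar, k as kB

def phonon_thermal_conductance(omega, T_ph, temp):
    """k_ph of Eq. (17); omega in rad/s, T_ph dimensionless, result in W/K."""
    x = np.clip(hbar * omega / (kB * temp), 1e-12, 50.0)
    dfBE_dT = (x / temp) * np.exp(x) / np.expm1(x)**2    # d f_BE / dT
    return (hbar / (2.0*np.pi)) * np.trapz(T_ph * dfBE_dT * omega, omega)

# Example: ballistic single phonon branch up to the cutoff frequency
omega_c = 13.7e12                     # rad/s, value quoted for the Au electrode
omega = np.linspace(1e9, omega_c, 4000)
for T in (50, 150, 300):
    kph = phonon_thermal_conductance(omega, np.ones_like(omega), T)
    print(f"T = {T:3d} K   k_ph = {kph*1e12:.1f} pW/K")
```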
## III Numerical results and discussion
The interplay between the transverse electric field and the helicity plays the central role in obtaining spin-dependent TE phenomena in our chosen antiferromagnetic helix, which we discuss in this section. In the absence of either of these two ingredients, namely the helicity and the electric field, there is no mismatch between the up and down spin channels, and therefore we cannot expect any spin-dependent transport phenomena. The underlying physical mechanism is as follows. As all the magnetic moments are aligned along the \(\pm\hat{z}\) directions, the Hamiltonian of the antiferromagnetic helix can be decoupled into a sum of up and down spin Hamiltonians (viz., \(H_{\uparrow}+H_{\downarrow}\)). In the absence of an electric field, these two sub-Hamiltonians are symmetric to each other because of the antiparallel configuration of the successive magnetic moments, resulting in identical sets of energy eigenvalues. The symmetry can be broken quite easily by applying an electric field to the helix. Under that condition, we have a finite mismatch between the two spin-dependent energy channels. In the presence of a transverse electric field, the site energies are modulated in a cosine form, as given in Eq. 4. The site energy expression looks identical to the well-known Aubry-André-Harper (AAH) model [89]. In the AAH model, the on-site term reads \(\epsilon_{n}=W\cos\left(2\pi bn+\phi_{\nu}\right)\) (\(n\) being the site index), where \(W\) is the AAH modulation strength, \(b\) is an irrational number, and \(\phi_{\nu}\) is the AAH phase. The one-to-one mapping is obvious in view of Eq. 4, where one identifies the term \(ev_{g}\) as the AAH modulation strength \(W\), \(2\pi b\) as the twisting angle \(\Delta\phi\), and \(\beta\) as the AAH phase factor \(\phi_{\nu}\). Thus, one can capture the essential physics of the AAH model using the above formulation through our helical system.
Figure 2: (Color online). Eigenvalue spectrum in the presence of the electric field in the case of (a) short-range hopping and (b) long-range hopping with \(h=0.5\), \(v_{g}=1\), and \(\beta=0\). The number of sites in the helix is \(N=20\). The eigenvalues for the up and down spins are represented by red and black colors, respectively. Along the \(y\)-direction, different energy levels are shown.
Before discussing the results, let us first mention that the present communication focuses on the right-handed helix. All the energies are measured in units of eV. The on-site energies of the leads are taken to be zero. In the absence of any electric field, the on-site energies \(\epsilon_{n}\) in the AFH are fixed to zero, and we choose the NNH strength \(t_{1}=1\,\)eV and, for the leads, \(t_{0}=2.5\,\)eV. To work in the wide-band limit, we set \(t_{0}>t_{1}\). The coupling strengths between the central region and the source and drain electrodes, characterized by the parameters \(\tau_{S}\) and \(\tau_{D}\), are fixed at \(0.8\,\)eV. For other choices of parameter values, the physical picture remains qualitatively the same, which we have confirmed through exhaustive numerical calculations.
### Energy eigenvalues and transmission spectra
Let us begin the discussion with the spectral behavior of the antiferromagnetic helix in the presence of an electric field and the spin-dependent scattering parameter. Figure 2(a) shows the eigenspectrum of a typical short-range hopping AFH for \(N=20\) sites with \(h=0.5\), \(v_{g}=1\), and \(\beta=0\) for both the up and down spins, shown by red and black colors, respectively. The spectra for the up and down spins are non-degenerate. Similarly, Fig. 2(b) shows the eigenspectrum of the long-range AFH, keeping the parameters the same as for the SRH. The spectra are also non-degenerate for the two opposite spin cases. Moreover, each of the spectra is gapped and distributed in multiple bands. Thus, for both scenarios we obtain a non-zero spin separation in the presence of an electric field and the spin-dependent scattering parameter. Here it is important to point out that, whenever we set the field strength to zero, no such separation between the up and down spin energy eigenvalues takes place. The channel separation suggests that a finite mismatch is expected between the up and down spin transmission probabilities (which is the key requirement to have a spin figure of merit). To reveal this fact, in Fig. 3 we plot the spin-resolved transmission probabilities as a function of energy. The transmission probabilities for the up and down spin channels are shown by red and black colors, respectively, for the SRH (Fig. 3(a)) and LRH (Fig. 3(b)) antiferromagnetic helices. The system size and the other parameters are the same as in Fig. 2. The up and down spin transmission spectra are different from each other both for the SRH and LRH helices. The transmission spectrum shows a gapped [77; 90] nature for both spin channels due to the presence of the electric field, which acts as a correlated disorder in the system, as mentioned earlier. Now, in order to have a favorable spin TE response, the spin-resolved transmission spectrum must satisfy two conditions. First, the transmission spectrum should be asymmetric around a fixed energy [91; 92]. Second, there must be a crossing between the up and down spin transmission spectra.
The first criterion is a general one, valid for both charge and spin TE cases, while the second one is desirable for a favorable spin TE response. In Figs. 3(a) and (b), a few of such crossovers in the transmission profile are marked with dotted ellipses with blue color and also shown in the insets for clarity. For example in Fig. 3(a), around the energy \(1.4\,\)eV, on the left side of the crossing, there is a sharp peak in the down-spin transmission, while the up-spin transmission spectrum has a sharp peak on the right side. Such a sharp peak leads to an asymmetry of the transmission function and the crossings assure a large spin thermopower. We discuss this aspect in greater detail in the context of thermopower in the next sub-section. Figure 3: (Color online). Spin-resolved transmission probability as a function of energy for (a) short-range hopping and (b) long-range hopping. All the parameters and color conventions are the same as described in Fig. 2. The blue dotted ellipses mark the cross-over regions between up and down spin transmissions. In the insets, the cross-over is more visible and also shows that the transmission probabilities are small but finite. ### Thermoelctric quantities Now, let us analyze the different TE quantities like electrical conductance, thermopower, thermal conductance, and figure of merit at room temperature (\(T=300\,\)K) in the presence of the electric field and spin-dependent scattering parameter. The charge and spin TE entities are computed for both the SRH and LRH helices. The variation of electrical conductance \(G_{\alpha}\) (in units of \(e^{2}/h\)) with Fermi energy \(E_{F}\) is shown in Figs. 4(a) and (b) for the SRH and LRH cases, respectively, where \(\alpha\) corresponds to charge and spin. In Fig. 4(a), we see that charge and spin dependent electrical conductances (shown by red and green curves, respectively) are almost symmetric about \(G_{\alpha}=0\) line for the short-range helix. The maximum value of \(|G_{\alpha}|\) is found to be \(\approx 0.46\) and becomes vanishingly small beyond \(E_{F}\sim 1.4\,\)eV. This can be explained using the transmission profile of the SRH case. Remember, the total charge electrical conductance is defined as the sum of contributions coming from up and down spin channels, whereas the spin counterpart is the difference between the two. Since the charge and spin \(G_{\alpha}\) are of the opposite signs below \(E_{F}\sim 1.4\,\)eV, it implies that the contribution from the up spin channel is vanishingly small. This is due to the fact that in this Fermi energy range, the up spin transmission probability is negligibly small compared to the down spin transmission probability as seen in Fig. 3(a). Now, in the range of Fermi energy from \(\sim 1.4\) to \(\sim 2\), we see that both the up and down transmission probabilities are small, leading to vanishingly small \(G_{\alpha}\) as shown in Fig. 4(a). The LRH helix for the same set of parameters shows somewhat similar behavior as observed in Fig. 4(d), reflecting the up and down spin transmission spectra shown earlier. As the TE efficiency is directly proportional to the square of the thermopower, a large \(S\) is always desirable. Moreover, in the case of spin TE, it is possible to achieve two different signs of thermopower, which can algebraically sum up to produce a larger value of \(ZT\). Thus the choice of Fermi energy becomes very tricky. 
To have different signs of thermopower, one should look for a small region of the Fermi energy window where the transmission function is asymmetric. Not only that, the up and the down spin channels must have slopes of different signs. The thermopower is calculated using Eq. 11(b) and the corresponding Landauer's integral \(L_{1}\) where the transmission function is multiplied by \((E-E_{F})\) and \(\frac{\partial f_{\text{FB}}}{\partial E}\). The latter term provides thermal broadening and the product of the two is antisymmetric around the chosen \(E_{F}\). As a result of that, if the transmission function is symmetric around \(E_{F}\), then the thermopower will be zero irrespective of the value of the transmission probabilities. Now, if the slopes of the spin-resolved transmission functions are of the opposite signs around the chosen Fermi energy, then the thermopower picks a different sign with large values due to the asymmetric nature of the \(\mathcal{T}(E)\). This will lead to a larger spin thermopower. Figures 4(b) and (e) show the variation of thermopower with Fermi energy in the same energy window as discussed for electrical conductance in the case of SRH and LRH, respectively. From the transmission profile of SRH helix (see Fig. 3(a)), it is clear that around \(E\sim 1.4\), the up and down spin channels have a slope of different sign Figure 4: (Color online). Behavior of different thermoelectric quantities at room temperature \(T=300\,\)K as a function of Fermi energy. The upper panel shows the results for the SRH helix and the lower panel for the LRH one. In (a) and (d) electrical conductance (\(G_{\alpha}\)), (b) and (e) thermopower (\(S_{\alpha}\)), and (c) and (e) thermal conductance due to electrons (\(k_{\text{el}}\)) are shown. All the parameters are the same as described in Fig. 2. The subscript \(\alpha\) represents the charge (\(c\)) and spin (\(s\)) degrees of freedom and their corresponding results are shown by red and green curves, respectively. (shown by the blue dotted ellipse in Fig. 3), leading to a large value of spin thermopower as is seen in Fig. 4(b). Since the charge thermopower is the algebraic sum of the up and down spins, respectively, it becomes very small at this Fermi energy. Similarly, one can explain the large value of the spin thermopower in case LRH as shown in Fig. 4(e). Here too, the corresponding transmission profile shows that the up and down spin channels have a slope of opposite signs at \(E_{F}\sim 1.38\,\)eV, yielding large spin thermopower compared to its charge counterpart. The maximum thermopower is about \(600\,\mu\)V/K for SRH helix and \(550\,\mu\)V/K for the LRH one. The behavior of thermal conductance due to electrons as a function of Fermi energy is shown in Figs. 4(c) and (f) for SRH and LRH helices, respectively. In the given Fermi energy window, the maximum value of thermal conductance is \(\sim 135\,\)pW/K (see Fig. 4(c)) and is suppressed beyond \(E_{F}\sim 1.5\,\)eV. In the case of LRH, thermal conductance is found to have lower values than that for SRH. The maximum value is close to \(33\,\)pW/K around \(E_{F}\sim 1.38\,\)eV. The system sizes considered in the present work are small (of the order of a few nm) and therefore it is expected that the thermal conductance due to phonons should have lower values compared to its electronic counterpart. However, for a precise estimation of the figure of merit, it is important to include the thermal conductance due to phonons, which we discuss now. 
### Phonon contribution to thermal conductivity Before we discuss the behavior of \(k_{\rm ph}\), one needs to mention the spring constants of the electrodes and the central helix molecule. The 1D electrodes are considered Au electrodes, whose spring constant is \(14.68\,\)N/m [93]. For the helix molecule, we consider the spring constant about \(5.1\,\)N/m, which is considered as same as the single-crystal benzene [94]. Here we assume that two different atoms are adjacent to each other at the interface, one type of atom accounts for the Au electrode and the other type for the helix molecule. By averaging the spring constants of the electrodes and helix molecule, and the masses, the cut-off frequency for Au electrode comes out to be \(\omega_{c}=13.7\,\)Trad/s. Here it should be noted that the spring constant for the helix molecule is chosen for a light molecule. However, if one works with heavy molecules, the phonon vibrations will be less than our case and therefore, \(k_{\rm ph}\) is expected to have lower values, and hence larger \(ZT\). In Fig. 5(a), the phonon transmission probability is plotted as a function of phonon frequency. We observe a few Fabry-perot-like peaks [85]. The behavior of phonon thermal conductance with temperature is shown in Fig. 5(b). Within the temperature window \(50\) to \(150\,K\), \(k_{\rm ph}\) increases rapidly with temperature, and then it tends to saturate. The saturated value is about \(29\,\)pW/K. ### Thermoelectric efficiency With all the TE quantities and considering the phonon contribution, we finally compute FOM. At room temperature, the charge and spin \(ZT\)s as a function of the Fermi energy are presented for SRH and LRH helices, respectively, as shown in Figs. 6(a) and (b), respectively. Both SRH and LRH molecules exhibit favorable spin TE responses and dominate over their charge counterpart. Maximum spin-\(ZT\) is obtained about \(7\) for SRH and \(4.5\) for LRH at \(E_{F}\sim 1.35\,\)eV and \(E_{F}\sim 1.45\,\)eV, respectively. Thus, our prescription indeed shows a favorable spin TE response at room temperature. ### Role of \(\beta\) So far, the direction of the electric field was assumed to be parallel to the positive \(\hat{x}\)-axis, that is \(\beta=0\). To study the effect of \(\beta\) on TE performance, we consider Figure 5: (Color online). (a) Phonon transmission probability \(\mathcal{T}_{\rm ph}\) as a function of phonon angular frequency \(\omega\). (b) Phonon thermal conductance \(k_{\rm ph}\) as a function of temperature \(T\). other three different angles, namely, \(\beta=\pi/6\), \(\pi/3\), and \(\pi/2\). The result for \(\beta=0\) is also included for comparison. Figure. 7 shows the variation of charge and spin figure of merits in the case of LRH at room temperature as a function of Fermi energy for different values of \(\beta\). All other parameters are kept fixed, as stated earlier. The variation of \(Z_{\alpha}T\) as a function of Fermi energy varied from \(-2.5\) to \(3.5\,\)eV, which is the full energy window as shown in Fig. 2. Mostly, in all the cases, spin-\(ZT\) shows favorable response at different Fermi energies. Maximum spin-\(ZT\) is noted about \(4.5,1.75,0.8\), and \(6.58\) for \(\beta=0,\pi/6\), \(\pi/3\), and \(\pi/2\), respectively. Interestingly, maximum values of spin-\(ZT\) dominate over the charge-\(ZT\) for all the \(\beta\) values considered here. The effect of \(\beta\) on \(Z_{\alpha}T\) for SRH will lead to more or less similar features and hence is not included for the brevity of the presentation. 
### Effect of temperature
All the results discussed so far are at room temperature \(T=300\,\)K. To study the effect of temperature, we have plotted \(Z_{\alpha}T\) as a function of Fermi energy for three other temperatures, namely \(T=150\,\)K, \(250\,\)K, and \(350\,\)K, as shown in Fig. 8 for the LRH helix. The other parameters are kept fixed, as mentioned in Fig. 3. The spin \(ZT\) and the charge \(ZT\) are shown by the green and red colors, respectively. The temperature profile indicates that the maximum value of the spin figure of merit tends to increase with increasing operating temperature. The maximum \(Z_{s}T\), in this case, is found to be around \(4.7\) at temperature \(350\,\)K.
Figure 6: (Color online). Behavior of \(Z_{c}T\) and \(Z_{s}T\) as a function of Fermi energy at room temperature for (a) SRH and (b) LRH helices. All the parameters are the same as in Fig. 2. The red and green curves represent the results for charge and spin FOMs, respectively.
Figure 7: (Color online). Variation of charge and spin FOMs as a function of Fermi energy for different values of \(\beta\), shown by red and green colors, respectively, for the LRH helix. All the parameters are identical to those in Fig. 2. The \(\beta\)-values considered are \(\beta=\pi/6\), \(\pi/3\), and \(\pi/2\); \(\beta=0\) is included for comparison.
Figure 8: (Color online). Charge and spin thermoelectric FOMs as a function of Fermi energy at different temperatures, shown by the red and green colors, respectively, for the LRH helix. All the parameters are identical to those in Fig. 2.
## IV Conclusions
In the present work, we have proposed a scheme to achieve a favorable spin TE response in a typical helical geometry with a spin configuration of antiferromagnetic texture. We have considered both the short- and long-range hopping scenarios, which potentially mimic biological systems like single-stranded DNA and \(\alpha\)-protein molecules. We have considered the spin-dependent scattering phenomena and also a transverse electric field to study thermoelectric physics in the helical system. In the absence of the electric field or the helicity, spin-dependent phenomena are no longer observed. We have used the NEGF formalism following the Landauer-Büttiker prescription to study the thermoelectric phenomena. Both the charge and spin TE responses have been studied. For a precise estimation of the TE _figure of merit_, we have computed the phonon contribution to the total thermal conductance. We have achieved a highly favorable spin TE response compared to the charge counterpart at room temperature for both the SRH and LRH molecules. The role of \(\beta\), that is, the angle between the direction of the electric field and the positive \(\hat{x}\)-axis, on \(ZT\), as well as the effect of temperature, have also been examined. To the best of our knowledge, spin-dependent TE phenomena in an antiferromagnetic helix have not been studied so far in the literature. Our proposition provides a new route toward achieving efficient energy conversion using similar kinds of fascinating antiferromagnetic systems.
2303.16116
Period-doubling bifurcations and islets of stability in two-degree-of-freedom Hamiltonian systems
In this paper, we show that the destruction of the main KAM islands in two-degree-of-freedom Hamiltonian systems occurs through a cascade of period-doubling bifurcations. We calculate the corresponding Feigenbaum constant and the accumulation point of the period-doubling sequence. By means of a systematic grid search on exit basin diagrams, we find the existence of numerous very small KAM islands ('islets') for values below and above the aforementioned accumulation point. We study the bifurcations involving the formation of islets and we classify them in three different types. Finally, we show that the same types of islets appear in generic two-degree-of-freedom Hamiltonian systems and in area-preserving maps.
Alexandre R. Nieto, Jesús M. Seoane, Miguel A. F. Sanjuán
2023-03-28T16:35:34Z
http://arxiv.org/abs/2303.16116v1
# Period-doubling bifurcations and islets of stability in two-degree-of-freedom Hamiltonian systems ###### Abstract In this paper, we show that the destruction of the main KAM islands in two-degree-of-freedom Hamiltonian systems occurs through a cascade of period-doubling bifurcations. We calculate the corresponding Feigenbaum constant and the accumulation point of the period-doubling sequence. By means of a systematic grid search on exit basin diagrams, we find the existence of numerous very small KAM islands ("islets") for values below and above the aforementioned accumulation point. We study the bifurcations involving the formation of islets and we classify them in three different types. Finally, we show that the same types of islets appear in generic two-degree-of-freedom Hamiltonian systems and in area-preserving maps. pacs: 05.45.Ac,05.45.Df,05.45.Pq Introduction One of the most remarkable characteristics of conservative nonlinear systems, such as area-preserving maps and non-integrable Hamiltonians, is the existence of Kolmogorov-Arnold-Moser (KAM) tori surrounding stable periodic orbits. Embedded in a chaotic sea, KAM tori constitute regions ("islands") of stability where periodic and quasiperiodic motions take place. Nonetheless, the inner structure of KAM islands is anything but simple. As shown by the Poincare-Birkhoff theorem [1; 2], resonant islands are constantly created around the main stable periodic orbit. Near these resonant islands, chaotic orbits can exist and form an inner chaotic domain [3]. As a result, chaotic and regular trajectories coexist within KAM islands, and they are separated from the chaotic sea by a boundary known as the "last KAM curve" [4]. As the parameters of the system are modified, the structure of the KAM islands evolves in a complex manner. Even though the presence of KAM islands is directly explained by the existence of stable periodic orbits, they undergo an infinite set of bifurcations that generate a fractal tree-like structure that has been firstly shown in a paper by Greene _et al._[5]. The ramifications appearing in the top of these structures are a consequence of a sequence of period-doubling bifurcations similar to the ones studied by Feigenbaum in the case of dissipative systems [6]. This analogous behavior observed in both dissipative and conservative systems lead to intensive efforts to numerically characterize the sequences of period-doubling bifurcations in conservative systems. So much so that during the early '80s of the past century, within only a few years different authors obtained that in two-dimensional area-preserving maps the Feigenbaum constant takes the value \(\delta_{H}\approx 8.721\)[7; 8; 9] (we recall that the dissipative Feigenbaum constant is \(\delta\approx 4.669\)). Some years later, these results have been extended to four-dimensional volume-preserving maps [10]. In the case of continuous-time Hamiltonian systems, the literature is filled with countless articles studying periodic orbits and their close relation with KAM tori. Some early works are [11; 12; 13; 14; 15], while more recent research can be found in [16; 17; 18]. Undoubtedly, one of the disadvantages of Hamiltonian systems when compared with discrete ones is the computational cost of the numerical simulations and, in this context, the difficulty to accurately detect periodic orbits. As a consequence, numerous research works have focused the attention on developing new methods and techniques to search for periodic orbits [19; 20; 21; 22]. 
Nonetheless, despite the wide variety of techniques for computing periodic orbits, the period-doubling cascades have not been exhaustively explored in two-degree-of-freedom Hamiltonian systems and, as far as we know, the conservative Feigenbaum constant has not been obtained in this kind of systems. In this paper, we use a two-degree-of freedom-Hamiltonian system to describe the destruction of the main tori in terms of the period-doubling cascade. We also calculate the conservative Feigenbaum constant, obtaining the same value that was found in discrete conservative systems, as indicated above. Based on previous research, one might assume that the structure and evolution of KAM islands can be fully understood by studying the bifurcations of the main stable periodic orbit. Additionally, by numerically obtaining the accumulation point (also known as Feigenbaum point) of the period-doubling sequence, the exact parameter value at which the last KAM tori are destroyed can be determined. Over this value, the reign of chaos begins. However, research conducted in the '80s of the past century discovered that typical area-preserving maps exhibit very small KAM islands ("islets") even for parameter values significantly above the accumulation point [23]. This finding was corroborated years later by Contopoulos _et al._, who found that these islets of stability were not related to the main tori, but instead seemed to appear in saddle-node bifurcations out in the chaotic sea. Recently, islets of stability have also been found in two-degree-of-freedom Hamiltonian systems [25]. Moreover, it has been demonstrated through computer-assisted proofs that they are not a product of spurious numerical simulations [17]. Although islets occupy a small volume in phase space and appear in a reduced range of parameter values, their existence implies that the system dynamics is not fully governed by chaos. Moreover, even small KAM islands can influence nearby chaotic trajectories through their stickiness [26; 27], as well as affect global system properties such as transport [28; 29] and decay correlations [30]. In this manuscript, we have conducted a comprehensive search for islets and we have found many of them below and above the accumulation point. After carefully analyzing the bifurcations involved in their formation, we have classified them into three different types. The manuscript is organized as follows. First, in Sec. II, we introduce the model used in this work and the methods for computing periodic orbits and their stability. The description of the destruction of the main tori, together with the numerical computation of the conservative Feigenbaum constant is shown in Sec. III. The analysis and classification of islets is carried out in Sec. IV. To illustrate the generality of the previous results, in Sec. V we show that the same types of islets also appear in different Hamiltonian systems and even in the case of area-preserving maps. Finally, in Sec. VI, we present the main conclusions of this manuscript. Model description For this research, we chose the Henon-Heiles system [31] as our model. This system is a well-known example of a two-degree-of-freedom Hamiltonian and has been extensively studied in the field of nonlinear dynamics. It was named after the French astronomer Michel Henon and the American astrophysicist Carl Heiles, who used it in 1964 to search for the third integral of motion. 
The Hamiltonian describing this system is given by: \[\mathcal{H}=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2})+\frac{1}{2}(x^{2}+y^{2})+x^{2}y-\frac{1}{3}y^{3}. \tag{1}\] As a consequence, the equations of motion read: \[\begin{split}\dot{x}&=p_{x},\\ \dot{y}&=p_{y},\\ \dot{p_{x}}&=-x-2xy,\\ \dot{p_{y}}&=-y-x^{2}+y^{2}.\end{split} \tag{2}\] Since the Hamiltonian function governing the Hénon-Heiles system has no time dependence, the energy is conserved and can be expressed as \(\mathcal{H}(x,y,p_{x},p_{y})=E\). Above the threshold \(E_{e}=1/6\), known as the escape energy, the potential exhibits three symmetric exits separated by an angle of \(2\pi/3\) radians, as can be seen in Fig. 1. When the energy exceeds \(E_{e}\), the particles can escape towards \(\pm\infty\) through one of these exits. Conversely, when the energy is below \(E_{e}\), the motion of the particles is bounded. The fact that the Hénon-Heiles system exhibits escapes allows us to define exit basins [32; 33]. Similarly to basins of attraction in dissipative systems, exit basins are sets of initial conditions that lead to escape through a specific exit of the potential. Since initial conditions within a KAM island do not escape, it is possible to accurately detect the external structure of KAM islands by computing exit basin diagrams. This approach reduces the computational cost compared to closed systems, where a systematic search for KAM islands requires the use of chaos indicators such as SALI or GALI [34; 35]. As an example, we show exit basin diagrams for two values of the energy (\(E=0.17\) and \(E=0.18\)) in Fig. 2. The colors green, red, and blue indicate initial conditions escaping through exits \(1\) (\(y\rightarrow\infty\)), \(2\) (\(x,y\rightarrow-\infty\)), and \(3\) (\(x\rightarrow\infty,y\rightarrow-\infty\)), respectively. The white regions inside the potential correspond to initial conditions that never escape, so they constitute KAM islands.
Figure 2: Exit basins in the physical space of the Hénon-Heiles system with energy (a) \(E=0.17\) and (b) \(E=0.18\). The colors red, green and blue refer to initial conditions leading to the three exits of the potential: Exit \(1\) (\(y\to\infty\)), Exit \(2\) (\(x,y\to-\infty\)), and Exit \(3\) (\(x\to\infty,y\to-\infty\)). White regions inside the potential correspond to KAM islands.
Figure 1: Isopotential curves of the Hénon-Heiles system for different values of the potential \(V(x,y)=\frac{1}{2}(x^{2}+y^{2})+x^{2}y-\frac{1}{3}y^{3}\). The curves are color-coded based on the value of the potential, as indicated by the accompanying color bar. Values below and above the escape energy \(E_{e}=1/6\) are displayed. The three saddle points of the potential are indicated on the plot by red dots.
Using a simple tool like the exit basin diagrams, we can find KAM islands and detect their external structure with high accuracy. Hence, for a complete description of their evolution and destruction we only need to compute the associated periodic orbits and their stability. The Hénon-Heiles system, like most Hamiltonian systems, has some symmetries. In particular, the system is time-reversible and possesses the symmetry group of an equilateral triangle (\(D_{3}\) symmetry). As a consequence, its periodic orbits are also symmetric. They can be symmetric with respect to the three symmetry axes or only with respect to one of them. In the latter case, there necessarily exist two additional periodic orbits that are symmetric with respect to the other two symmetry axes. Due to these symmetry arguments, all periodic orbits must cross one of the three symmetry axes perpendicularly. For convenience, we find periodic orbits that are symmetric about the \(y\)-axis. Hence, any trajectory that starts at \(x_{0}=0\) perpendicular to the \(y\)-axis (i.e., \(\dot{y}_{0}=0\) and \(\dot{x}_{0}=f(y_{0},E)\)) and that eventually crosses the same axis perpendicularly again corresponds to a periodic orbit. The number of crossings between perpendicular intersections is the multiplicity \(m\) of the periodic orbit. On the other hand, the period \(T\) of a periodic orbit is twice the time needed to return perpendicularly to the \(y\)-axis. Therefore, the condition for a periodic orbit to exist is \(x(0,y_{0},\dot{x}_{0},0;T/2)=\dot{y}(0,y_{0},\dot{x}_{0},0;T/2)=0\). Consequently, we have computed periodic orbits following the systematic search for symmetric periodic orbits described in [21]. We have determined the stability of periodic orbits by means of the eigenvalues of the monodromy matrix \(M(T)\), which is the solution at time \(T\) (one period of the orbit) of the linear matrix differential system \[\dot{M}=\begin{pmatrix}0&I_{2}\\ -\text{Hess}(V(x,y))&0\end{pmatrix}M\qquad\text{with}\,M(0)=I_{4}, \tag{3}\] where \(\text{Hess}(V(x,y))\) is the Hessian matrix of the potential function and \(I_{n}\) denotes the identity matrix of order \(n\). Since \(M(T)\) is a real symplectic matrix, its eigenvalues need not be explicitly calculated. Instead, the stability can be determined using the stability index \(\kappa=\text{tr}(M(T))-2\) [36]. In particular, a periodic orbit is stable if \(|\kappa|<2\), unstable if \(|\kappa|>2\), and critical if \(|\kappa|=2\).
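For readers who want to reproduce these diagnostics, the sketch below (our function names and tolerances) integrates the equations of motion (2), propagates the monodromy matrix of Eq. (3) to evaluate the stability index \(\kappa\), and classifies escapes for exit-basin diagrams such as Fig. 2. The initial-velocity convention and the escape radius are illustrative choices; locating an actual periodic orbit additionally requires a root search on \((y_0,T)\), as in Ref. [21].

```python
import numpy as np
from scipy.integrate import solve_ivp

def henon_heiles_rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2*x*y, -y - x**2 + y**2]

def rhs_with_monodromy(t, s):
    """State (x, y, px, py) plus the 4x4 monodromy matrix of Eq. (3)."""
    x, y, px, py = s[:4]
    M = s[4:].reshape(4, 4)
    hess = np.array([[1.0 + 2*y, 2*x],
                     [2*x, 1.0 - 2*y]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-hess, np.zeros((2, 2))]])
    return np.concatenate(([px, py, -x - 2*x*y, -y - x**2 + y**2],
                           (A @ M).ravel()))

def stability_index(y0, E, period):
    """kappa = tr(M(T)) - 2 for an orbit launched perpendicular to the y-axis.
    (y0, period) must correspond to an actual periodic orbit."""
    px0 = np.sqrt(2.0*(E - (0.5*y0**2 - y0**3/3.0)))   # x0 = 0, py0 = 0
    s0 = np.concatenate(([0.0, y0, px0, 0.0], np.eye(4).ravel()))
    sol = solve_ivp(rhs_with_monodromy, (0.0, period), s0,
                    method="DOP853", rtol=1e-11, atol=1e-11)
    M = sol.y[4:, -1].reshape(4, 4)
    return np.trace(M) - 2.0

def exit_label(x0, y0, E, t_max=500.0, r_esc=5.0):
    """Crude exit-basin classification: 0 = no escape (KAM candidate),
    1, 2, 3 = escape through the three exits of the potential."""
    V0 = 0.5*(x0**2 + y0**2) + x0**2*y0 - y0**3/3.0
    px0 = np.sqrt(2.0*(E - V0))          # one possible initial-velocity choice
    hit = lambda t, s: np.hypot(s[0], s[1]) - r_esc
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(henon_heiles_rhs, (0.0, t_max), [x0, y0, px0, 0.0],
                    events=hit, rtol=1e-10, atol=1e-10)
    if sol.t_events[0].size == 0:
        return 0
    x, y = sol.y_events[0][0][:2]
    ang = np.degrees(np.arctan2(y, x)) % 360.0
    return 1 if 30 < ang < 150 else (2 if 150 < ang < 270 else 3)
```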
## III The destruction of the main KAM island
The Hénon-Heiles system features a main KAM island that surrounds a stable periodic orbit and its bifurcation branches. The bifurcations that occur in the branches of periodic orbits before they become unstable have been studied in depth in [17; 18]. Here, we focus our attention on the period-doubling bifurcations that destroy the main family of periodic orbits and cause the main KAM island to disappear. For low energy values, the main KAM island surrounds a periodic orbit of multiplicity \(m=1\). For energies near zero, the periodic orbit takes on an almost circular shape because the system behaves like a two-dimensional harmonic oscillator. At higher energies, the orbit exhibits a triangular symmetry, as shown in Fig. 3(a) for \(E=0.1486\).
Figure 3: Periodic orbits in the Hénon-Heiles system for energy values (a) \(E=0.1486\), (b) \(E=0.1488\), (c) \(E=0.2062\), and (d) \(E=0.2064\). The multiplicity \(m\) of the orbits is indicated in each panel. Orbits depicted in panels (a-b) and (c-d) have been computed for energy values just prior to and immediately following the first and second period-doubling bifurcations, respectively.
By slightly increasing the energy until \(E_{1}\approx 0.14865\), the periodic orbit loses its stability and a stable periodic orbit of double multiplicity emerges (see Fig. 3(b)). Therefore, the first period-doubling bifurcation has occurred. Further increasing the energy causes the shape of the \(m=2\) periodic orbit to evolve until it becomes almost unrecognizable, as illustrated in Fig. 3(c). Following the same fate as its parent periodic orbit, this \(m=2\) periodic orbit loses its stability in the subsequent period-doubling bifurcation, which occurs for \(E_{2}\approx 0.20626\).
The newly bifurcated \(m=4\) periodic orbit is depicted in Fig. 3(d). This sequence of period-doubling bifurcations continues until reaching the accumulation point \(E_{\infty}\), where the last bifurcation branches become unstable. As a consequence, beyond \(E_{\infty}\) large KAM islands do not exist anymore in the system. The period-doubling bifurcations and their effects on the structure of KAM islands can be visualized by representing the branches of periodic orbits over an exit basin diagram in the \((y,E)\) plane. Since we are not interested here in the fractal structures of the exit basins, we have assigned white color to all escaping trajectories, while KAM islands are depicted in blue. The result is shown in Fig. 4, where green (red) lines denote stable (unstable) periodic orbits. In this figure, each panel is a magnification of the area enclosed by dashed lines in the previous one. Therefore, the \(m=2\) branches are represented in Fig. 4(a), while the following panels show the subsequent period-doubling bifurcations. Regardless of the energy range, it can be observed that panels (b) and (d) exhibit the same qualitative features, while panel (c) is a mirror image of the other panels. As a matter if fact, this self-similar fractal structure repeats itself indefinitely within a finite energy range. Moreover, the bifurcations that occur in the branches of periodic orbits before they become unstable repeat in the same sequence at different scales. Therefore, each of these figures captures the fundamental aspects of the formation, evolution, and destruction of the main KAM island. We highlight that these structures are not representative of the Henon-Heiles system only, but they are astonishingly similar in many different conservative systems (e.g, see Fig. 8 in [5] and Figs. 9 and 10 of this manuscript). By detecting the loss of stability of periodic orbits, we have obtained numerically the energy values \(E_{n}\)\((n=1,2,3...)\) where the first \(7\) period-doubling bifurcations occur. The results are shown in the first three columns of Table 1. In this table, and throughout the whole manuscript, the uncertainty in the last significant digits of the parameters is indicated between parentheses. In the case of \(E_{n}\), the uncertainty is given by half the difference between two consecutive energy values where we detect that the stability of the periodic orbit changes. Figure 4: Branches of periodic orbits and KAM islands in the Hénon-Heiles system. The stable (unstable) periodic orbits are represented using green (red) lines. The KAM islands have been determined by computing the exit basins along the \(y\)-axis for different energies. Escaping initial conditions are colored in white, while KAM islands (non-escaping initial conditions) are represented in blue. Panel (a) shows the \(m=2\) branches, while the next panels represent the subsequent period-doubling bifurcations. Note that each panel is a magnification of the area enclosed by dashed lines in the previous one. Once we have obtained the parameter values where the period-doubling bifurcations occur, we can estimate the Feigenbaum constant, which is given by: \[\delta_{H}=\lim_{n\rightarrow\infty}\frac{E_{n-1}-E_{n-2}}{E_{n}-E_{n-1}}, \tag{4}\] where the index \(H\) indicates that the constant is calculated in a Hamiltonian system. All estimates of \(\delta_{H}\) are shown in the last column of Table 1, while the standard methods to calculate its uncertainty are explained in Appendix A. 
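As a quick numerical illustration of Eq. (4), the snippet below (our notation) reproduces the \(\delta_{H}\) column of Table 1 directly from the tabulated bifurcation energies, and also evaluates the geometric-tail estimate of the accumulation point used in Eq. (5) below.

```python
import numpy as np

# Bifurcation energies E_1..E_7 from Table 1
E = np.array([0.1486504275, 0.2062564235, 0.2105406495, 0.2110432870,
              0.21110070066, 0.21110728629, 0.211108041425])

# delta_H estimates, Eq. (4), from three consecutive bifurcation energies
for n in range(2, len(E)):
    delta = (E[n-1] - E[n-2]) / (E[n] - E[n-1])
    print(f"n = {n+1}:  delta_H ~ {delta:.5f}")

# Accumulation point from the geometric tail (cf. Eq. (5)), using E_6 and E_7
delta_H = (E[5] - E[4]) / (E[6] - E[5])
E_inf = E[5] + delta_H * (E[6] - E[5]) / (delta_H - 1.0)
print("E_inf ~", E_inf)
```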
Our best approximation (using \(E_{5}\), \(E_{6}\), and \(E_{7}\)) is \(\delta_{H}=8.72113(47)\), which agrees to a large extent with the result obtained by Greene _et al._ in two-dimensional area-preserving maps [5] and by Mao _et al._ in four-dimensional volume-preserving maps [10]. Therefore, we confirm that the value of the Feigenbaum constant is not only universal for area-preserving maps, but also for two-degree-of-freedom Hamiltonian systems. The infinite sequence of period-doubling bifurcations occurs within a finite energy range. Therefore, there exists an accumulation point that can be calculated as follows: \[\begin{split} E_{\infty}&=E_{6}+\sum_{k=0}^{\infty}(E_{7+k}-E_{6+k})=E_{6}+\sum_{k=0}^{\infty}\frac{(E_{7}-E_{6})}{\delta_{H}^{k}}\\ &=E_{6}+\frac{\delta_{H}(E_{7}-E_{6})}{\delta_{H}-1}=0.211108139226(35),\end{split} \tag{5}\] where we have used our best estimation for \(\delta_{H}\). Using a more accurate value \(\delta_{H}=8.721097200(1)\), we obtain \(E_{\infty}=0.211108139227(30)\). Both estimations only differ in the last significant digit.
\begin{table} \begin{tabular}{l c c c} \hline \(n\) & \(m\) & \(E_{n}\) & \(\delta_{H}\) \\ \hline \hline \(1\) & \(2\) & \(0.1486504275(5)\) & - \\ \hline \(2\) & \(4\) & \(0.2062564235(5)\) & - \\ \hline \(3\) & \(8\) & \(0.2105406495(5)\) & \(13.4460684(34)\) \\ \hline \(4\) & \(16\) & \(0.2110432870(1)\) & \(8.523491(12)\) \\ \hline \(5\) & \(32\) & \(0.21110070066(4)\) & \(8.754667(32)\) \\ \hline \(6\) & \(64\) & \(0.21110728629(1)\) & \(8.71802(87)\) \\ \hline \(7\) & \(128\) & \(0.211108041425(25)\) & \(8.72113(47)\) \\ \hline \end{tabular} \end{table} Table 1: Values of the energy, \(E_{n}\), where the first \(7\) period-doubling bifurcations occur, together with estimations of the Feigenbaum constant \(\delta_{H}\) using the former and the two previous values of \(E_{n}\). The first two columns indicate the number of the period-doubling bifurcation and the multiplicity of the created periodic orbit, respectively.
## IV Islets of stability
Although the only large KAM tori appear surrounding the main family of periodic orbits, unrelated and occasionally stable branches generate islets of stability. Since all periodic orbits cross the \((0,y,\dot{x}(y,E),0)\) Poincaré section at least once, we can ensure that islets will appear on the \((y,E)\) exit basin diagram. Furthermore, as periodic orbits make up the boundary of the exit basins, the search for islets can be constrained. Following these facts, we have found \(24\) of them by performing a detailed grid search on the boundary of the exit basins. Of course, by delving further into the structure of the boundary, one may discover an arbitrarily large number of islets. Due to their reduced area in the \((y,E)\) plane, we indicate their position by using solid white dots in Fig. 5. In this figure, the \(m=2\) branches of the main KAM island can be clearly observed at the bottom of the plot (note that, colors aside, Fig. 4(a) is a magnification of Fig. 5 in the vicinity of the main KAM island).
Figure 5: Islets of stability (solid white dots) in an exit basin diagram for the Hénon-Heiles system. The color code is as shown in the caption of Fig. 2. Note that the white region in the left part of the figure is a set of energetically forbidden initial conditions, not a KAM island.
We have studied each detected islet individually and, based on the bifurcations of periodic orbits involved in their formation, we have classified them into three different types.
For a better understanding of their origin, we can observe that they appear near the edge of the parabolic shapes arising in the basin boundary (see Fig. 5). These parabolic shapes correspond to an infinite set of bifurcations, usually characterized by the birth of two unstable branches which correspond to a single unstable periodic orbit that crosses the Poincare section twice. Nonetheless, in some cases a pair of stable-unstable periodic orbits is created in a saddle-node bifurcation. The stable branch is the responsible of the formation of a type I islet (see Fig. 6(a-b)). The remaining two types of islets always appear in branches of periodic orbits created in a saddle-node bifurcation. Therefore, islets of types II and III are always preceded by a type I islet. The stable periodic orbit that generates a type I islet eventually loses its stability after undergoing some standard bifurcation (typically pitchfork). For slightly higher energy values, the periodic orbit can become stable again, creating a type II islet (see Fig. 6(c-d)). Hence, if a type II islet exists, it always appears in the same branch where a type I islet existed (i.e., in the stable branch created in the saddle-node bifurcation). However, we emphasize that not all type I islets are followed by a type II islet, but they can also be alone. Type III islets can appear in both branches that are created in the saddle-node bifurcation. They arise from bifurcations where an unstable periodic orbit becomes stable (see Fig. 6(e-d)). While type II islets exhibit a smooth shape near the bifurcation point, type III islets are characterized by a sharp edge. Unlike the previous types, we have not observed the emergence of new unstable periodic orbits in the bifurcation leading to type III islets. For the sake of reproducibility, in Table 2 (see Appendix B) we list the range of coordinates in the \((y,E)\) plane where the \(24\) islets that we have detected can be found. We also indicate their type and the multiplicity of the generating periodic orbit. Except for the \(24\)th islet, we have detected and listed the islets that occupy a bigger area in the \((y,E)\) plane (in the case of the \(24\)th islet we have used higher resolution in the exit basin diagram with the aim of finding the energy value which generates the last islet). As can be seen in Table 2, the periodic orbits have a relatively low multiplicity. This fact suggests that periodic orbits with high multiplicity generate smaller islets. For illustrative purposes, in Fig. 7 we represent in the \((x,y)\) plane some stable periodic orbits that generate islets. Note that a single periodic orbit can cross the \((0,y,\dot{x}(y,E),0)\) Poincare section twice (e.g., the periodic orbits represented in panels (a) and (h) in Fig. 7). In these cases, two islets of the same type appear in the \((y,E)\) plane. Figure 6: Representative examples of the different types of islets. The pairs of panels (a-b), (c-d), and (e-f) represent islets of types I, II, and III, respectively. These pairs of panels contain similar information, but from different perspectives. Panels (a,c,e) display the bifurcations and the emergence of islets surrounding stable periodic orbits. Panels (b,d,f) represent the islets in contrast to the fractal basin boundary. In panels (a,c,e) the color-code is as in Fig. 4, while in panels (b,d,f) is as in Fig. 2. Occasionally, islets of types II and III can be observed in the same plot as type I islets, since they appear for close energy values. 
Two examples of this phenomenon are displayed in Fig. 8. In panels (a-b), we can see a type II islet forming in the same branch where a type I islet previously existed at lower energy levels. In panels (c-d), we see how a type III islet appears after the unstable branch created in a saddle-node bifurcation becomes occasionally stable. In this case, during a short energy range, islets of types I and III coexist. Figure 7: A gallery of stable periodic orbits for the Hénon-Heiles system. Each of these orbits generates one of the \(24\) islets that we have detected, classified, and listed in Table 1 (Appendix A). In particular, the number of the corresponding islet is (a) \(1\), (b) \(5\) and \(6\), (c) \(9\), (d) \(12\), (e) \(14\), (f) \(16\), (g) \(19\), (h) \(22\) and \(23\), and (i) \(24\). We aim to conclude our findings on the Henon-Heiles system by discussing an aspect that attracted the attention of some researchers: the energy value \(E_{k}\) for which the KAM tori disappear. Regarding this matter, various energy values have been put forward in the literature. The initial approximation to this limit value was \(E_{k}\approx 0.2113\)[37], which is a rough approximation of the accumulation point. Another suggested value was \(E_{k}\approx 0.2309\)[38], which probably arose as a result of detecting the islet number \(17\) (see Table 2). Finally, a recent paper found an islet for \(E_{k}\approx 0.2534\) (islet number \(21\) in Table 2). In our numerical simulations, the last detected islet is destroyed for \(E_{k}\approx 0.26194367\) (islet number \(24\) in Table 2). Figure 8: Two examples where islets of different types appear within a reduced energy range. The pairs of panels (a-b) and (c-d) contain similar information, but from different perspectives. Panels (a,c) display the bifurcations and the emergence of islets surrounding stable periodic orbits. Panels (b,d) represent the islets in contrast to the fractal basin boundary. In panels (a,c) the color-code is as in Fig. 4, while in panels (b,d) is as in Fig. 2. From the previous information, it is clear that the value of \(E_{k}\) is gradually increased due to higher precision in the numerical simulations. This is not surprising, since the range of energies where islets appear is reduced as the energy of the system is increased. However, bifurcations do not occur for arbitrarily high values of the energy. After searching into the structure of the boundary of the exit basins, we have found that the last bifurcation occurs for \(E=0.262158902577(1)\). We have not found a stable periodic orbit nor an islet in the neighborhood of the last bifurcation, but its existence cannot be definitively dismissed. Therefore, we cannot provide an exact value for \(E_{k}\), but we conjecture that its value is not significantly above the energy where the last bifurcation occurs. ## V Islets of stability in different systems The same types of islets that we have found in the Henon-Heiles system appear in generic two-degree-of-freedom Hamiltonian systems and area-preserving maps. To illustrate this generality, in this section we provide numerical evidence of the existence of islets in the Barbanis system [39] and in the standard map (also known as Chirikov-Taylor map) [40]. The Barbanis system is a two-degree-of-freedom Hamiltonian system given by: \[\mathcal{H}=\frac{1}{2}(\dot{x}^{2}+\dot{y}^{2})+\frac{1}{2}(x^{2}+y^{2})-xy ^{2}. \tag{6}\] Besides being time-reversible, the system is symmetric about the \(x\)-axis. 
Therefore, using similar arguments to those exposed for the Hénon-Heiles system, the condition for a periodic orbit to exist in the Barbanis system is \(y(x_{0},0,0,\dot{y}_{0};T/2)=\dot{x}(x_{0},0,0,\dot{y}_{0};T/2)=0\). Thus, for detecting islets we have chosen the \((x,0,0,\dot{y})\) Poincaré section and we have computed an exit basin diagram in the \((x,E)\) plane. The result is shown in Fig. 9, where the position of \(12\) islets is represented with white dots. Here we only see two colors in the exit basin diagram since the system exhibits two exits. The coordinate range where the islets can be found in this system is shown in Table 3 (see Appendix B).
Figure 9: Islets of stability (solid white dots) in an exit basin diagram for the Barbanis system. The colors green and blue refer to initial conditions leading to the two exits of the potential: Exit \(1\) (\(y\to\infty\)) and Exit \(2\) (\(y\to-\infty\)). White regions inside the potential correspond to KAM islands.
On the other hand, the standard map is an area-preserving map defined by the following formula: \[\begin{split}\theta_{n+1}&=\theta_{n}+J_{n+1}\mod 2\pi,\\ J_{n+1}&=J_{n}+K\sin\theta_{n},\end{split} \tag{7}\] where \(K>0\) is a constant. Unlike the continuous-time Hamiltonian systems studied above, the standard map does not have any exit. However, we can construct exit basin diagrams by defining artificial leaks in the system, as explained in [41]. In particular, we define two leaks \(L_{1}\equiv[(0.2-\omega)\pi,(0.2+\omega)\pi]\times[0,2\pi]\) and \(L_{2}\equiv[(1.8-\omega)\pi,(1.8+\omega)\pi]\times[0,2\pi]\) (this choice guarantees that both leaks have the same width and are symmetric about \(\theta=\pi\)). Thus, an exit basin is defined as the set of initial conditions falling after \(1\) or more iterations into one particular leak. To represent exit basin diagrams, we simply assign a different color to the initial conditions depending on the first leak visited. For \(K<4\), the periodic orbits of the system lie in the \(\theta=0\) line, while for higher values of \(K\) they appear in the lines \(J=2\theta-2\pi\) and \(J=2\theta\). We have searched for islets close to the value of \(K\) where the main KAM island is destroyed, so we have computed exit basin diagrams in the \((\theta,K)\) plane following the line \(J=2\theta-2\pi\) (we could have used the line \(J=2\theta\) in an equivalent way). Therefore, once the value of \(K\) and the initial condition \(\theta_{0}\) are chosen, the initial condition in the \(J\) coordinate is given by \(J_{0}=2\theta_{0}-2\pi\). The result is shown in Fig. 10, where the position of \(20\) islets is represented with white dots. The coordinate range where the islets can be found is shown in Table 4 (see Appendix B).
Figure 10: Islets of stability (solid white dots) in an exit basin diagram for the standard map with two symmetric leaks of width \(0.1\pi\). The colors red and blue refer to initial conditions leading to the leaks \(L_{1}\) and \(L_{2}\), respectively. White regions correspond to KAM islands.
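A minimal sketch of the leaked standard map used for Fig. 10 is given below (our naming; the leak half-width is chosen so that each leak has total width \(0.1\pi\), as in the caption of Fig. 10, and the initial conditions follow the line \(J_0=2\theta_0-2\pi\); the value of \(K\) is only illustrative).

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def first_leak(theta0, J0, K, w=0.05, n_max=100_000):
    """Iterate Eq. (7) and return 1 or 2 for the first leak visited
    (L1 around 0.2*pi, L2 around 1.8*pi, each spanning [(c-w)pi, (c+w)pi]),
    or 0 if no leak is reached within n_max iterations (KAM candidate)."""
    theta, J = theta0 % TWO_PI, J0
    for _ in range(n_max):
        if (0.2 - w)*np.pi <= theta <= (0.2 + w)*np.pi:
            return 1
        if (1.8 - w)*np.pi <= theta <= (1.8 + w)*np.pi:
            return 2
        J = J + K*np.sin(theta)
        theta = (theta + J) % TWO_PI
    return 0

# Scan a few initial conditions along J0 = 2*theta0 - 2*pi
K = 6.0
for theta0 in np.linspace(0.9*np.pi, 1.1*np.pi, 11):
    print(round(theta0/np.pi, 3), "pi ->", first_leak(theta0, 2*theta0 - TWO_PI, K))
```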
## VI Conclusions and Discussion
In summary, our research reveals that the destruction of the main KAM island in two-degree-of-freedom Hamiltonian systems is explained by a cascade of period-doubling bifurcations. By using the Hénon-Heiles system as a model, we have calculated the conservative Feigenbaum constant and the accumulation point where the last periodic orbit becomes unstable. The value obtained for the Feigenbaum constant confirms that the geometrical progression of bifurcations is not only universal for area-preserving maps, but also for two-degree-of-freedom Hamiltonian systems. We have also shown that not all KAM islands surround the main family of periodic orbits, but islets of stability exist for values above and below the accumulation point. We have studied these islets exhaustively, finding that all of them can be classified into three different types. The first type appears surrounding a stable periodic orbit created in a saddle-node bifurcation. The other two types emerge in the branches created in saddle-node bifurcations, always preceded by type I islets. To further demonstrate the validity of our classification scheme, we have identified the same types of islets in a different two-degree-of-freedom Hamiltonian system and in an area-preserving map. We expect that this work could contribute to understanding the formation, evolution, and destruction of KAM islands in Hamiltonian systems. The insights gained from this research may find applications in various physical systems where KAM islands play a critical role. Examples of such applications include plasma confinement in tokamaks [42], chaotic transport of particles advected by fluid flows [43], and conductance fluctuations in chaotic cavities [44].
###### Acknowledgements.
This work has been financially supported by the Spanish State Research Agency (AEI) and the European Regional Development Fund (ERDF) under Project No. PID2019-105554GB-I00 (MCIN/AEI/10.13039/501100011033).
## Appendix A Propagation of Uncertainty
The energy values where period-doubling bifurcations occur have been calculated by detecting the change in the stability of periodic orbits. Our algorithm detects the values \(E_{s}\) and \(E_{u}\) for which the orbit is still stable and already unstable, respectively. Therefore, the bifurcation point is given by \(E_{n}=(E_{s}+E_{u})/2\) and its uncertainty by \(\Delta E_{n}=(E_{u}-E_{s})/2\).
Since we use the \(E_{n}\) for calculating \(\delta_{H}\), its uncertainty is propagated as \[\Delta\delta_{H} =\left|\frac{\partial\delta_{H}}{\partial E_{n}}\right|\Delta E _{n}+\left|\frac{\partial\delta_{H}}{\partial E_{n-1}}\right|\Delta E_{n-1}+ \left|\frac{\partial\delta_{H}}{\partial E_{n-2}}\right|\Delta E_{n-2}\] \[=\frac{(E_{n-1}-E_{n-2})\Delta E_{n}+(E_{n}-E_{n-2})\Delta E_{n-1 }+(E_{n}-E_{n-1})\Delta E_{n-2}}{(E_{n}-E_{n-1})^{2}}.\] In the case of the accumulation point \(E_{\infty}\), its uncertainty is given by: \[\Delta E_{\infty} =\left|\frac{\partial E_{\infty}}{\partial E_{6}}\right|\Delta E _{6}+\left|\frac{\partial E_{\infty}}{\partial E_{7}}\right|\Delta E_{7}+ \left|\frac{\partial E_{\infty}}{\partial\delta_{H}}\right|\Delta\delta_{H}\] \[=\frac{\Delta E_{6}+\delta_{H}\Delta E_{7}}{\delta_{H}-1}+\frac{( E_{7}-E_{6})\Delta\delta_{H}}{(\delta_{H}-1)^{2}}.\] ## Appendix B Coordinates of Islets \begin{table} \begin{tabular}{c c c c c} \hline \(n\) & \(m\) & \(E\) & \(x\) & Type \\ \hline \hline 1 & 7 & \([0.330768,0.330815]\) & \([-0.56056,-0.56048]\) & III \\ \hline 2 & 7 & \([0.35304,0.35311]\) & \([-0.57842,-0.57826]\) & III \\ \hline 3 & 11 & \([0.387718,0.387727]\) & \([-0.55975,-0.55965]\) & II \\ \hline 4 & 3 & \([0.572922,0.572932]\) & \([-0.9473,-0.9458]\) & I \\ \hline 5 & 13 & \([0.357048,0.357049]\) & \([-0.774675,-0.774620]\) & I \\ \hline 6 & 13 & \([0.3570487,0.3570515]\) & \([-0.774735,-0.774695]\) & III \\ \hline 7 & 5 & \([0.377254,0.377257]\) & \([-0.7471,-0.7463]\) & I \\ \hline 8 & 7 & \([0.374565,0.374595]\) & \([-0.8487,-0.8482]\) & I \\ \hline 9 & 7 & \([0.375200,0.375455]\) & \([-0.84763,-0.84755]\) & III \\ \hline 10 & 7 & \([0.471438,0.471448]\) & \([-0.66935,-0.66885]\) & I \\ \hline 11 & 7 & \([0.47153,0.47160]\) & \([-0.6683,-0.6680]\) & III \\ \hline 12 & 1 & \([0.21330,0.21355]\) & \([0.093,0.113]\) & I \\ \hline \end{tabular} \end{table} Table 3: Range of coordinates in the \((x,E)\) plane of the Barbanis system where several islets of stability of different multiplicity and type can be found. \begin{table} \begin{tabular}{l c c c} \hline \hline \(n\) & \(K\) & \(\theta\) & Type \\ \hline \hline [MISSING_PAGE_POST] \hline \end{tabular} \end{table} Table 4: Range of coordinates in the \((\theta,K)\) plane of the standard map where several islets of stability can be found.
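To make the leak-based exit-basin construction of Eq. (7) concrete, the following minimal Python sketch iterates the standard map and records which leak, \(L_{1}\) or \(L_{2}\), an orbit launched on the line \(J=2\theta-2\pi\) visits first. The kick strength `K`, the leak parameter `omega`, and the iteration cap are illustrative placeholders only, not values taken from the paper.

```python
import numpy as np

K = 6.5          # illustrative kick strength, not a value from the paper
omega = 0.05     # sets the leak extent in theta; illustrative only
n_max = 10_000   # maximum number of map iterations per initial condition

# Leaks in the angle coordinate, as defined in the text:
# L1 = [(0.2 - w)pi, (0.2 + w)pi],  L2 = [(1.8 - w)pi, (1.8 + w)pi]
L1 = ((0.2 - omega) * np.pi, (0.2 + omega) * np.pi)
L2 = ((1.8 - omega) * np.pi, (1.8 + omega) * np.pi)

def exit_basin(theta0, J0):
    """Return 1 or 2 for the first leak visited, or 0 if neither is reached."""
    theta, J = theta0, J0
    for _ in range(n_max):
        # Standard map, Eq. (7)
        J = J + K * np.sin(theta)
        theta = (theta + J) % (2.0 * np.pi)
        if L1[0] <= theta <= L1[1]:
            return 1
        if L2[0] <= theta <= L2[1]:
            return 2
    return 0  # never escaped: candidate for a KAM island or an islet of stability

# Initial conditions along the line J = 2*theta - 2*pi used in the text
for theta0 in np.linspace(0.0, 2.0 * np.pi, 9):
    print(round(theta0, 3), exit_basin(theta0, 2.0 * theta0 - 2.0 * np.pi))
```

Scanning such classifications over a grid of \((\theta,K)\) values yields exit basin diagrams of the kind shown in Fig. 10, with trapped initial conditions marking KAM islands and islets of stability.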
2306.02965
Accurate and efficient treatment of spin-orbit coupling via second variation employing local orbitals
A new method is presented that allows for efficient evaluation of spin-orbit coupling (SOC) in density-functional theory calculations. In the so-called second-variational scheme, where Kohn-Sham functions obtained in a scalar-relativistic calculation are employed as a new basis for the spin-orbit-coupled problem, we introduce a rich set of local orbitals as additional basis functions. Relativistic local orbitals can also be used. The method is implemented in the all-electron full-potential code exciting. We show that, for materials with strong SOC effects, this approach can reduce the overall basis-set size and thus computational costs tremendously.
Cecilia Vona, Sven Lubeck, Hannah Kleine, Andris Gulans, Claudia Draxl
2023-06-05T15:27:56Z
http://arxiv.org/abs/2306.02965v1
Accurate and efficient treatment of spin-orbit coupling via second variation employing local orbitals ###### Abstract A new method is presented that allows for efficient evaluation of spin-orbit coupling (SOC) in density-functional theory calculations. In the so-called second-variational scheme, where Kohn-Sham functions obtained in a scalar-relativistic calculation are employed as a new basis for the spin-orbit-coupled problem, we introduce a rich set of local orbitals as additional basis functions. Also relativistic local orbitals can be used. The method is implemented in the all-electron full-potential code exciting. We show that, for materials with strong SOC effects, this approach can reduce the overall basis-set size and thus computational costs tremendously. + Footnote †: These two authors contributed equally. + Footnote †: These two authors contributed equally. ## I Introduction Spin-orbit coupling (SOC) is crucially important for accurate electronic-structure calculations of many materials. To illustrate, SOC is responsible for lifting the degeneracy of low-energy excitons in transition-metal dichalcogenides (TMDCs) [1; 2; 3], opening a tiny gap in graphene [4; 5; 6], and dramatically lowering the fundamental band gap in halide perovskites [7]. However, the impact of SOC is not limited to changing features of the electronic bands. It affects bond lengths [8; 9; 10], phonon energies [8; 11], and even turns deep defects into shallow ones [12]. In density-functional-theory (DFT) computations, SOC is treated differently in the various methods and codes. In this context, the family of full-potential linearized augmented planewaves (LAPW) methods is commonly used as the reference, _e.g._, for new implementations [13] or for assessing pseudopotentials [14]. Commonly used LAPW codes [15; 16; 17; 18] employ similar strategies to account for SOC. For the low-lying core orbitals, the standard approach is to solve the radial 4-component Dirac equation, assuming a spherically symmetric potential. For the semi-core and valence electrons, the common strategy is to employ a two-step procedure [19]. First, the Kohn-Sham (KS) problem is solved within the scalar-relativistic approximation (first variation, FV). Then, the solutions of the full problem including SOC are constructed using the FV wavefunctions as the basis. This step is known as the second variation (SV). The strategy relies on an assumption that SOC introduces a small perturbation, and, indeed, this scheme is appropriate and efficient for many materials, since all the occupied and only a handful of unoccupied bands are sufficient. Under these circumstances, the two-step procedure offers a clear computational advantage over methods where SOC is treated on the same footing with other terms of the Hamiltonian [20; 21; 10]. Some materials, however, require more involved calculations than others. For example, it was argued by Scheidemantel and coworkers [22] that Bi\({}_{2}\)Te\({}_{3}\) requires the consideration of unoccupied bands of at least 8 Ry above the Fermi level to give reliable results. Even more striking, in the halide perovskites, the full set of KS orbitals is needed for convergence [23]. These cases illustrate that for some materials SOC cannot be considered as a small perturbation. Moreover, it is known that scalar- and fully-relativistic orbitals have different asymptotic behavior at small distances from the nuclei. 
Most notably, SOC introduces a splitting within the \(p\)-orbitals into spinors, where the radial part of the \(p_{3/2}\) solution goes to zero, while the \(p_{1/2}\) one diverges. This behavior cannot be recovered in terms of scalar-relativistic (SR) functions. Therefore, in Refs. [19] and [24], the SV basis was extended by additional basis functions, local orbitals (LOs), that recover the correct asymptotic behavior of the \(p_{1/2}\) orbitals. This approach is a step forward compared to conventional SV calculations. By introducing, however, exactly one shell of \(p_{1/2}\) LOs per atom, it does not offer the possibility of systematic improvement toward the complete-basis-set limit. There are examples of SR calculations in literature where an extensive use of LOs is required to reach precision targets [25; 26; 27]. Furthermore, Ref. [21] demonstrated this point also in the context of fully-relativistic calculations. We therefore conclude that the state-of-the-art SV approaches, be it with or without \(p_{1/2}\) LOs, are not sufficient for a systematic description of SOC in condensed-matter systems. In this work, we introduce a new approach, termed _second variation with local orbitals_ (SVLO), which makes use of the fact that relativistic effects are strongest around the atomic nuclei. Therefore, in comparison to the standard SV approach, it is important to increase the flexibility of the basis specifically in these regions. To satisfy this need, we express the solution of the full problem in terms of FV wavefunctions and rich sets of LOs. In contrast to the usual approach, all LOs are treated as explicit basis functions, also on the SV level. In addition, we include LOs obtained from solving the Dirac equa tion (termed Dirac-type LOs in the following) beyond \(p_{1/2}\) functions. Based on the implementation in the all-electron full-potential package **exciting**[15], we demonstrate and validate our method in band-structure and total-energy calculations of Xe, MoS\({}_{\text{2}}\), PbI\({}_{\text{2}}\), \(\gamma\)-CsPbI\({}_{\text{3}}\), and Bi\({}_{\text{2}}\)Te\({}_{\text{3}}\). ## II Method ### Conventional second variation We consider the two-component KS equations \[\sum_{\sigma^{\prime}=\,\uparrow,\downarrow}\hat{H}_{\sigma\sigma^{\prime}} \Psi_{i\mathbf{k}\sigma^{\prime}}\left(\mathbf{r}\right)=\varepsilon_{i\mathbf{k}}\Psi_{i \mathbf{k}\sigma}\left(\mathbf{r}\right) \tag{1}\] for the spin components \(\sigma=\,\uparrow,\downarrow\). The resulting single-particle spinors \(\sum_{\sigma}\Psi_{i\mathbf{k}\sigma}\left(\mathbf{r}\right)\left|\sigma\right\rangle\) have eigenenergies \(\varepsilon_{i\mathbf{k}}\), where \(i\) is the band index and \(\mathbf{k}\) the Bloch wave vector. The Hamiltonian \(\hat{H}_{\sigma\sigma^{\prime}}\) consists of a SR part and a spin-orbit part that couples the two spin components: \[\hat{H}_{\sigma\sigma^{\prime}}=\delta_{\sigma\sigma^{\prime}}\hat{H}_{\sigma }^{\text{SR}}+\hat{H}_{\sigma\sigma^{\prime}}^{\text{SOC}}. \tag{2}\] As described in Refs. [10; 20; 21], Eq. 1 can be solved directly, _i.e._, non-perturbatively (NP), requiring a significantly larger computational effort compared to a SR calculation. Given that \(\hat{H}_{\sigma\sigma^{\prime}}^{\text{SOC}}\) typically leads to a small correction, it is unsatisfactory to pay the full price for the NP solution. For this reason, often the conventional SV method is employed. 
In this approach, first, one solves the scalar-relativistic problem in FV, \[\hat{H}_{\sigma}^{\text{SR}}\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\left(\mathbf{r} \right)=\varepsilon_{j\mathbf{k}\sigma}^{\text{FV}}\Psi_{j\mathbf{k}\sigma}^{\text{FV }}\left(\mathbf{r}\right), \tag{3}\] and subsequently uses the resulting FV eigenstates (which are the FV KS wavefunctions) as a basis for the SV eigenstates \[\Psi_{i\mathbf{k}\sigma}^{\text{SV}}\left(\mathbf{r}\right)=\sum_{j}C_{\mathbf{k}\sigma j }^{\text{SV}}\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\left(\mathbf{r}\right). \tag{4}\] Here, \(j\) runs over all \(N_{\text{occ}}\) occupied and a limited number \(N_{\text{unocc}}\) of unoccupied FV KS states. Approximating the exact solution \(\Psi_{i\mathbf{k}\sigma}\left(\mathbf{r}\right)\) by \(\Psi_{i\mathbf{k}\sigma}^{\text{SV}}\left(\mathbf{r}\right)\), one obtains the SV eigenequation for the expansion coefficients \(C_{\mathbf{k}\sigma ji}^{\text{SV}}\) \[\sum_{\sigma^{\prime}j^{\prime}}H_{\mathbf{k}\sigma\sigma^{\prime}jj^{\prime}}C_{ \mathbf{k}\sigma^{\prime}j^{\prime}i}^{\text{SV}}=\varepsilon_{i\mathbf{k}}^{\text{SV} }C_{\mathbf{k}\sigma ji}^{\text{SV}}, \tag{5}\] where \(H_{\mathbf{k}\sigma\sigma^{\prime}jj^{\prime}}\) are the matrix elements of \(\hat{H}_{\sigma\sigma^{\prime}}\), as defined in Eq. 2, with respect to the basis functions \(\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\left(\mathbf{r}\right)\), \[H_{\mathbf{k}\sigma\sigma^{\prime}jj^{\prime}} = \left\langle\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\right|\hat{H}_{ \sigma\sigma^{\prime}}\left|\Psi_{j^{\prime}\mathbf{k}\sigma^{\prime}}^{\text{FV}}\right\rangle \tag{6}\] \[= \delta_{\sigma\sigma^{\prime}}\delta_{jj^{\prime}}\varepsilon_{j \mathbf{k}\sigma}^{\text{FV}}+\left\langle\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\right| \hat{H}_{\sigma\sigma^{\prime}}^{\text{SOC}}\left|\Psi_{j^{\prime}\mathbf{k}\sigma ^{\prime}}^{\text{FV}}\right\rangle.\] ### Second variation with local orbitals The SVLO approach makes use of the underlying LAPW+LO method that is utilized to solve the FV problem in Eq. 3. Within the LAPW+LO method, KS orbitals are represented by two distinct types of basis functions, namely LAPWs, \(\phi_{\mathbf{G}\mathbf{k}}\left(\mathbf{r}\right)\), and LOs, \(\phi_{\mu}\left(\mathbf{r}\right)\), which are indexed by reciprocal lattice vectors \(\mathbf{G}\) and LO indices \(\mu\), respectively, \[\Psi_{j\mathbf{k}\sigma}^{\text{FV}}\left(\mathbf{r}\right)=\sum_{\mathbf{G}}C_{\mathbf{k} \sigma\mathbf{G}j}\phi_{\mathbf{G}\mathbf{k}}\left(\mathbf{r}\right)+\sum_{\mu}C_{\mathbf{k}\sigma \mu j}\phi_{\mu}\left(\mathbf{r}\right). \tag{7}\] In order to avoid linear dependency issues between FV eigenfunctions and LOs in our new approach, we modify these FV eigenfunctions such that LO contributions are neglected, and only the first sum in Eq. 7 is further considered: \[\bar{\Psi}_{j\mathbf{k}\sigma}^{\text{FV}}\left(\mathbf{r}\right)=\sum_{\mathbf{G}}C_{\mathbf{k} \sigma\mathbf{G}j}\phi_{\mathbf{G}\mathbf{k}}\left(\mathbf{r}\right). \tag{8}\] We combine these modified FV functions with the original set of LOs to form the SVLO basis \[\Psi_{i\mathbf{k}\sigma}^{\text{SVLO}}\left(\mathbf{r}\right)= \sum_{j}C_{\mathbf{k}\sigma ji}^{\text{SVLO}}\bar{\Psi}_{j\mathbf{k} \sigma}^{\text{FV}}\left(\mathbf{r}\right) \tag{9}\] \[+ \sum_{\mu}C_{\mathbf{k}\sigma\mu i}^{\text{SVLO}}\phi_{\mu}\left(\mathbf{r }\right).\] The total basis-set size in the SVLO method includes the number of these LO basis functions, \(N_{\text{LO}}\). 
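As an illustration of the conventional SV step of Eqs. (4)-(6), the minimal NumPy sketch below assembles the SV Hamiltonian from scalar-relativistic FV eigenvalues and a toy SOC block, and then diagonalizes it. The random Hermitian matrix merely stands in for the true matrix elements \(\langle\Psi^{\text{FV}}|\hat{H}^{\text{SOC}}|\Psi^{\text{FV}}\rangle\) of Eq. (6); it is not the actual exciting implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_fv = 6              # FV states kept per spin channel (illustrative)
n = 2 * n_fv          # size of the second-variational (spinor) basis

# First-variational (scalar-relativistic) eigenvalues entering Eq. (6)
eps_fv = np.sort(rng.uniform(-1.0, 1.0, size=n_fv))

# Toy Hermitian SOC block standing in for <Psi_FV|H_SOC|Psi_FV>;
# it couples the two spin channels and is small compared to eps_fv
v = 0.01 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
h_soc = 0.5 * (v + v.conj().T)

# Eq. (6): diagonal FV energies (one block per spin) plus the SOC block
h_sv = np.kron(np.eye(2), np.diag(eps_fv)) + h_soc

# Eq. (5): diagonalize to obtain the coupled energies and coefficients C^SV
eps_sv, c_sv = np.linalg.eigh(h_sv)
print(eps_sv)
```

In the SVLO case, the modified FV functions and the LOs together form a larger, non-orthogonal basis, so the corresponding eigenvalue problem also involves an overlap matrix, as discussed below.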
To summarize, the total number of basis functions in the two methods is \[N_{\text{b}}^{\text{SV(LO)}}=\begin{cases}N_{\text{occ}}+N_{\text{unocc}}& \text{SV}\\ N_{\text{occ}}+N_{\text{unocc}}+N_{\text{LO}}&\text{SVLO}.\end{cases} \tag{10}\] In both cases, \(N_{\text{unocc}}\) is a computational parameter, and the results need to be converged with respect to it. We note in passing that the SVLO basis is not orthogonal and thus carries a slight computational overhead compared to the conventional SV method, since it leads to a generalized eigenvalue problem. The SVLO method is implemented in **exciting**. How the different types of LOs are constructed will be described in the next section. Unlike Ref. [19] and [24], our approach uses the entire set of LOs from the FV basis (including Dirac-type LOs if necessary) as the basis in the SV step. ### Local orbitals LOs are basis functions with the characteristic of being non-zero only in a sphere centered at a specific nucleus \(\alpha\)[34]. They take the form of atomic-like orbitals which read \[\phi_{\mu}(\mathbf{r})=\delta_{\alpha,\alpha_{\mu}}\delta_{l,l_{\mu}}\delta_{m,m_{\mu} }U_{\mu}(r_{\alpha})Y_{lm}(\hat{r}_{\alpha}), \tag{11}\] where \(Y_{lm}(\hat{r}_{\alpha})\) are spherical harmonics and \(U_{\mu}(r_{\alpha})\) are linear combinations of two or more radial functions \[U_{\mu}^{\rm SR}(r_{\alpha})={\sum_{\xi}}a_{\mu\xi}u_{\alpha\xi l}(r_{\alpha}; \varepsilon_{\alpha\xi l}). \tag{12}\] The index \(\xi\) sums over different radial functions. These radial functions \(u_{\alpha\xi l}(r_{\alpha};\varepsilon_{\alpha\xi l})\) are the solutions of the SR Schrodinger equation and/or their energy derivatives (of any order), evaluated at predefined energy parameters \(\varepsilon_{\alpha\xi l}\). Depending on their purpose all radial functions have the same energy parameter or one corresponding to a different state. To account for the asymptotic behavior of relativistic orbitals at the atomic nuclei, we build LOs in which the radial functions are solutions of the Dirac equation \[U_{\mu}^{\rm Dirac}(r_{\alpha})={\sum_{\xi,J}}a_{\mu\xi J}u_{\alpha\xi J}(r_{ \alpha};\varepsilon_{\alpha\xi Jl}). \tag{13}\] Here, the radial functions and the energy parameters are characterized by the additional quantum number \(J\). We sum over the index \(J\) to show that it is possible to combine radial functions with different total angular momentum (but the same angular momentum \(l\)). It is also possible to combine \(J\)-resolved radial functions with SR radial functions. In the following we call any LOs including at least one \(J\)-resolved radial function, Dirac-type LOs. With this, we can add one or more LOs with any relativistic quantum number, going beyond what has been suggested by Singh [19]. This approach, also used in Ref. [10], is convenient since the general form of the LOs of Eq. 12 is kept. ## III Computational details We consider a set of five materials, including 3D and 2D semiconductors and a topological insulator, with different atomic species, stoichiometry, and degree of SOC. For all of them, we employ experimental atomic structures. All calculations are performed with the package exciting[15] where the new method is implemented. Exchange and correlation effects are treated by the PBE parametrization of the generalized gradient approximation [35; 36]. Core electrons are described by means of the 4-component Dirac equation considering only the spherically symmetric part of the KS potential. 
For semicore and valence electrons, the zero-order regular approximation [37; 38] is used to obtain the SR and SOC contributions to the kinetic energy operator. The SOC term is applied only within the muffin-tin region and is evaluated by the following expression: \[\hat{H}^{\rm SOC}=\frac{c^{2}}{(2c^{2}-V)^{2}}\frac{1}{r}\frac{dV}{dr}\mathbf{\sigma L}, \tag{14}\] where \(\mathbf{\sigma}\) and \(V\) are the vector of Pauli matrices and the spherically symmetric component of the KS potential, respectively. The structural and computational parameters are displayed in Table 1. The respective \(\mathbf{k}\)-mesh and the dimensionless LAPW cutoff \(R_{\rm MT}^{\rm min}G_{\rm max}\) are chosen such that total energies per atom and band gaps are within a numerical precision of \(10^{-2}\) eV/atom and \(10^{-2}\) eV, respectively. The actual LAPW basis cutoff \(G_{\rm max}\) is determined by dividing \(R_{\rm MT}^{\rm min}G_{\rm max}\) by the smallest MT radius \(R_{\rm MT}^{\rm min}\) of the considered system. SR calculations serve for comparison with the other methods to investigate the magnitude of SOC effects. To determine the advantages of the SVLO over the SV method, we have carefully monitored the convergence of all considered quantities with respect to the number of SV(LO) basis functions. The NP method, as described in Ref. [10], is used as a reference for this assessment. For SR and SV calculations, we employ SR LOs. As we mainly address \(p\) states in our examples, we label this case as \(p\). The LO set including Dirac-type LOs, referred to as \(p_{1/2}\), is constructed by adding to the SR LOs two \(p_{1/2}\)-type LOs for each \(p\) state. Due to their \(p\) character, each LO gives rise to three degenerate basis functions. The method is, however, fully general such to include relativistic LOs of other characters. For instance, we explore the effect of \(d_{3/2}\) LOs in MoS\({}_{2}\) since its valence band maximum (VBM) and conduction band minimum (CBm) exhibit predominant \(d\)-character [39]. As the impact on the total energy and the electronic structure turns out to be negligible, however, we do not include this case in the following analysis. In the other materials, we additionally investigate the effects of \(p_{3/2}\) LOs, by replacing the \(p\) LOs. Due to their similar behavior near the nuclei, their impact is, however, only of the order of \(10^{-2}\) eV or smaller, which is within our convergence criteria. For this reason, we do not consider them further. \(p_{1/2}\) LOs are used in SVLO and in the corresponding NP reference when specified. The number of LOs used in the different systems is displayed in Table 2 together with the size of the LAPW basis and the number of occupied valence \begin{table} \begin{tabular}{l|c|c|c|c|c} Material & Xe & MoS\({}_{2}\) & PbI\({}_{2}\) & CsPbI\({}_{3}\) & Bi\({}_{2}\)Te\({}_{3}\) \\ \hline \(a\) [Å] & 6.20 & 3.16 & 4.56 & 8.86 & 10.44 \\ \(b\) [Å] & 6.20 & 3.16 & 4.56 & 8.57 & 10.44 \\ \(c\) [Å] & 6.20 & 15.88 & 6.99 & 12.47 & 10.44 \\ Space group & Fm3-m & P-6m2 & P-3m1 & Pnam & R-3m \\ Ref. 
& 28, 29 & 30 & 31 & 32 & 33 \\ \hline \(R_{\rm MT}\) [\(a_{0}\)] & 3.00 & 2.30/2.05 & 2.90 & 2.90 & 2.80 \\ \(R_{\rm MT}^{\rm min}G_{\rm max}\) & 8 & 8 & 8 & 9 & 10 \\ \(\mathbf{k}\)-mesh & 4\(\times\)4\(\times\)4 & 6\(\times\)6\(\times\)1 & 6\(\times\)6\(\times\)4 & 3\(\times\)3\(\times\)2 & 6\(\times\)6\(\times\)6 \\ \end{tabular} \end{table} Table 1: Structural information and convergence parameters used in the calculations of the considered materials. \(R_{\rm MT}^{\rm min}G_{\rm max}\) is the product of the largest reciprocal lattice vector, \(G_{\rm max}\), considered in the LAPW basis and the (smallest) MT radius, \(R_{\rm MT}^{\rm min}\). For MoS\({}_{2}\), the latter refers to the S sphere (\(R_{\rm MT}\)=2.05 \(a_{0}\)). For more detailed information, we refer to the input files provided at NOMAD. states. To obtain the band-gap position for Bi\({}_{2}\)Te\({}_{3}\), for the different sets of LOs (\(p\)-type and the \(p_{1/2}\)-type), we perform on top of the self-consistent NP calculation an additional iteration with a 48\(\times\)48\(\times\)48 \(\mathbf{k}\)-mesh. The so determined respective \(\mathbf{k}\)-points are then included in the band-structure path, from which we extract the final values of the energy gaps in the SV and SVLO calculations. All input and output files are available at NOMAD [40]. ## IV Results ### Xe Our analysis starts with solid Xe. Fig. 1 shows the convergence behavior of the total energy with respect to the number of basis functions, taking the NP calculation as a reference. For comparison, we also show the results of the conventional SV method. Since we employ the same number of occupied states in the SV and the SVLO method, and for the \(p\) and \(p_{1/2}\) sets of LOs (Table 2), the number of basis functions on the x-axis does not include the occupied states, _i.e._, \(\tilde{N}_{\rm b}^{\rm SV(LO)}=N_{\rm b}^{\rm SV(LO)}-N_{\rm occ}\). \(N_{\rm b}^{\rm SV(LO)}\) is defined in Eq. 10 that applies to both methods. The number of LO basis functions in the equation is also predetermined, therefore, the increase in \(\tilde{N}_{\rm b}^{\rm SV(LO)}\) reflects only the number of unoccupied bands. We will always refer to \(\tilde{N}_{\rm b}^{\rm SV(LO)}\) when discussing the basis-set size. In the case of Xe, we consider 26 SR LO basis functions (see Table 2). Strikingly, the total-energy differences obtained by the SVLO method stay within \(2\times 10^{-3}\) eV/atom when employing a total number of basis functions comparable with the number of LO basis functions, while the SV method requires all available FV states to reach values even one order of magnitude larger (\(7\times 10^{-2}\) eV/atom). To visualize this behavior better, Fig. 2 depicts the convergence of both methods on a logarithmic scale. We can observe that the SVLO method reaches convergence within \(10^{-6}\) eV/atom with \(\sim\)80 basis functions. In this figure, we also analyze the convergence of the energy gap, \(E_{\rm g}\), and the SOC splitting at the \(\Gamma\) point, \(\delta_{\rm SOC}\). When SOC is considered, \(E_{\rm g}\) decreases by 0.43 eV due to the splitting of the (disregarding spin) three-fold degenerate VBM into a single state and a double-degenerate state by about 1.30 eV (see also Table 3 and Fig. 3). For both quantities, we observe that the SVLO method reaches a precision of the order of \(10^{-4}\) eV already with a number of basis functions comparable with the number of LO functions; with approximately 80 basis functions even two orders of magnitude better. 
In contrast, the SV treatment, employing all available FV KS eigenstates, only converges within \(10^{-3}\) eV and \(10^{-2}\) eV for the energy gap and the SOC splitting, respectively. If we consider a target precision often used for production calculations such as \(10^{-2}\) eV/atom for the total energy and \(10^{-2}\) eV for energy gaps and SOC splittings, the advantage of the SVLO method is particularly considerable for the total energy. In contrast, the SV energy gap reaches the target precision at a number of empty states smaller than the number of LO basis functions, and the corresponding SOC splitting requires approximately 75 empty states. Dirac-type LOs turn out to be significant for the SOC splitting which increases by 0.1 eV (Table 3) upon adding four \(p_{1/2}\)-type LOs, each of them contributing three degenerate basis functions (Table 2). Their effect on the energy gap is negligible. The convergence behavior of the energy gap and the SOC splitting with respect to the number of basis functions is comparable to that of the SVLO method with SR LOs (Fig. 2). Contrarily, the total energy converges to a worse precision (within \(10^{-4}\) eV/atom). Also with Dirac-type LOs the analyzed quantities reach the targeted prec \begin{table} \begin{tabular}{l|c c c c} Material & \(N_{\rm LAPW}\) & \(N_{\rm LO}\) & \(N_{\rm LO}^{1/2}\) & \(N_{\rm occ}\) \\ \hline Xe & 138 & 26 & 38 & 13 \\ MoS\({}_{2}\) & 939 & 35 & - & 13 \\ PbI\({}_{2}\) & 318 & 73 & 109 & 33 \\ CsPbI\({}_{3}\) & 3236 & 496 & 736 & 228 \\ Bi\({}_{2}\)Te\({}_{3}\) & 895 & 141 & 201 & 57 \\ \end{tabular} \end{table} Table 2: Basis functions considered in the calculations of the five studied systems. \(N_{\rm LAPW}\) is the number of LAPWs, \(N_{\rm LO}\) (\(N_{\rm LO}^{1/2}\)) the number of LO basis functions for calculations without (with) Dirac-type LOs. The last column shows the number of occupied valence states, \(N_{\rm occ}\). Figure 1: Convergence behavior of the total energy with respect to the number of basis functions \(\tilde{N}_{\rm b}^{\rm SV(LO)}\) (excluding occupied states). The energy of the NP calculation is taken as a reference. Blue circles indicate the SVLO scheme, green diamonds the conventional SV treatment. The right panel zooms into the gray region where the SVLO method converges. states in addition to the LO basis functions. In the Appendix, we explain why the SV and the SVLO method do not converge to the same precision. ### MoS\({}_{2}\) The transition-metal dichalcogenide MoS\({}_{2}\) is among the most studied 2D materials, and a candidate for many Figure 2: Convergence behavior of total energy, energy gap, and SOC splitting in Xe, MoS\({}_{2}\), PbI\({}_{2}\), and CsPbI\({}_{3}\), with respect to the number of basis functions used in the SV(LO) methods. Note the logarithmic scale on the y-axes. For the energy differences, the NP results serve as a reference. In the NP reference calculations, we employ sets of \(p\) or \(p_{1/2}\) LOs, depending on the method to compare with. Green diamonds stand for the SV method with SR LOs. All other results are obtained with the SVLO method, using different types of LOs: those obtained using SR (Dirac-type) orbitals are indicated by blue circles (red triangles). The vertical lines mark the respective number of LO basis functions. For PbI\({}_{2}\), we display \(\delta^{2}_{\text{SOC}}\) and the energy difference \(\delta^{1}_{\text{SOC}}\) (both indicated in Fig. 3). 
The gray shaded areas are guides to the eye for highlighting the points which are within the target precision (\(10^{-2}\) eV/atom for the total energy and \(10^{-2}\) eV for the other quantities). applications in optoelectronics. SOC reduces the energy gap by 0.07 eV only (Table 3), caused by a splitting of the VBM. Although this splitting is rather small, _i.e._, 0.15 eV, it is fundamental as, being not considered, could lead to the unphysical prediction of an indirect band gap [41]. Moreover, the splitting at the K-point of the Brillouin zone (BZ) is essential for the accurate description of the optical spectra [1; 2]. Regarding the convergence behavior (second column of Fig. 2), we observe a small improvement of the SVLO method over the SV method for the total energy: With around 40 basis functions, the SVLO (SV) method reaches a precision of the order of \(10^{-3}\) eV/atom (\(10^{-2}\) eV/atom). For the energy gap and the SOC splitting, both methods reach convergence with a few basis functions and reproduce the NP treatment with a precision of the order of \(10^{-4}\) eV. \begin{table} \begin{tabular}{l|c c c c c|c c c c|c|c} Method & \multicolumn{4}{c|}{\(E_{\rm g}\) [eV]} & \multicolumn{4}{c|}{\(\delta_{\rm SOC}\) [eV]} & \(E_{\Gamma\to\Gamma}\) [eV] \\ \cline{2-11} & Xe & MoS\({}_{2}\) & PbI\({}_{2}\) & CsPbI\({}_{3}\) & Bi\({}_{2}\)Te\({}_{3}\) & Xe & MoS\({}_{2}\) & PbI\({}_{2}\) & (\(\delta_{\rm SOC}^{2}\)) & PbI\({}_{2}\) & (\(\delta_{\rm SOC}^{2}\)) & CsPbI\({}_{3}\) & Bi\({}_{2}\)Te\({}_{3}\) \\ \hline SR & 6.22 & 1.78 & 2.20 & 1.64 & 0.25 (\(\Gamma\to\Gamma\)) & - & - & 0.94 & - & - & 0.25 \\ SVLO (\(p\)) & 5.79 & 1.71 & 1.66 & 0.82 & 0.10 (B\(\to\)B) & 1.30 & 0.15 & 1.25 & 0.63 & 0.71 & 0.58 \\ SVLO (\(p_{1/2}\)) & 5.79 & & 1.40 & 0.55 & 0.03 (D\(\to\)C) & 1.40 & & 1.45 & 0.68 & 0.76 & 0.69 \\ \end{tabular} \end{table} Table 3: Energy gaps, \(E_{\rm g}\), and SOC splittings, \(\delta_{\rm SOC}\), of the considered materials computed with the SVLO method for different sets of LOs. For comparison, scalar-relativistic (SR) results are shown. For Bi\({}_{2}\)Te\({}_{3}\), that does not exhibit any SOC splitting, we show the energy difference between the highest valence band (VB) and the lowest conduction band (CB) at \(\Gamma\), \(E_{\Gamma\to\Gamma}\). Note that in this material, SOC not only changes the magnitude of the gap but also the position of the VBM and the CBm. Both are again altered when Dirac-type LOs are considered. Figure 3: Band structures of Xe (upper-left panel), Mo\({}_{2}\) (upper-right panel), PbI\({}_{2}\) (lower-left panel), and CsPbI\({}_{3}\) (lower-right panel) computed with different methods and types of local orbitals. Black lines correspond to SR calculations and blue (red) lines to the SVLO method without (with) Dirac-type orbitals. The VBM is set to 0. At the right of each panel, we zoom into the corresponding region indicated by a gray box. ### PbI\({}_{2}\) Lead iodide, PbI\({}_{2}\), is a semiconductor used for detectors, and it is also a precursor for the heavily investigated solar-cell materials, the lead-based halide perovskites. Like in the latter, in PbI\({}_{2}\), the SOC effects are massive. The band gap reduces by 0.54 eV (Table 3, Fig. 3) and (disregarding spin) the two-fold degenerate second conduction band (CB) experiences a splitting of \(\delta^{2}_{\rm SOC}\)= 0.63 eV. There is also an increase in the energy distance between the CBm and the second CB which is 0.31 eV (Table 3). 
For convenience, we label this energy difference as \(\delta^{1}_{\rm SOC}\). For PbI\({}_{2}\), the advantages of the SVLO method over SV are considerable. With a number of basis functions comparable to the number of LO functions (here 73, see Table 2), the SVLO method has reached the target precision for all considered quantities (Fig. 2). Contrarily, the SV method, requires basically all empty states (\(\sim\)375 basis functions) for reaching the target precision for the total energy; \(\sim\)225 empty bands are needed for the energy gap and \(\sim\)150 for \(\delta^{1}_{SOC}\), while only \(\sim\)10 empty states for \(\delta^{2}_{SOC}\). Except for the total energy, for which the SV method converges to a precision of the order of \(10^{-2}\) eV/atom and the SVLO method of the order of \(10^{-5}\) eV/atom, both approaches converge to comparable precision. For an accurate prediction of the electronic structure, \(p_{1/2}\)-type LOs are crucial (Fig. 3). We add 4 for each species, with a total of 36 LOs basis functions (see Table 2). They reduce the energy gap further by 0.26 eV and increase \(\delta^{1}_{\rm SOC}\) by additional 0.20 eV. \(\delta^{2}_{\rm SOC}\) increases by 0.05 eV only (Table 3). The convergence behavior with \(p_{1/2}\)-type LOs is overall comparable to that with SR LOs. Note that the two curves appear shifted by these 36 additional basis functions. Although this number of LO basis functions is considerable for such a system (see Table 2), the speed-up with respect to the SV method is significant also when Dirac-type LOs are employed. ### CsPbI\({}_{3}\) CsPbI\({}_{3}\) is among the most studied inorganic metal halide perovskites [23; 42]. We consider it in the orthorhombic \(\gamma\)-phase that contains 20 atoms. Being composed by three heavy elements, SOC effects are enormous. The band gap decreases from 1.64 eV with SR to 0.82 eV when SOC is considered (Table 3). This is caused by a 0.71 eV splitting of the (disregarding spin) two-fold degenerate CBm (Fig. 3). When Dirac-type LOs are added, the gap further reduces by 0.27 eV, while the splitting increases by only 0.05 eV (Table 3). Although SV and SVLO(\(p\)) converge to the same results within the target precision, the computational effort required for the two approaches is noticeably different. In the limit of large unit cells -CsPbI\({}_{3}\) is the largest one considered here- the dominant contribution to the run time comes from the tasks that scale cubically with respect to the system size. These tasks include the construction of the Hamiltonian matrices and the diagonalization. As shown in Fig. 2, a converged SV calculation requires that essentially all unoccupied bands are included for solving the full problem. In this light, SV does not offer any advantage over the NP approach. In contrast, to converge the SVLO(\(p\)) calculation, it is sufficient to use a significantly smaller basis with \(N_{\rm occ}=228\), \(N_{\rm LO}=496\), and \(N_{\rm unocc}\sim 0\) (see Table 2). Taking into account the spin degrees of freedom, the size of the Hamiltonian matrix in the SV step is \(\sim\)1500. As discussed above, diagonalization is also required in the FV step, where the dimension of the SR Hamiltonian is \(\sim\)3800. This step is therefore the most computationally intensive in this example. Compared to the NP calculation, we find that total time spent on the FV and SV steps is reduced by a factor of 3.6. 
Finally, the inclusion of \(p_{1/2}\)-type LOs increases \(N_{\rm LO}\) to 736 and thus also slightly increases the size of the SV diagonalization problem. ### Bi\({}_{2}\)Te\({}_{3}\) Bi\({}_{2}\)Te\({}_{3}\) is a topological insulator with a single Dirac cone at \(\Gamma\)[43; 44]. It is characterized by strong SOC effects, shifting the fundamental band gap from \(\Gamma\) to an off-symmetry point in the mirror plane of the first Brillouin zone that is displayed in the bottom panel of Fig. 4. The positions of the VBM and CBm are highly sensitive to the structure and the choice of the exchange-correlation functional, thus there are controversial results present in the literature. Ref. [45] presents an overview of this diversity that increases when more accurate methods, such as the \(GW\) approximation, are applied [46; 47]. All these aspects together make Bi\({}_{2}\)Te\({}_{3}\) computationally challenging. Bi\({}_{2}\)Te\({}_{3}\) crystallizes in a rhombohedral structure with R-3m symmetry, shown in the top panel of Fig. 4. It consists of five layers, with alternating Te and Bi sheets, repeated along the z-direction. There are two chemically inequivalent Te sites. Including SOC, the band structure undergoes significant changes that are further enhanced when Dirac-type LOs are added [48; 49] (top and middle panels of Fig. 5). A relevant difference is observed at \(\Gamma\) where the valence band (VB) and the CB obtained from SOC calculations show a hump as a consequence of the band-inversion characteristic of this material [43; 45]. Differently from similar topological insulators, the hump is well preserved in spinor \(GW\) calculations, which include the off-diagonal elements of the self-energy, even though the band dispersion is strongly altered [47]. SR calculations lead to a direct band gap of 0.25 eV at \(\Gamma\) (Table 3). By adding SOC -but no Dirac-type LOs- it reduces to 0.10 eV and is located at point B=(0.67, 0.58, 0.58), which appears sixfold in the BZ. Our results are comparable with those of Ref. [49] and Ref. [50]. In the former, a direct band gap of 0.13 eV at (0.667,0.571, 0.571) was measured, while in the latter, a value of 0.11 eV was computed, but different from our result, it was reported to be indirect. However, VBM and CBm are very close to each other being located at (0.652, 0.579, 0.579) and (0.663, 0.568, 0.568), respectively. One may assign these differences to the use of different \(\mathbf{k}\)-grids and crystal structures (here we use \(a=10.44\) A and \(\theta=24.27^{\circ}\)[33], while Refs. [49; 50] use an experimental structure with \(a=10.48\) A and \(\theta=24.16^{\circ}\)). By adding 4 \(p_{1/2}\)-type LOs for each species, _i.e._, a total of 60 basis functions (Table 2), the gap reduces to 0.03 eV and becomes indirect. In the bottom panel of Fig. 5, we observe that the VB is lowered at B and raised at D=(0.52, 0.35, 0.35) where the VBM is now located. The CB is not altered at B but lowered at C=(0.65, 0.54, 0.54) which is the approximate location of the CBm (the resolution being limited by the \(48\times 48\times 48\)\(\mathbf{k}\)-mesh). Points C an D are six-fold degenerate. D is located between \(\Gamma\) and A=(0.64, 0.43, 0.43), which lies on the path between \(Z\) and \(U\). C and B are close to \(Z\to F\). Larson [48] and Huang and coworkers [49] obtained gaps of 0.05 eV and 0.07 eV, respectively, with \(p_{1/2}\)-type LOs. 
The locations of the band extrema slightly differ between the three works where ours is in better agreement with that of Larson [48]. As evident from Fig. 6, for Bi\({}_{2}\)Te\({}_{3}\), our new method has clear advantages over the conventional SV method, which reaches the target precision for the total energy only with basically all available FV KS states (about Figure 4: Top: Crystal structure of Bi\({}_{2}\)Te\({}_{3}\), built by Bi (pink) and two chemically inequivalent Te atoms (Te\({}_{1}\) orange, Te\({}_{2}\) gold). Bottom: Corresponding Brillouin zone. The mirror plane containing the points depicted in the band structure in Fig. 5 is indicated in red. Figure 5: Band structure of Bi\({}_{2}\)Te\({}_{3}\), computed without (top panel) and with SOC (other panels). The coordinates of the high-symmetry points are U (0.823,0.339, 0.339), Z (0.5,0.5,0.5), F (0.5,0.5,0.0), L (0.5,0.0,0.0); those of points A, B, C, and D are given in the text. The dashed vertical lines in the two top panels indicate the position of point D. The bottom panel zooms into the region of the band edges, showing the direct (indirect) band gap computed with \(p\) (\(p_{1/2}\)) LOs. Note that, differently from Fig. 3, the energy zero is not at the VBM but in the middle of the band gap. \(\sim\)1000, see Table 2), while the SVLO method requires a basis-set size comparable with the number of LO basis functions (141 for the \(p\)-set and 201 for the \(p_{1/2}\)-set). The SVLO method converges in either case within a precision of \(10^{-4}\) eV/atom. Like for the other materials, the electronic structure obtained by SV, converges faster than the total energy, but the convergence is still not comparable with that of the SVLO method. To obtain an energy gap within a precision of \(10^{-2}\) eV, the SV method requires about twice the number of basis functions (without including the occupied states); to obtain \(E_{\Gamma\to\Gamma}\) with a precision of \(\sim 10^{-4}\) eV, the basis size needs to be further doubled. The band gap converges to a precision of \(\sim 10^{-5}\) eV, while the SV method cannot go lower than \(10^{-4}\) eV. ## V Discussion and Conclusions In this work, we have introduced a novel approach - the SVLO method- to treat spin-orbit coupling in DFT calculations efficiently. It allows us to obtain rapid convergence and highly precise results, _e.g._, band energies within the order of \(10^{-4}\) eV or even better. SOC splittings and total energies within a precision of \(10^{-2}\) eV and \(10^{-2}\) eV/atom, respectively, can actually be obtained with a number of basis functions that is comparable to the number of occupied states plus a set of LOs. Its efficiency is owing to the fact that SOC effects mainly come from regions around the atomic nuclei where atomic-like functions play the major role in describing them. We have demonstrated this method with examples of very different materials. The use of the SVLO method is most efficient when SOC effects are strong. In the cases, we also observe significant contributions of \(p_{1/2}\) LOs. Obviously, the overall gain of our method is getting more pronounced the bigger the system is. In summary, by providing a method that allows for reliable and efficient calculations of SOC, our work contributes to obtaining highly-accurate electronic properties at the DFT level. ###### Acknowledgements. 
This work was supported by the German Research Foundation within the priority program SPP2196, Perovskite Semiconductors (project 424709454) and the CRC HIOS (project 182087777, B11). A.G. acknowledges funding provided by European Regional Development Fund via the Central Finance and Contracting Agency of Republic of Latvia under the grant agreement 1.1.1.5/21/A/004. Partial support from the European Union's Horizon 2020 research and innovation program under the grant agreement N\({}^{\circ}\) 951786 (NOMAD CoE) is appreciated. ## Appendix In the examples discussed above, the SV method often converges to worse precision than the SVLO method. This may appear counter intuitive since the two methods should be equivalent if the SV basis includes all available FV KS states. The reason for the seeming discrepancy comes from the fact that the LAPW basis-set size may be different at different \(\mathbf{k}\)-points, depending on their symmetry. In contrast, in the SV method, the size of the basis is controlled by an input parameter and limited by the number of available FV KS orbitals. In our implementation, the same number is considered for all \(\mathbf{k}\)-points. In Fig. 7, we show for the example of Xe that -when carrying out the SVLO calculation with a single \(\mathbf{k}\)-point- the two methods reach the same precision (of the order of \(10^{-6}\) eV) for all analyzed properties. In this case, all KS orbitals can be used as basis functions in the SV method. We emphasize, however, that the inclusion of all KS states is not efficient and thus not desirable anyway.
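Since the SVLO basis is not orthogonal, the second-variational step amounts to a generalized eigenvalue problem, as noted in the Method section. The following toy sketch (random matrices, not data produced by exciting) illustrates this point: diagonalizing with an explicit overlap matrix reproduces the spectrum obtained in an orthonormal basis.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8   # toy basis size (think: modified FV states plus LOs)

# Toy Hermitian Hamiltonian expressed in an orthonormal reference basis
h0 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h0 = 0.5 * (h0 + h0.conj().T)

# Columns of b: a non-orthogonal basis spanning the same space
b = np.eye(n) + 0.1 * rng.normal(size=(n, n))

# Hamiltonian and overlap matrices expressed in the non-orthogonal basis
h = b.conj().T @ h0 @ b
s = b.conj().T @ b

# Generalized eigenvalue problem  H c = eps S c, as in the SVLO step
eps, c = eigh(h, s)

# Same spectrum as direct diagonalization in the orthonormal basis
print(np.allclose(eps, np.linalg.eigvalsh(h0)))   # True
```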
2310.00658
The Robots are Here: Navigating the Generative AI Revolution in Computing Education
Recent advancements in artificial intelligence (AI) are fundamentally reshaping computing, with large language models (LLMs) now effectively being able to generate and interpret source code and natural language instructions. These emergent capabilities have sparked urgent questions in the computing education community around how educators should adapt their pedagogy to address the challenges and to leverage the opportunities presented by this new technology. In this working group report, we undertake a comprehensive exploration of LLMs in the context of computing education and make five significant contributions. First, we provide a detailed review of the literature on LLMs in computing education and synthesise findings from 71 primary articles. Second, we report the findings of a survey of computing students and instructors from across 20 countries, capturing prevailing attitudes towards LLMs and their use in computing education contexts. Third, to understand how pedagogy is already changing, we offer insights collected from in-depth interviews with 22 computing educators from five continents who have already adapted their curricula and assessments. Fourth, we use the ACM Code of Ethics to frame a discussion of ethical issues raised by the use of large language models in computing education, and we provide concrete advice for policy makers, educators, and students. Finally, we benchmark the performance of LLMs on various computing education datasets, and highlight the extent to which the capabilities of current models are rapidly improving. Our aim is that this report will serve as a focal point for both researchers and practitioners who are exploring, adapting, using, and evaluating LLMs and LLM-based tools in computing classrooms.
James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Peterson, Raymond Pettit, Brent N. Reeves, Jaromir Savelka
2023-10-01T12:54:37Z
http://arxiv.org/abs/2310.00658v1
# The Robots are Here: ###### Abstract. Recent advancements in artificial intelligence (AI) are fundamentally reshaping computing, with large language models (LLMs) now effectively being able to generate and interpret source code and natural language instructions. These emergent capabilities have sparked urgent questions in the computing education community around how educators should adapt their pedagogy to address the challenges and to leverage the opportunities presented by this new technology. In this working group report, we undertake a comprehensive exploration of LLMs in the context of computing education and make five significant contributions. First, we provide a detailed review of the literature on LLMs in computing education and synthesise findings from 71 primary articles, nearly 80% of which have been published in the first 8 months of 2023. Second, we report the findings of a survey of computing students and instructors from across 20 countries, capturing prevailing attitudes towards LLMs and their use in computing education contexts. Third, to understand how pedagogy is already changing, we offer insights collected from in-depth interviews with 22 computing educators from five continents who have already adapted their curricula and assessments. Fourth, we use the ACM Code of Ethics to frame a discussion of ethical issues raised by the use of large language models in computing education, and we provide concrete advice for policy makers, educators, and students. Finally, we benchmark the performance of LLMs on various computing education datasets, and highlight the extent to which the capabilities of current models are rapidly improving. There is no doubt that LLMs and other forms of generative AI will have a profound impact on computing education over the coming years. However, just as the technology will continue to improve, so will our collective knowledge about how to leverage these new models and tools in educational settings. We expect many important conversations around this topic will emerge as the community explores how to provide more effective, inclusive, and personalised learning experiences. Our aim is that this report will serve as a focal point for both researchers and practitioners who are exploring, adapting, using, and evaluating LLMs and LLM-based tools in computing classrooms. ## CCS Concepts * Social and professional topics \(\rightarrow\) Computing education; * Computing methodologies \(\rightarrow\) Artificial intelligence. ## Keywords AI; artificial intelligence; code generation; Codex; computer programming; Copilot; CS1; GitHub; GPT; large language models; LLM; novice programming; OpenAI; pedagogical practices ### ACM Reference Format James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albulwi, Michelhe Craig, Hieke Keuming, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Petersen, Raymond Pettit, Brent N. Reeves, and Joramir Savelka. 2023. The Robots are Here: Navigating the Generative AI Revolution in Computing Education. In _Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2 (ITCSE 2023), July 8-12, 2023, Turku, Finland_. ACM, New York, NY, USA, 51 pages. [https://doi.org/10.1145/3587103.3594206](https://doi.org/10.1145/3587103.3594206) ## 1. Introduction Many disruptions to computing education - and education globally - have occurred in the past few years. During the COVID-19 pandemic, students adapted to learning online in unprecedented ways. 
It was during this time Generative AI became available to the public with the November 2022 release of ChatGPT being the main catalyst. Suddenly students are not just learning _about_ AI in advanced computer science courses, but _using_ it. Unlike before, they are not using it just passively where AI powers some aspect of the tools they might use (such as Google Translate where AI _transforms_ data), but in an active manner where students are knowingly and intentionally using and interacting with AI as a tool to _generate_ new data with natural language prompts. These generative tools have much broader capabilities than what was available just a few years ago and can be used in all disciplines including computing for myriad tasks. In computing education, researchers have demonstrated that these models have an increasing capacity to perform source code generation and interpretation through a natural language interface (Zhou et al., 2021). For instance, it is likely that pair programming might evolve in some cases from two students working together to a student and their LLM working together (Zhou et al., 2021). Many of these models are easily available and free for students, and early reports reveal that students are already using them for assistance on their assignments (Zhou et al., 2021). In addition, there is now at least one textbook, published in September 2023, which features the use of Generative AI - specifically GitHub Copilot and ChatGPT - from day 1 of introductory programming courses (Zhou et al., 2021). The profound impacts of LLMs on computing education are still not entirely known but are already being felt by educators (Zhou et al., 2021). The evidence gathered over the past few decades about how students learn best supports the commonly adopted approach of having students write many small programs checked by automated assessment tools over the course of their introductory terms (Zhou et al., 2021). However, this approach may have become obsolete given how easily most LLMs can now solve introductory computing problems with simple prompts (Zhou et al., 2021; Zhou et al., 2021; Zhou et al., 2021). Furthermore, generative AI models can provide wrong or biased answers, and students may also become over-reliant on LLM tools or generate code plagiarised from online sources by the model (Zhou et al., 2021). The models might generate code students do not understand (Zhou et al., 2021) or may distract them with large blocks of text they did not write (Zhou et al., 2021). Teachers may look to AI detectors to enforce some semblance of normal, but evidence is mounting that these tools are currently ineffective (Zhou et al., 2021). However, these models offer computing educators opportunities in addition to the aforementioned challenges. Recent research has shown promising possibilities for providing students with LLM partners in pair programming, given the right context and with the right scaffolding and support (Zhou et al., 2021; Zhou et al., 2021; Zhou et al., 2021). LLMs can also provide detailed code explanations to support students working through difficult problems (Zhou et al., 2021; Zhou et al., 2021) and can even explain error messages (Zhou et al., 2021) known to have vexed students for decades (Zhou et al., 2021). Instructors can also benefit as these models can rapidly generate new and personalised teaching materials and programming assignments (Zhou et al., 2021; Zhou et al., 2021). 
Most exciting are the opportunities for entirely new types of programming problems utilising LLMs, such as Prompt Problems (Zhou et al., 2021). Large language models will have a profound impact on computing education in the next decade as the technology matures and as teachers and researchers identify opportunities. LLMs will change how, what, and whom we teach not only in computing but in all of education (Zhou et al., 2021). This working group1 report aims to summarise these early movements in computing education to set an agenda for researchers and to collect effective practices for educators. Footnote 1: [https://github.io/](https://github.io/) ## 2. Contributions This working group report describes the following efforts that, taken together, aim to describe the current state of LLM issues in computing education and to set out a comprehensive vision of the future of programming education in the LLM era: 1. **Reviewing Literature (Section 3):** We review the existing literature on LLMs in computing education2 and present a guide to the opportunities and challenges of LLMs in this domain. Footnote 2: Through August 2023. 2. **Evaluating Current Attitudes (Section 4):** We conducted an international survey of students and instructors to obtain their perspectives of LLMs. From this data, we provide a snapshot of current attitudes toward LLMs and their uses. 3. **Identifying New Instructional Approaches (Section 5):** We interviewed instructors in terms of teaching about and/or using LLMs in the classroom. They provide insight into advantages and disadvantages of using LLMs in computing education. 4. **Exploring Ethical Implications of LLMs (Sections 6 and 7):** We perform an evidence-based ethical analysis on the use of LLMs in computing education by evaluating the AI policies of several leading universities in the context of the ACM Code of Ethics. These examples suggest how universities are responding - and may in the future further respond - to the ethical challenges presented by such systems. From this, we furthermore discuss academic integrity issues with LLMs and provide resources for both faculty and students to understand when it may or may not be permissible to utilise LLMs. 5. **Encouraging Replication (Section 8):** We replicate prior work using new LLMs, highlighting challenges driven by the speed at which LLMs are improving and with current standards for describing research methods. To encourage comparisons between published work, we identify appropriate, openly available datasets and identify concerns with the quality and type of datasets available. ## 3. Review of Literature The working group aimed to identify prior work that explores how large language models might impact computing education. We recognise that any such attempt in this nascent and rapidly expanding area of research will quickly become out of date but aim to establish the _status questions_ of this new research field and to provide recommendations based on the current scholarly discourse. Furthermore, we used the work we found to inform the other activities of the working group listed in Section 2. ### Method We chose to perform a scoping review to rapidly identify gaps and major themes in the literature discussing how large language models can support computing education. We explicitly considered but decided not to perform a systematic review, as the research in this area is evolving quickly and relies heavily on dissemination through non-traditional publication channels such as arXiv. 
We chose to perform one step of forward and backward snowballing (Krishnan et al., 2018) from a set of reference papers that were identified as being currently significant work in the area of large language models in computing education. We decided only one step in the snowballing phase was necessary given the recent advent of large language models in computing education. We conducted two separate phases of forward snowballing, one in May 2023 and one in August 2023, with the aim of including as much of recent work as possible. #### 3.1.1. Reference papers We collected a set of reference papers using keyword searches over three databases: (1) ACM Digital Library (Full-Text Collection), (2) Taylor & Francis Online, and (3) IEEE Xplore. These choices were guided by the book "Past, Present and Future of Computing Education Research: A Global Perspective" (Taylor et al., 2018) which includes a chapter on venues that have shaped computing education research (pp 121-150). This chapter lists 13 conference and magazine venues and two journals dedicated to computing education research literature, and our database searches were scoped to cover these venues: ACM SIGCSE Sponsored (SIGCSE Technical Symposium, ITiCSE, ICER, CompEd); ACM SIGCSE In-Cooperation (ACE, Koli Calling, COMPUTE, WiPSCE, CCSC); ACM Journal (TOCE); Taylor and Francis (CSE); and IEEE (Fi, Toe, TLT). The keywords used included "large language models" and "generative AI" as well as three common models. Queries were refined as appropriate for the different databases, and filters were used as appropriate when scoping the search, such as filtering by "SIGCSE sponsored" venues in the ACM Digital Library. In addition, the searches were conducted using a filter for dates beginning in January 2021. The start date of January 2021 was chosen based on the technological timeline of LLMs and their relevance to computing education. By January 2021, LLMs, especially with the advent of models like GPT-3 in mid-2020, started gaining significant traction and recognition in broader research and application areas. Furthermore, the integration of such advanced LLMs into computing education, pedagogically and practically, was still in nascent stages. By setting January 2021 the start date of the literature search, we aimed to capture the most recent and relevant research insights right from the outset of substantial scholarly attention towards the intersection of LLMs and computing education. As an example, the final query used when searching the ACM Digital Library was: [A1]: "large language models"] OR [A1]: "generative AI"] OR [A1]: "Codex"] OR [A1]: "GPT-3"] OR [A1]: "GPT-4"] The search was conducted on 26th April 2023 and resulted in 19 papers. For each paper, the following inclusion criteria was applied: 1. _Must mention generative AI, large language models, or a specific tool using those technologies, such as GitHub Copilot._ 2. _At least 4 pages in length (inclusive).3_ Footnote 3: This criterion rules out posters and abstracts. 3. _Written in English._ A total of 3 papers were excluded based on length and 5 for not being aligned with the topic. The resulting set of reference papers ("seed papers"), listed in Table 1, includes 10 papers. By necessity due to the age of the research area, these papers are largely published in 2022 and 2023. #### 3.1.2. Snowballing (phase 1) Each paper that cites or is cited by at least one of the reference papers was evaluated for inclusion by two working group members. 
The backward snowballing phase, which considered all papers in the reference list for each paper in the reference set, resulted in 381 papers. For forward snowballing, we used the "cited by" feature in Google Scholar at the beginning of May 2023, resulting in 132 papers. There were duplicates in this list, but we decided not to identify duplicates until the final review. Each of these 513 snowballed papers was assigned to two members of the group. At this stage of the review, the papers were not read in detail; rather, the evaluators searched for evidence that a paper should be given deeper consideration. The inclusion criteria included the three criteria used to filter the reference papers plus a publication date criterion and a content criterion: 1. _Must mention generative AI, large language models, or a specific tool using those technologies, such as Copilot, AND_ 2. _At least 4 pages in length (inclusive), AND_ 3. _Written in English, AND_ 4. _"Published" in or after 2021._ For papers published in non-traditional venues such as arXiv, this is the upload date, AND 5. _Must have direct applicability to computing education._ This criterion was refined to the following and was interpreted generously: (a) _the paper explicitly states a relation to computing education, OR_ (b) _the participants include students working on problems typical of a computing education context, OR_ (c) _the problems or inputs featured are drawn from a computing education context, OR_ (d) _the resource or tool being created is specifically designed for computing education._ Each of the five criteria had to be satisfied to include the paper. Each paper was independently evaluated twice; the evaluator could flag the paper for inclusion, exclusion, or discussion. If there was disagreement between the two evaluators or if they flagged the paper for discussion, it was evaluated by a third evaluator who made a final decision. In addition, given the subjectivity of criterion (5), all papers that were marked as not being included because of this criterion were reviewed by a third evaluator. As a result, papers were included if (a) both initial evaluators flagged it for inclusion or (b) if the third evaluator intervened due to a disagreement between the initial reviewers or when they reviewed criterion (5). All evaluators were instructed to interpret the criteria generously, to avoid exclusion of potentially relevant work. After these reviews, 46 (9.0%) papers were identified for inclusion. Finally, each of the papers selected for inclusion was read in depth by a single member of the working group. The goal of this step was to identify potential impact on future research in the area or on computing education practice. In addition, we extracted some details about the work being performed, such as the location of authors, the type of work published, and evidence of research quality. Data extraction was guided by a set of questions implemented as an online form to help standardise the process. The questions on the form are presented in Appendix A.
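To make the adjudication rule concrete, the following is a minimal sketch of the inclusion decision described above, written with hypothetical data structures; the `Vote` labels and the function are illustrative assumptions and do not reproduce the actual forms or tooling used by the working group (the extra third-evaluator pass for papers excluded only on criterion (5) is omitted for brevity).

```python
from enum import Enum
from typing import Optional

class Vote(Enum):
    INCLUDE = "include"
    EXCLUDE = "exclude"
    DISCUSS = "discuss"

def final_decision(vote_a: Vote, vote_b: Vote,
                   third_reviewer_includes: Optional[bool] = None) -> bool:
    """Two-reviewer rule: include only if both initial evaluators flag inclusion;
    any disagreement or an explicit 'discuss' flag defers to a third evaluator."""
    if vote_a == Vote.INCLUDE and vote_b == Vote.INCLUDE:
        return True
    if vote_a == Vote.EXCLUDE and vote_b == Vote.EXCLUDE:
        return False
    if third_reviewer_includes is None:
        raise ValueError("A third evaluator's decision is required for this paper")
    return third_reviewer_includes

# Example: the two initial evaluators disagree, so the third evaluator decides.
print(final_decision(Vote.INCLUDE, Vote.EXCLUDE, third_reviewer_includes=True))  # True
```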
\begin{table} \begin{tabular}{l|l|l|l} \hline \hline **Title** & **Venue** & **Year** & **Citation** \\ \hline The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming & ACE & 2022 & [(81)] \\ \hline Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models & ICER & 2022 & [(163)] \\ \hline Github copilot in the classroom: learning to code with AI assistance & JCSC & 2022 & [(149)] \\ \hline Programming Pedagogy and Assessment in the Era of AI/ML: A Position Paper & COMPUTE & 2022 & [(154)] \\ \hline My AI Wants to Know If This Will Be on the Exam & ACE & 2023 & [(83)] \\ \hline Using Large Language Models to Enhance Programming Error Messages & SIGCSE TS & 2023 & [(116)] \\ \hline Experiences from Using Code Explanations Generated by Large Language Models in a Web Software & SIGCSE TS & 2023 & [(127)] \\ Development E-Book & & & \\ \hline Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language & SIGCSE TS & 2023 & [(69)] \\ \hline Using GitHub Copilot to Solve Simple Programming Problems & SIGCSE TS & 2023 & [(179)] \\ \hline Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code & SIGCSE TS & 2023 & [(34)] \\ Generation & & & \\ \hline \hline \end{tabular} \end{table} Table 1. Reference papers used to seed the literature review. Figure 1. Phases of the literature review. During this final step and upon a more thorough reading of the papers, 8 additional papers were flagged as not being relevant. The exclusion of some papers at this stage was expected, as the reviewers had been instructed to identify any _potentially_ relevant work. These exclusions left us with 38 relevant papers (including the original ten reference papers). #### 3.1.3. Snowballing (phase 2) We ran a second phase of forward snowballing using the "cited by" feature in Google Scholar at the end of August 2023. After removing the duplicate papers that appeared in the first snowballing phase, and applying the same process described above for paper inclusion and exclusion, we ended up with a total of 71 papers (10 reference papers, 28 papers from the first snowballing phase, and 33 papers from the second). The final list of papers (including the 10 original reference papers and papers from the two snowballing phases) is shown in Table 2. Interestingly, while the second snowballing phase covered only a few months, it resulted in a number of papers that is very close to the number of papers that resulted from the first snowballing phase, which covered a period of around two and a half years. Including a second snowballing phase was motivated by the very fast pace at which the literature is growing in this area, which this finding supports. ### Descriptive statistics Statistics about the papers included in our analysis are presented in Table 3 and Table 4. The work has been presented in a range of venues, including traditional conferences and journals. However, due to rapid changes in this field, a large number of papers were published only on arXiv. Some of this work was later published in a conference or journal, but some only remains visible - and is cited from - that site. Despite the relative recency of this area of research, the papers we reviewed also used a wide range of LLMs, which is described in Table 4. The rapid pace of the field is a potential threat, however, to the results being published. 
For example, the most commonly used LLM considering papers from the first snowballing phase only (i.e. up to May 2023) was Codex, which is now no longer available, and the most recent version of GPT (GPT-4) had only a single piece of research using it. Table 4 shows the results considering all the papers we analysed (i.e. up to August 2023), where the most commonly used LLM has become GPT-3/3.5, and the most recent version of GPT has 11 papers using it. Table 4 also describes the languages being investigated. The majority of the research focuses on Python, with some work being done on Java and C. The table omits languages only explored by one paper in our set; most of these come from a single paper that investigates multiple languages. Python being the most popular language is not too surprising, however, as popular LLMs such as Codex have been reported to be most proficient in Python (Cordex, 2017). Table 5 contains our evaluation of four quality metrics reported by Hellas et al. (Hellas et al., 2017). They reported these metrics as part of a review of performance prediction research, so several of their questions are focused on work from that domain. For example, they ask, "is the value being predicted clearly defined?" We selected the most generally applicable questions, and we updated their question about threats to validity to focus specifically on whether they were discussed in an explicit subsection. Compared to their results, we find that the work in this area is reported more clearly in all four aspects measured. In particular, threats to validity are explicitly discussed in the majority, rather than minority, of cases, and slightly more of the work we examined presents an explicit research question. ### Classification of literature The papers we reviewed broadly fall into five categories, with respect to the role that the LLM plays in the study: (i) assessing the performance, capabilities, and limitations of LLMs, (ii) using LLMs to generate teaching materials, (iii) using LLMs to analyse student work (e.g. identifying errors and repairing bugs), (iv) studying the interactions between programmers and LLMs, and (v) position papers and surveys/interviews. Category (i) is by far the largest group, indicating a strong desire to assess the current capabilities and limitations of LLMs in computing education contexts. We acknowledge that some papers would fit into more than one category; in these cases, we classified the paper into the most fitting category. We now briefly summarise the main contributions of the papers included in our review, organised into these five categories. #### 3.3.1. Assessing the performance, capabilities and limitations of LLMs (35) More than thirty papers looked into assessing the performance or capabilities of large language models. Most of these looked into the performance of LLMs in generating code, often for programming exercises (Hellas et al., 2017; D'Amica et al., 2017; D'Amica et al., 2018; D'Amica et al., 2019; D'Amica et al., 2020; D'Amica et al., 2021).

Table 2. The final list of papers included in the literature review, listing the author, title, venue, and year of each included paper.

#### 3.3.2. Position papers and surveys/interviews All papers suggest that LLMs will have substantial impact on computing education and programming more generally. Five papers in the literature review used interviews to understand user perceptions and attitudes towards LLMs. The authors of one paper interviewed professional software developers on how they use code generation tools (Kazemitabaar et al., 2017). They found that the interviewees thought that generative AI has many use cases in software development. They note that while the tools do not require training to use, developers will need to understand the generated code for quality assurance, and to avoid over-reliance as the quality of code produced by these tools can vary. Three other papers report on interviews with students and instructors about their experiences with, and attitudes towards, LLMs (Liang et al., 2017; Zhang et al., 2018; Zhang et al., 2018), finding that there is no consensus about the use of LLMs in higher education, its benefits or risks, but there is general awareness of the problem of academic integrity in the light of LLMs used by students. One study reports on the experiences of five students using generative AI for assignments (Zhu et al., 2018), highlighting aspects where it offered effective support, but also many limitations. #### 3.3.3. Studying the interactions between programmers and LLMs (9) A total of nine papers looked into interactions between programmers and LLMs. Some focused on finding interaction patterns (Zhu et al., 2018; Zhang et al., 2018; Zhang et al., 2018) while others focused more on how productivity is impacted by the use of models (Zhu et al., 2018; Zhang et al., 2018), whether code produced when using AI code generators is less secure than when not (Zhang et al., 2018), and how students use code explanations generated by LLMs (Zhang et al., 2018; Zhang et al., 2018). Based on the findings of this research, students engage in different interaction modes when using AI code generators.
These include exploration (Zhu et al., 2018), acceleration (Zhu et al., 2018), shepherding (Zhu et al., 2018), and drifting (Zhu et al., 2018). In exploration, the programmer is unsure of what to do next, using the code generator for exploring potential approaches to tackle the problem. In acceleration, the programmer knows what they are doing and uses the LLM for producing the desired code faster. In shepherding, the programmer spends the majority of their time on guiding the LLM to produce the desired code. In drifting, the programmer drifts from one incorrect code suggestion to the next, indicating struggles in understanding the generated code. Kazemitabaar et al. studied how novice programmer productivity and learning are affected by the use of AI code generators (Zhu et al., 2018). They found that students who used AI code generators performed significantly better (1.15x progress, 0.59x errors, 1.8x higher correctness, 0.57x time spent) without negative effects on learning. Sandoval et al. found that using code generator tools did not seem to introduce new security risks (Zhang et al., 2018). MacNeil et al. found that students generally found code explanations generated by LLMs useful for learning, but the perceived utility of the explanations and students' engagement with them varied by explanation type (Zhang et al., 2018). One study looked at how students write prompts for LLMs (Zhu et al., 2018). To this end, the problems were presented in a visually oriented way to prevent students from copying the problem statement directly, requiring them instead to write prompts in their own words. Most students found the prompt writing to be beneficial, while a few voiced concerns. Another study (which we classified in group (i)) used student-written prompts to evaluate LLMs and found them to be an effective benchmark (Zhu et al., 2018). #### 3.3.4. Using LLMs to analyse student work (5) Five papers used LLMs to analyse student work, for example, by looking into using LLMs to fix bugs or errors in student work (Zhu et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018). Two papers looked at repairing programming errors (Zhu et al., 2018; Zhang et al., 2018), one at enhancing programming error messages (Zhang et al., 2018), and one at providing feedback to students based on a buggy student program (Zhang et al., 2018). The studies that examined the performance in bug repair both reported that their results surpassed previous state-of-the-art automated program repair results. Zhang et al. reported an overall repair rate of up to 96.5% using Codex with few-shot examples and iterative prompting (Zhang et al., 2018) and Ahmed et al. achieved a repair rate of 89.4% (Zhu et al., 2018). Leinonen et al. found that Codex could enhance programming error messages - which are notoriously hard for students to understand - about 54% of the time on average, noting that this performance is not good enough for using the model directly with students (Zhang et al., 2018). Phung et al. propose a method where instructors could balance the 'coverage' of feedback, i.e. whether a student receives feedback at all, and the 'precision' of feedback, i.e. whether the feedback is of good quality. They found that in the best case, their proposed method can achieve a precision of 72.4% with a coverage of 64.2% for one of the datasets they used and a precision of 76% and a coverage of 31.2% for the other dataset included in the study.
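To illustrate the kind of pipeline this strand of work evaluates, the sketch below asks a chat-based LLM to rewrite a programming error message in novice-friendly terms. It is a minimal example of the general technique only; it does not reproduce the setup of Leinonen et al. (who used Codex), and the model name, prompt wording, and use of the OpenAI Python client are illustrative assumptions.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and an API key is configured

client = OpenAI()

def enhance_error_message(source_code: str, error_message: str) -> str:
    """Ask an LLM to explain a programming error message in plain language.
    The model name below is an illustrative placeholder, not the model used
    in the studies discussed above."""
    prompt = (
        "A novice programmer received the following error for the code shown.\n\n"
        f"Code:\n{source_code}\n\n"
        f"Error:\n{error_message}\n\n"
        "In one or two plain-English sentences, explain what the error means "
        "and suggest a likely fix, without writing the corrected code."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage with a typical novice Python error.
code = "for i in range(10)\n    print(i)"
error = "SyntaxError: expected ':'"
print(enhance_error_message(code, error))
```

As the reviewed studies caution, output produced this way would still need to be checked by an instructor before being shown directly to students.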
One paper presents a tool to help out students with issues without revealing the full solution (Zhang et al., 2018) and reports positive feedback from both students and instructors. Other studies have also looked into mitigating risks of LLMs such as wrong answers (Zhang et al., 2018) or guiding the students with hints rather than full solutions (Zhang et al., 2018) (we categorised these papers as assessing LLMs). \begin{table} \begin{tabular}{l|l|l|l} \hline \hline LLM & Count & Language & Count \\ \hline Codex & 20 & Python & 37 \\ Copilot & 12 & Java & 6 \\ GPT-3/3.5 & 28 & C/C++ & 6 \\ Other & 7 & Javascript & 2 \\ GPT-4 & 11 & C\# & 2 \\ \hline \hline \end{tabular} \end{table} Table 4. LLMs and languages featured in the reviewed literature. Some papers reported on more than one (or no specific) LLM or language, so the counts do not match the number of papers reviewed. \begin{table} \begin{tabular}{l|c} \hline \hline Venue & Count \\ \hline ACM & 22 \\ ArXiV & 32 \\ IEEE & 5 \\ \hline Other Publishers & 9 \\ Grey Literature & 3 \\ \hline \hline \end{tabular} \end{table} Table 3. Venues presenting the work included in our literature review. #### 3.3.5. Using LLMs to generate teaching materials (5) Five papers looked into using LLMs to generate teaching materials (Friedman, 1979; Goyal, 1980; Goyal, 1981; Goyal, 1982; Goyal, 1983). In two cases, the teaching materials being generated were programming exercises (Goyal, 1980; Goyal, 1982). One of the main findings in both papers was that LLMs can be coaxed into generating exercises with prescribed themes (such as basketball or cooking) and programming concepts (such as loops or conditionals). In addition, the exercises generated by LLMs were novel and sensible, although the authors cautioned that the quality might not be good enough to provide the LLM-generated exercises directly to students. Another study that compared LLM-generated content with student-generated content concluded that the quality is comparable, but still recommends further research (Goyal, 1980). Similarly, LLMs were found to be able to generate reasonable learning objectives (Goyal, 1983). One paper presents a new tool, but does not offer an actual evaluation of it (Goyal, 1980). All papers suggest that using LLMs could help instructors save time in generating teaching materials. ### Educational opportunities and risks The broader literature on LLMs and their potential effects is often organized around the dichotomy of opportunities and risks (Goyal, 1980; Goyal, 1982; Goyal, 1983). For example, Bommasani et al. produced an extensive report documenting the opportunities and risks of foundation models across a broad variety of domains, including education (Goyal, 1980). Kasneci et al. documented similar opportunities, risks and mitigation strategies, specifically focusing on the use of ChatGPT in education (Kasneci et al., 2017). Among the papers in our dataset, there was broad agreement that LLMs would have a major impact on teaching and learning in computing courses. Authors identified various opportunities and risks for both students and teachers and we present these in the following sections. ### Opportunities The papers we reviewed identified a number of potential opportunities that could positively impact computing education. 
One of the prominent opportunities that emerged was related to reducing instructor workload, for example by generating large banks of diverse learning resources and support materials (Goyal, 1980; Goyal, 1982; Goyal, 1983), automating various aspects of the grading process (Goyal, 1980), and providing personalised help to students who are struggling and who would otherwise consume considerable instructor effort (Goyal, 1980; Goyal, 1983; Goyal, 1983). Related to this theme, Bull and Kharrufa argue that the type of scaffolding that AI tools can provide is able to "support the student in their learning and... offload some of that... burden from the educator" (Kasneci et al., 2017). Improving the learning experience for students was another common opportunity that emerged. Several papers described the idea of using an LLM as an assistant or pair programmer, which represents a significant change from current pedagogical practice (Goyal, 1980; Goyal, 1983; Goyal, 1983). As a concrete example of the kind of assistance that could be provided while students are programming, Leinonen et al. suggest that LLMs could help students understand terse error messages (Goyal, 1983) which have traditionally been a source of difficulty for novice learners (Goyal, 1983). Several groups also identified opportunities for creating new tools around LLMs, e.g., to support repair of syntactically incorrect code (Goyal, 1980), to help answer questions (Kasneci et al., 2017; Kasneci et al., 2017) or provide hints (Goyal, 1983), or even to support the crowdsourcing of new questions (Goyal, 1980). Indeed, some recent papers found in the second snowballing phase focused on introducing tools around LLMs, such as CodeHelp (Goyal, 1983) and Promptly (Kasneci et al., 2017). Despite the promise of using AI tools for learning support, Dakhel et al. and Prather et al. caution that although they can be a great asset for professional developers, they may be less helpful for novices if the tools generate non-optimal or erroneous outputs which could cause confusion (Kasneci et al., 2017; Goyal, 1983). As the quality and performance of the models improve, this may be less of an issue as time goes by. Another recurring opportunity mentioned in the papers we reviewed was the potential for a renewed focus on problem solving. For example, Vaithilingam et al. explored the usability of code generation tools and suggest they can be used to rapidly provide a good starting point for a solution, thus allowing programmers to focus on the problem solving process and reducing the need for a focus on lower-level details (Goyal, 1983). In a similar vein, Denny et al. (Denny et al., 2017) and Prather et al. (Prather et al., 2017) explore the use of Copilot in two different contexts, suggesting it can be used to teach students how to express problem solutions in natural language, and to focus on guiding students through problem-solving strategies, respectively. \begin{table} \begin{tabular}{l|l} \hline \hline Is there a clearly defined research question/hypothesis? & Yes: 44; No: 18; Vague/Unclear: 9 \\ \hline Is the research process clearly described? & Yes: 55; No: 10; Vague/Unclear: 6 \\ \hline Are the results presented with sufficient detail? & Yes: 57; No: 6; Vague/Unclear: 8 \\ \hline Are threats to validity / limitations addressed in an explicit (sub)section? & Yes, in a separate (sub)section: 38; Yes, but not in a separate (sub)section: 15; No: 18 \\ \hline \hline \end{tabular} \end{table} Table 5.
Assessment of quality metrics adapted from Hellas et al. (Hellas et al., 2017). Finally, LLMs present clear opportunities for instructors to re-think assessment practices and reconsider what assessment means in computing courses [(69; 143; 38)]. Raman et al. suggest assessments could focus more on code understanding, such as tracing and verification, and less on syntax and code writing [(154)]. LLMs can also be used to generate a variety of flawed solutions, providing plentiful opportunities for incorporating code review tasks [(82)]. The systematic literature review of Yan et al. explored practical and ethical challenges of LLMs in education, categorising some of this prior work around the 'Assessment and grading' category, and argues that grading student assessments is a promising application of LLMs [(183)]. The impressive documented performance of code generation models like Codex on typical CS1 and CS2 problems suggests that some rethinking of assessments is essential [(83)]. In summary, the papers we reviewed suggest that LLMs present a wide array of opportunities for computing education, improving instructor productivity by reducing workload, enhancing student learning experiences, enabling a greater emphasis on problem-solving, and suggesting new assessment practices. ### Risks Several risks were identified by the papers we reviewed. Authors were concerned that generative AI could be used in ways that limit student learning or make the work of educators more difficult [(87)]. #### 3.6.1. Risks for students The learning resources produced by generative AI pose significant risks to student success. Wermelinger [(179)] and Sarsa et al. [(163)] observe that explanations of code can be a useful learning resource, but if the explanations contain mistakes then learning could be negatively impacted. Since AI generated content is presented authoritatively (and is frequently correct), students are unlikely to question the content and may learn incorrect information [(96; 51)]. Automatically generated tests may be only partially complete, leading students to inadequately test their code [(179)]. The resources created by LLMs might also have less variation than those created by humans [(68)] and thus limit the variety of examples to which students are exposed. AI generated content is not curated for specific courses, so generated learning material could potentially include syntax, programming constructs, or other content that is inappropriate for students in a given course [(34; 90)]. Example programs used by students for learning could have poor implementation or poor style, which may result in students acquiring undesirable programming habits [(81)]. The use of generative AI may also result in students spending time in unproductive ways. Wermelinger [(179)] speculated that students may spend excessive time on prompt engineering in the hope of hitting on a successful solution rather than making progress towards the solution, which was indeed observed in the study by Prather et al. [(148)] in the 'shepherding' interaction pattern. Bull and Kharrufa [(51)] suggested that it may take longer to figure out an effective prompt than to write a solution. Hellas et al. [(90)] found that LLMs tended to hallucinate issues in student code which could cause students to focus on these non-existent issues instead of the actual issues in their code.
Wermelinger [(179)] observed that explanations generated by Copilot focused on a line-by-line description of _what_ the code did rather than how it achieved the desired goal. This is supported by the findings of Sarsa et al. [(163)] who noted that Codex seems most proficient at crafting line-by-line code explanations, as opposed to e.g. higher level summaries of the code. This is akin to a multistructural explanation [(169)], which may focus student attention at lower levels of the SOLO taxonomy, rather than thinking about the overall purpose of code. However, non-code models such as GPT-3 have been found to be more apt at creating higher level code explanations [(127)]. The most common concern expressed by authors about student learning was the potential for students to become over-reliant on generative AI tools to solve problems [(115; 118; 158; 34; 82)] and assist in debugging code [(116; 140; 187)]. Students who rely on generative AI may be misled into believing they are making progress, and this illusion of capability may reduce their self-understanding about their level of mastery of the subject matter [(147; 148)]. As introductory students realise that generative AI can outperform them on most tasks, they may lose motivation to learn the material, and become demoralised about the future of computing [(115)]. Further, novice learners of programming may become overwhelmed and confused by generated code, which could add to the high levels of frustration that are common in introductory programming courses [(176; 70)]. #### 3.6.2. Risks for teachers Several authors raise concerns about the impact of generative AI on teachers and teaching practice. Unsurprisingly, issues of academic integrity were the most common concern raised. Generative AI is reported to perform very well in assessments that are commonly used in introductory courses, raising concerns that students will submit solutions that they have not created themselves [(142; 165; 34; 69; 83; 113; 158; 166)]. The solutions generated by AI cannot be easily identified as plagiarism [(149)], which requires teachers to adapt and develop new teaching strategies to ensure academic integrity is maintained [(179)]. Wermelinger [(179)] recommended that educators stop 're-dressing' toy problems because generative AI will provide good solutions, which will restrict the learning opportunities for coding, debugging and algorithmic thinking, compared to problems with interesting 'wrinkles'. Teachers who shift away from using many small problems to create larger and more authentic problems in an attempt to reduce reliance on generative AI will lose access to the quick and easy assessment methods, such as automated grading, associated with many introductory programming courses [(83)]. Educators will need to develop new resources to explicitly address LLMs and guide students instead of leaving them alone with the tool [(176)]. Teachers who use generative AI to assist in the creation of learning resources may unintentionally produce exercises that are underspecified or that contain incorrect reference solutions or inadequate/incorrect test cases [(163)]. Teachers who are concerned about the impact of generative AI may intentionally modify course delivery in ways that reduce the effectiveness of their teaching practice (e.g., by prioritising academic integrity at the cost of scaffolded programming exercises), or adjust the curriculum to shift focus away from code writing, leaving students poorly prepared for subsequent programming courses [(154)].
Although there is a growing need to teach students how to use generative AI appropriately, it is unclear how we should do so. Barke et al. (2019) discuss the need to balance the introduction of generative AI too early in the curriculum where over-reliance is a possibility, against introducing generative AI too late and fail to provide an authentic experience relevant to industry practice. Bull and Kharrufa (2018) note that it is challenging for novices to understand generative AI capability and use prompts effectively, suggesting a need to formally teach students to effectively use the tools they have at their disposal. #### 3.6.3. Risks for the community The rising use of generative AI raises concerns for the broader community. As more programming code is likely to be generated automatically, there is potential for biases to be unintentionally introduced due to the algorithm used to generate content or due to the source material used in training (Barke et al., 2019), More concerningly, generated code may contain security vulnerabilities and bugs (Barke et al., 2019; Barke et al., 2019; Barke et al., 2019). Finally, Kazemitabaar et al. (2019) found that students with more knowledge benefited more from code generation than students with less knowledge. Similarly, Nam et al. (2019) found that professionals benefited more from AI code generation than students. These findings suggest that AI code generators may widen the gap between over- and under-achievers, exacerbating teaching challenges arising from classes with heterogeneous ability levels. In response to this, Prather et al. discuss design considerations for generative AI tools that could eschew these risks and lead to more direct benefit for novice programmers (Razemitabar et al., 2019). ### Limitations and threats to validity Our literature review was conducted from April to August 2023, so work published after this point or with low visibility will have been missed. Due to the relatively recent emergence of powerful large language models, especially their use in the field of computing education, and fast pace of the field, only literature published in less than a three year period (from 2021 to 2023) was considered. Due to this, we also conducted only a single step of snowballing (i.e., we did not do further snowballing on the papers found in the snowballing). This may have omitted some work that failed to reference the most visible early work (our reference papers), but we do not believe that this will include significant numbers of papers or change the general trends identified in our analysis. The inclusion of results from arXiv and grey literature sources is driven by pragmatism. The machine learning community makes wide use of arXiv due to the fast-paced nature of the field, and if we omitted it, we would miss the most recent results (up to a year of papers) in an already narrow window of time. However, the inclusion of these sources admits work that has not yet undergone peer-review. During our deep review of the included papers, several reviewers raised concerns about the quality of a paper they were reading. We retained these papers as they met the inclusion criteria, and note that, on the whole, the papers appear to be ready for review and demonstrate many of the criteria for quality proposed by Hellas et al. (2018). ## 4. Survey of student and instructor perceptions about Genai Students and instructors may have quite different views of the use of generative AI tools in computing classrooms. 
For example, one of the well-documented concerns regarding generative AI in educational contexts is that students may become over-reliant on them for the generation of answers (Han et al., 2017; Razaemitabar et al., 2019). In this case, students who rely on the tools may initially take a more positive view of them when compared with instructors, however, their views may change over time. Given the speed with which generative AI tools are being developed and adopted, documenting student and instructor perceptions at the current time provides a useful snapshot of current practice and facilitates future explorations of how views may change as these tools become more embedded in the educational sequence. In this section, we report the findings from two surveys, one with computing instructors and the other with students, that we conducted from July to August 2023 with responses spanning 20 countries. We first review similar explorations in computing and other disciplines, and then after describing our methods we organize our findings around insights derived from analysis of both quantitative and qualitative data. ### Prior explorations of student and instructor perceptions Several recent studies have explored the perceptions of students and teachers toward the potential impact of generative AI in broad educational settings. Chan and Lee acknowledge a generation gap in how generative AI is perceived (Razaemitabar et al., 2019). Using an online survey involving 399 students and 184 teachers, predominantly from Hong Kong although across a diverse range of academic disciplines, they examine distinctions in perceptions, experience, and knowledge of generative AI between educators and students across different generations, classified as Gen Z (students) or Gen X and Y (teachers). They observe that while students are generally optimistic about the use of these new technologies, teachers exhibit more concerns regarding over-dependence and ethical issues, and were also more sceptical about the abilities of generative AI tools. They emphasise the urgent need for clear policies and guidelines to ensure that academic integrity is maintained and to promote equitable learning experiences. In follow-up work, Chan addresses this need by proposing an AI policy framework specifically for higher education (Razaemitabar et al., 2019). This encompasses three dimensions: pedagogical, which uses AI to enhance teaching and learning outcomes; governance, which addresses privacy, security and accountability issues; and operational, which pertains to infrastructure and training. To inform the policy, they conduct an online survey of 457 students and 180 teachers and academic staff from Hong Kong universities. They argue that the student voice plays an essential role in the drafting and implementation of policy. In general, both students and teachers reported limited experience with AI tools, suggesting potential for growth in adoption and the need for training on the effective use of AI technologies. In a related strand of work, Chan and Tsi focused specifically on the capacity of generative AI for replacing human teachers (Razaemitabar et al., 2019). Their rationale for this direct question was to assist educators in preparing for the inevitable integration of AI into educational settings. The authors review existing literature on the role of AI in the classroom and present a synthesis of its limitations, classifying these into eight categories covering 26 aspects. 
For example, the category 'Emotional and Interpersonal Skills' highlights the social-emotional competencies of human teachers and covers aspects such as human connection, cultural sensitivity and building trust and rapport. An online survey consisting of 11 closed items and several open-response questions was distributed to universities in Hong Kong and received responses from 144 teachers and 384 students. Students generally indicated an appreciation for the unique emotional qualities of human teachers, whereas they expressed concern about student misuse. Despite some variation in responses, both students and teachers generally agreed that AI is not likely to entirely replace human teachers and in particular the social-emotional competencies. A recent study by Amani used a survey to measure student and instructor perceptions of generative AI in academia with the goal of capturing perceptions, misconceptions, concerns, and current usage trends (Zastudil et al., 2018). The authors argue that it is essential to report instructor and student perceptions now given the rapid changes and improvements in the tools that are underway. Two online surveys were created, with the student-oriented survey focusing on current usage and perceptions and the instructor-oriented survey focusing on how it is affecting their current courses and how they think students should use it. Data was collected from 243 staff and 813 students at Texas A&M university, revealing a clear perception that resisting these new technologies is not feasible, and that teaching practices must adapt in response. Students value the high availability of the tools, but recognise the potential for their misuse. Forman conducted a similar online survey exploring student perceptions of ChatGPT (Zastudil et al., 2018). Analysis of 71 responses to the 7-question survey revealed that students generally had a positive long-term view of the role that such technologies would play in their lives, and that they currently relied on ChatGPT to save time when working on assignments and projects. Raman et al. investigate the factors that influence the adoption by university students of ChatGPT (Zastudil et al., 2018). Their work, which utilises Rogers' Diffusion of Innovation theory as a conceptual framework, proposes that five attributes of the technology influence its adoption, namely relative advantage, compatibility, ease of use, observability, and trialability. Their empirical analysis, which is based on a survey of 288 students delivered via Google Forms, supports their hypotheses and indicates gender-based differences in how students prioritise the attributes. Although online surveys have been a popular instrument in work exploring student and instructor perspectives, a few recent interview studies (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018; Kang et al., 2018) have investigated the impacts of generative AI on computing education research and practice. Notably, Lau and Guo recently conducted in-depth interviews with instructors to understand how they planned to adapt to the emergence of tools like ChatGPT and Copilot (Kang et al., 2018). They conducted Zoom interviews with 20 instructors from nine countries. The interviews were framed around a hypothetical question, where participants were asked to imagine a future where students had access to an AI tool that could both write perfect code for any programming problem and that was undetectable to plagiarism detection methods. 
Instructors were asked to describe how they would adapt their pedagogical approaches over the short-term and long-term. In the short-term, the primary concerns centred around cheating and plagiarism, to which instructors have responded by relying more heavily on invigilated exams and educating students about current model limitations. Longer term perspectives varied, with one school of thought aiming to resist AI tools and teaching in conventional ways, and the other aiming to integrate AI tools into the curriculum to better prepare students for the changing requirements of industry. Specific examples from this latter category included using AI to provide more personalised help to students, using assignments that focus more on code reading and critique and more open-ended design, and using AI to evaluate new kinds of assessment tasks. These instructors also viewed AI as being potentially useful for broadening participation and accessibility in computing due to its capacity for providing personalised assistance. Significantly, whether they tended towards resisting or embracing AI tools, instructors generally agreed that the objectives of computing education will likely need to change to adapt to the growing influence of AI. Zastudil et al. (Kang et al., 2018) conducted Zoom interviews with six CS instructors and 12 CS students. The analysis compared and contrasted their experiences, hopes, and concerns about the emergence of generative AI in computing education. Students and instructors aligned on key concerns such as over-reliance, model trustworthiness, and plagiarism; however, they diverged regarding how each group preferred those issues to be addressed. Students stressed the importance of crafting engaging and culturally relevant assignments as well as reducing busy work to address plagiarism, whereas instructors proposed increasing the weight of proctored exams. Students were concerned about the quality of the model's responses and instructors were concerned that students would be unable to identify wrong or misleading responses. Instructors and students were both excited about the potential for GenAI tools to shift course topics toward higher levels of abstraction, such as design patterns. Wang et al. (Wang et al., 2018) conducted a three-part study which culminated in Zoom interviews with 11 instructors. The authors found that instructors are concerned that students will misuse or over-rely on GenAI tools, but instructors did not have plans to adapt their courses due to a current lack of effective strategies. Instructors believed these problems would be harder to address in the introductory courses. Similar to the findings of Zastudil et al. (Kang et al., 2018), instructors were concerned about how incorrect model responses might lead students to develop faulty mental models. Rajabi et al. (Rajabi et al., 2018) interviewed 36 instructors in-person and 4 instructors virtually. Their interviews uncovered four primary themes that related to adapting pedagogy, plagiarism, assessment, and job preparedness. Instructors raised concerns about the trustworthiness of GenAI tools and their capacity to mislead students. However, instructors also argued that GenAI tools should not be banned because students will continue to find ways to use them.
Instructors advocated for doing in-class assignments to avoid plagiarism concerns, but acknowledged that this could increase students' anxiety about exams and grade weight--a concern that has been previously raised in computing education (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018). Based on these prior interview studies, the goal of our survey was to provide a large-scale, systematic, and international overview of the experiences students and instructors have had with generative AI in computing education contexts and to uncover their preferences for how these models should be used in computing classrooms. ### Methods for data collection and analysis To better understand the perceptions and experiences of students and instructors in computing courses as they relate to Generative AI tools, we developed two surveys--one for students and a second for instructors. We designed the surveys to have questions that were asked of both groups to facilitate comparisons between these two crucial stakeholders. This method also draws inspiration from previous studies that directly compare the responses from students and instructors to the same questions (Selley et al., 2015; Selley et al., 2016; Selley et al., 2017). We also draw inspiration from previous large-scale surveys in computing education research. The use of online surveys and recruitment of participants via bulk email such as the SIGCSE mailing list is a common method, and has been used in work by Denny et al. (Denny et al., 2017), Schulte and Bennedsen (Schulte and Bennedsen, 2017), Elarde and Fatt-Fei (Elande et al., 2017), and Dale (Dale, 2017). The following sections describe how participants were recruited and how the survey was constructed. #### 4.2.1. Recruitment and participants We recruited 57 instructors and 171 students to complete the corresponding online surveys. To recruit instructors (\(n=57\)), we sent emails to the mailing lists of computing education professional groups, including _sigcse-members_, _sigcse-australasia-members_, and _uki-sigcse-members_. The goal for targeting these mailing lists was to draw a broad sample of computing education practitioners and researchers. However, we recognise that the resulting sample likely results in a selection bias of instructors who are particularly invested in computing education compared with their peers. This is a well-known challenge, as noted by Schulte and Bennedsen (Schulte and Bennedsen, 2017). In an attempt to address this, we included a request for them to share the recruitment materials with colleagues in their department. This snowball sampling technique was also used by ITiCSE working group members to share the recruitment materials in their personal networks. To recruit students (\(n=171\)), we also used a snowball sampling method where instructors were requested to share a recruitment announcement with students through their courses and department mailing lists. In this case, it is possible that we may experience a response bias with high-achieving students being more likely to respond to the survey. To address this potential bias we included the phrase "if you have struggled with your computing courses, your voice is especially appreciated to ensure better experiences for students like you in the future". #### 4.2.2.
Questionnaire design We developed the questionnaire to focus on critical topics that have recently emerged across birds-of-a-feather discussions (Selley et al., 2015), workshops (Selley et al., 2015), and position papers (Selley et al., 2015; Selley et al., 2015). These topics include calls for curricular and pedagogical changes, consideration of ethics, and a need for replications and benchmarking. We also included questions inspired by work on plagiarism in programming assessments (Selley et al., 2015), student help-seeking behaviour (Schulte and Bennedsen, 2017), and from the related work discussed in Section 4.1. This resulted in thirty-five survey questions for the student survey (counting all open- and closed-response questions) and forty-two questions for the instructor survey. The overlap between the student and instructor surveys included twenty-seven questions that were either identical or minor rewordings to improve the readability between groups (e.g. "... using GenAI tools in ways that your instructors would not approve of?" on the student survey was reworded as "... using GenAI tools in ways that you would not approve of?" for the instructors). The questions used in the student survey are listed in Appendix B, and the instructor survey questions appear in Appendix C. #### 4.2.3. Thematic analysis To analyse responses from students and instructors for the open-response questions we followed an approach for thematic analysis similar to the reflexive process described by Braun and Clarke (Braun and Clarke, 2017). The reflexive thematic analysis process is not prescriptive, but provides guidance for the phases needed to robustly explore, interpret and report patterns in qualitative data. Given the data set size was relatively small (i.e. there were a total of nine open-response questions in common on the instructor and student surveys, and a total of 228 responses across both groups), the questions were divided amongst two researchers who analysed all responses to the questions they were assigned. Each researcher began by reading the responses to familiarise themselves with the data, and then defined succinct labels that were assigned to each response and captured important features of the data. Practically, this process used a spreadsheet in which responses were listed on the rows and the labels that were defined for coding the data appeared on the columns. The final steps of the analysis involved grouping the labels into broader themes suitable for reporting. ### Quantitative insights #### 4.3.1. Demographics We recruited 57 instructors from 12 countries with an average of 18.2 years of teaching experience. Participants lived primarily in the USA (45.6%) with others coming from the United Kingdom (17.5%), Canada (8.8%), Jordan (5.3%), and Pakistan (5.3%). The most common class sizes that instructors reported teaching were in the 11-30 (36.8%), 31-50 (26.3%), or 100-250 (24.6%) ranges. The majority of instructors self-identified as men (77.2%) with far fewer instructors identifying as women (19.3%) or nonbinary (3.5%). Through our snowball sampling method, we additionally recruited 171 student participants across 17 countries. The top five countries included New Zealand (35.7%), Jordan (17.5%), USA (14.0%), Indonesia (8.8%), and Australia (7%). About half of the students self-reported being in their first year (48.5%). Students in their second, third, and fourth years accounted for 21.6%, 19.9%, and 7% of the respondents, respectively.
The average number of courses taken was 4.6 with 38% of students only having taken one course. 90% of students had taken 10 or fewer courses. Most participants selected computer science as their major (42.7%). Additional majors included undeclared engineering (15.8%), software engineering (12.3%), computer engineering (7.6%), data science (5.3%), and information technology (4.1%). There were five participants who majored in either chemistry, supply chain, math, economics, or psychology; and there were two students who majored in physics. #### 4.3.2. Comparison of student and instructor perceptions Figure 2 summarises students' and instructors' responses to the Likert scale questions on the survey. Responses from students and instructors to questions related to experiences and expectations were largely aligned. However, some important differences emerged for questions focusing on course policies. In this subsection, we review the results and briefly discuss the implications.

Figure 2. Summaries of the survey responses from 171 students and 57 instructors: 1) students' and instructors' perspectives were compared along Likert scale responses, 2) students ranked their help-seeking preferences from 1 to 6, and 3) instructors shared their beliefs about the ethical use of Generative AI tools.

_Experience and usage._ Students and instructors shared similar experiences with GenAI tools, using them primarily for writing code and working with text. However, fewer individuals in both groups used GenAI tools for tasks involving images. Students had slightly more experience using GenAI for writing code than instructors. While the difference is currently minor, instructors should keep in mind that students may rapidly become more expert at using GenAI tools. In light of this possibility, instructors should proactively stay informed about these tools' capabilities, even if they do not intend to incorporate them into their courses. This proactive approach is crucial to ensure instructors can continue providing meaningful educational experiences to students and remain well-informed about evolving technological advancements. _Course and institutional policies._ Students and instructors were aligned in their belief that some restrictions should be placed on the use of GenAI tools in their coursework. However, there may be some misalignment around what those restrictions should be. While students had mixed opinions about whether the university policies were clear, instructors largely disagreed with the statement that the policies were clear. This misalignment could lead to challenges where students are following implied policies rather than explicit policy guidelines. Given the shared belief that use should be limited, it is important that students, instructors, and institutions be aligned on course and institutional policies. However, it should be noted that students and instructors agreed slightly more strongly that course policies were clearly defined. _Expectations and beliefs._ Based on the responses from students and instructors, there was a close alignment in expectations and beliefs regarding GenAI tools. Both groups strongly agreed that GenAI tools cannot replace human instructors and that human teachers provide more effective guidance than GenAI tools. However, both students and instructors also expected GenAI tools to play an increasing role in the future of their teaching and learning, as well as in students' future careers.
This suggests that while GenAI tools do not currently replace the value provided by instructors, it is important for instructors to reflect on and clearly define their value to students. It may be that students rely less on instructors for explanations and help, but rely on them more for curating the learning environment and ensuring that learning objectives are being achieved. #### 4.3.3. Help-seeking behaviors We surveyed students about their help-seeking behaviours to understand how prominently GenAI tools are being used by students when they require assistance. The results indicate that students predominantly continue to favour web searches as their primary resource for help. Nevertheless, GenAI tools are progressively establishing themselves as a dependable resource, surpassing online discussion forums as a preferred source of help. Interestingly, the extent of students' reliance on GenAI tools appears to be influenced by their academic program stage. Upper-level students exhibit a greater tendency to use generative AI tools over other resources, including peers, instructors, and teaching assistants. In contrast, first-year students still exhibit a preference for seeking assistance from their peers when facing challenges. This may reflect differences in the kinds of help students are seeking at different stages in their academic program or differences in their willingness to adopt new technologies, such as GenAI tools. #### 4.3.4. Ethical use of GenAI tools To better understand the ethical uses of GenAI tools, we surveyed instructors about the scenarios of use that they considered unethical. The findings reveal a consensus among instructors that auto-generating an entire assignment solution is considered unethical when students lack comprehension of the generated code. However, instructors held differing opinions on the ethics of generating solutions for an entire assignment when students possess a full understanding of the generated code or when students write code in a different programming language and then translate it into the language used in the course. In these cases, approximately half of the instructors deemed such practices ethical, while the other half considered them unethical. This suggests that instructors may be supportive of students using these tools as long as students demonstrate a clear understanding of the task and achieve the intended learning outcomes of the course. Along this line of reasoning, instructors generally concurred that it is acceptable for students to employ these tools to generate solutions for specific portions of assignments, facilitate code debugging, elucidate concepts, or enhance the style and readability of their code. These are situations where the tool could save time without negatively affecting learning outcomes. Finally, when asked about the extent to which instructors believed that their students were using GenAI tools unethically, around half (50.8%) believed that many or almost all of their students were using the tools unethically. ### Qualitative insights #### 4.4.1. Instructor use of GenAI We asked instructors to describe the ways that they currently make use of GenAI tools, seeking separate responses for text generation and code generation (Questions 16 and 17 in Appendix C). Overwhelmingly, for both types of content, the most common response from instructors was that they were not currently using GenAI tools. 
Half of the instructors reported they had not used GenAI tools for text generation and 40% said they had not used such tools for code generation. A small number of these instructors (two and four for text and code generation, respectively) indicated they planned to use GenAI tools in the near future. For example, one instructor planned to use text generation tools to aid in preparing drafts for problems (i.e. _"None currently, but plan to use in the near future for generating ungraded practice problems or first drafts of graded problems"_), and another instructor planned to start using code generating tools in the upcoming semester (i.e. _"So far I have not, but I will next spring to help write code."_). This points to an emerging interest in GenAI among the instructors in our survey and a recognition of the uses of GenAI for teaching. The primary theme to emerge from responses about the current usage of text generation tools was the creation of a wide variety of learning resources. Of these resources, equally popular were the production of assessment questions (i.e. _"Occasionally will work with ChatGPT to ideate exam questions"_) and support for various kinds of writing tasks, such as report writing and turning brief notes into longer-form prose (i.e. _"Generate readable sentences of my brief notes"_). Other types of text-based artefacts that instructors reported creating were course materials, examples for students, explanations of complex algorithms, and scenarios for highlighting ethical issues in software engineering. Several other interesting uses of text generation tools were mentioned. Some instructors described using such tools to support other tasks, such as performing background research, overcoming writer's block, and paraphrasing papers when constructing references (i.e. _"creating a reference for paper and paraphrase"_). Several instructors highlighted the summarisation capacity of GenAI tools since they can effectively condense long-form text content. One instructor used this feature to extract insights from written student feedback (i.e. _"paste in student feedback about course and ask GenAI to summarise for me"_). Finally, one instructor reported integrating text generation capabilities into other tools designed to support students (i.e. _"We are actively building a tool to help respond to common questions for students in forums"_).

**Low uptake of GenAI tools:** The survey revealed that most instructors are not currently using GenAI tools for text or code generation, but some have expressed plans to integrate them in the future. The tools, when utilised, are primarily used for creating diverse educational materials; however, satisfaction with the quality of the outputs varies.

When reporting their use of code generating tools, instructors described a variety of tasks that involve creating code in varying levels of detail. Many responses to this question described fairly generic use of such tools (e.g. _"Code writing"_, _"generating part or some of function"_), whereas some were much more specific. For example, several instructors described generating code examples that they would then give to their students to modify or analyse. One instructor described generating code as a way to help them understand the suitability of certain topics and common coding patterns (i.e. _"I use GenAI tools to write initial code on topics I am looking into including in coursework or learning more about for course purposes in order to understand common forms of code"_).
Another reported use was for generating programming exercises that could be given to students for practice. This included some novel ideas, such as asking students to compare their own code with code generated by the AI tool, and asking students to use ChatGPT to generate code and then critique the output that it produces, including highlighting necessary changes. One instructor noted that attempts to generate exercises suitable for their course were largely unsuccessful due to lack of context regarding the course structure (i.e. _"But because it lacks context about the ordering of course concepts and the goals of the exercises, it has not been much help"_). Another instructor also mentioned that such tools were not particularly helpful to them for coding, noting that they often found it quicker to write the code themselves, but that they did find value in using it to generate data (i.e. _"I used it fairly heavily in a database course to generate sample data"_). Overall, most instructors who participated in our survey were not currently using GenAI tools, although several were explicit in their plans to do so in the near future. Those that were using them were doing so to generate a broad array of educational content, including assessment questions, practice exercises and examples, although not all appeared satisfied in the outcomes. #### 4.4.2. Instructor observations of student use of GenAI We asked instructors to describe their observations regarding how students are currently using GenAI tools (Question 26 in Appendix C). The most common response to this question, mirroring the earlier results regarding their own use of the tools, was that they had not observed students using GenAI tools. However, this was relatively less common (reported by fewer than one-third of participants), suggesting that they have observed their students using GenAI tools more than they use them themselves. The next most common theme that emerged, appearing in 20% of responses, was around academic misconduct. Instructor responses for this theme indicated that students frequently use generative AI tools to cheat on their assignments, in-class exercises, projects and on exams. They noted students using AI for generating complete solutions, including "blindly copying and pasting solutions", and submitting these as their own work even when they sometimes contained advanced elements that were not taught in the course. One instructor responded to the question about how their students are using GenAI tools with: _"Comprehensively. They are feeding my assignments into ChatGPT and directly copying results and handing them in"_. This misuse of the tools was a clear concern for instructors, and highlighted problems around over-reliance (i.e. _"they don't check and don't understand the solution generated most of the time"_, and _"they don't realise that it generates something very different from what was asked"_). More positive uses of the tools were also reported. The next most common theme was around using GenAI tools to debug and understand code. Instructors reported observing students use AI for debugging purposes (i.e. _"they have used it to help fix errors and better understand compiler messages"_), to generate test data and code (i.e. _"writing test cases or code to generate test data"_), and for explaining code that they do not understand. A similar number of responses also focused on generic code writing help, such as _"to complete small coding exercises"_. 
Two instructors mentioned that the students they had observed using GenAI for writing code actually found the experience frustrating, noting that it would have been easier to write the code themselves. A related, but less common theme, was around the use of GenAI tools as a conceptual learning aid. A few instructors discussed students using AI to help them understand topics better from the class, and assisting with ideation for project work but not giving complete solutions (i.e. _"They are using them to better understand topics from class (when they miss a meeting, get distracted, whatever)"_). An interesting theme emerged around language enhancement and communication. Several instructors observed students using generative AI tools to help improve their English language skills, both in their essays and in communicating with others online, such as writing emails or making posts on forums (i.e. _"We have a variety of students using them to generate English text particularly among English language learners even for short textual interactions (like a brief regrade request)"_). However, not all instructors viewed this use positively, with one commenting (i.e. _"Students use it when they are not comfortable with their English skills, and the results of this is really frustrating/insulting to read"_). **Potential for Academic Misconduct:** Where instructors have observed students using GenAI tools, there are concerning reports of academic misconduct, including generating complete solutions for assignments. On the other hand, some instructors observed students using GenAI productively for debugging, generating test data, understanding code, and improving English skills. Finally, it is worth noting that not all of the responses to this question appear to be derived from direct observation. At least one response indicated that they had no proof but were "pretty sure" (relating to academic misconduct). While instructor responses to this question do reveal the potential for GenAI tools to be used to aid student learning, they also highlight a concerning trend of academic misconduct and over-reliance. This underscores the importance of providing clear guidelines to students in how to use such technologies productively and ethically in computing courses. #### 4.4.3. Student use of GenAI Similarly, we asked students to describe the ways that they currently make use of GenAI tools in computing courses for both text generation and for code generation (Questions 18 and 19 in Appendix B). Many of the students in our survey had not used GenAI in their courses, with around 40% of participants responding in this way for both text and code generation. This proportion was similar to that of the instructors who had reported not using GenAI. Of the students who reported not using GenAI, a small portion (fewer than 5%) refused to do so for various reasons ranging from the risks around learning to it detracting from their joy of programming (e.g. _"I do not use it at all. I love programming, I love to write programs, and I would not let anyone else do it for me"_ and _"I do not use GenAI tools in computing courses at all. Straggling and debugging is a valuable part of the learning process"_). Several students were equally emphatic about not using such tools in the future (i.e. _"i will never use GenAI for computing courses for code generation"_), and one student also refused to use GenAI tools on ethical grounds (i.e. 
_"I do not feel that the output produced by a GenAI tool can safely be called'my own work'_; _when GenAI tools use so many other people's work as input to produce their result"_). **Student Adoption of GenAI Tools:** Most students have explored GenAI tools for text or code generation. Those who do use GenAI tools most commonly apply them for paraphrasing or summarising text, for debugging errors in their own code, and slightly less often for code generation. However, some emphatically refuse to use these tools due to concerns around risks to learning and ethical issues about originality. This framing may help when providing course policies and explaining ethical considerations in syllabi (see Appendix D). With respect to text generation, the most common use (reported by 20% of students) was paraphrasing or summarising existing text, for example to improve their own writing (e.g. _"I use AI to summarise my own writing to see if the point I want to communicate is clear"_ and _"I will write something and then put it into ChatGPT to make it read better"_) or to produce a summary of a large quantity of text (e.g. _"I use it to write summaries about books I've been reading"_). A smaller proportion of students, fewer than 15%, reported using GenAI tools for writing new text, with responses ranging from very short descriptions (i.e. _"writing reports"_) to much more detailed processes including iterative development of written reports through multiple rounds of prompting (i.e. _"I keep sending prompts for it to change this and that, add some topics, reward some sentences, explain to me what this sentence means so I understand"_). The most common use of GenAI by students for coding-related tasks was debugging errors in code they had written, and this was reported by 25% of respondents (i.e. _"try to fix code when not working"_, _"Helping search for bugs"_ and _"I copy and paste the codes that gives wired error. I tell them what i am expecting but i am getting this then they tell me which part of codes are wrong"_). Code generation was the next most popular theme, with some students reporting using GenAI to generate solutions directly (i.e. _"If I was solving a question that I don't have the answers to, I would ask it to give me the solution"_), although others were more cautious about the outputs that were generated, with two students describing its use as a 'last resort' (i.e. _"I consider using it as a last resort. If I'm running short on ways that would solve a problem and have exhausted all the possible ideas I have then I ask for the explanation of the problem first and if that was unhelpful then I ask for a piece of code which I check for mistakes and incorporate in what I already have written."_ #### 4.4.4. Student perceptions of the effects of GenAI tools on employment We asked students to describe the effects they think GenAI tools will have on their prospects for future employment (Question 20 in Appendix B). There was a mix of positive and negative responses, which is consistent with the quantitative results of the corresponding Likert question (Question 17 in Appendix B). A considerable number of students (33) seemed concerned that GenAI tools will reduce job opportunities. Several were very pessimistic and went as far as to say that all jobs will be replaced by AI (e.g. _"I think that if left unchecked as it is going right now, it will eventually take over all the jobs, no matter who you are really"_). 
Others were concerned that entry-level jobs will be affected more than senior-level jobs (e.g. _"competition for entry level jobs is going to skyrocket"_ and _"entry-level opportunities will probably become quite rare"_). Along these lines, 19 students mentioned that GenAI tools will raise the expectations of employers and increase the difficulty of bootstrapping in the industry (e.g. _"I do believe that the standards that companies require will be higher, as AI has proved to be above mediocre, perhaps affecting juniors and paid interns"_). Some students argued that software engineering jobs are particularly at risk. For example, a student said: _"programmers will be among the first group to see massive job losses. I believe this because the entirely text based nature of coding is well suited to LLMs. The tech industry is also faster to adapt than other industries."_ Another student said: _"I'm genuinely concerned about companies realising they only need 1/5 or 1/10 as many software engineers... especially since GenAI can read an entire codebase and easily put together working code from a prompt from a senior dev that also passes tests created by senior devs"_. Interestingly, a student argued that competition for software engineering jobs will increase because GenAI will make learning programming easier, which _"will likely draw the attention of a lot of new people who would otherwise be uninterested in programming"_. A different concern raised by some students relates to the hiring process itself. Two students indicated that the use of AI tools to "_judge resumes_" might have a _"massive impact_" on employment. According to one student, it feels _"unethical and unfair"_ and it is _"incredibly draining to know that all that's between you and a job is a machine"_. Three students also mentioned that standing out to employers will become harder, given that many applicants might use GenAI tools to build portfolios that make them appear as solid candidates despite lacking the required skills. According to one student, such candidates _"will flood the job market_", and it will be harder for future employers to distinguish between them and those who genuinely have the required skills. These concerns were recently echoed by Armstrong et al. (Armstrong et al., 2018) who explored the impacts of automated hiring systems as "black boxes". While many students raised concerns, a good number of students (28) indicated that they are not concerned about AI taking over their jobs or about the job market being significantly impacted. Some of these students questioned the ability of GenAI to perform the tasks that humans are good at (e.g. _"I do not think coding will become obsolete though, AI isn't even close to that good yet."_). Other students questioned whether the job market will change at a fast enough pace to pose a threat to their employability in the near future (e.g. _"I strongly believe that a proper programmer will most likely not be affected in terms of employment in the coming 7-12 years by GenAI"_ and _"at least for the next ten years employment will be fine, my skills are transferable and I am always happy to learn new things."_). **Implications for Future Employment:** Students expressed mixed views on how GenAI tools might affect their future career prospects. While some believed job opportunities would decrease, others were optimistic that these tools would improve their productivity and give rise to new careers. 
Beyond this, many students thought GenAI would have a positive impact on their future employment. Eleven students mentioned that they expect more job opportunities to emerge because of advances in GenAI tools, and 32 students indicated that GenAI will have a positive effect on their productivity in the workplace. In fact, expecting an increase in work efficiency was the most commonly occurring comment amongst student responses. As one student put it: _"it will make work so much easier; no more boilerplate code, or searching forever through StackOverflow"_. According to another student, GenAI tools will _"allow for more creative flow, more interesting products because the hard work can be done easier, less research time on how to complete a job, and more time completing it"_. Additionally, several students indicated that GenAI tools help them learn better, and thus will improve the skills they need to get employed (e.g. _"It can teach LeetCode pretty good :) so I'll have better chance to pass the technical interview"_). This attitude is orthogonal to that of some of the students whose responses showed negativity towards using GenAI tools while learning. According to those students, relying on GenAI tools may reduce their understanding of the material and thus affect their future chances of employment. #### 4.4.5. Student and instructor perspectives on when GenAI tools should be allowed We asked instructors to elaborate on when they believe GenAI tools should be allowed or disallowed (Question 12 in Appendix C). The prevailing sentiment in the responses was that GenAI tools should not be used when students are learning the basics. Hence, many instructors indicated that GenAI tools should be disallowed in lower-level courses but allowed in upper-level courses. Some instructors argued that it is a function of complexity rather than course-level. For example, more complex assessments (regardless of the course level) are more appropriate for the use of GenAI tools than simpler ones that can be easily completed in their entirety by the tools. Another recurring and related theme was that allowing the use of GenAI tools depends on the course and assessment learning outcomes. For example (as described by an instructor), _"if the assignment is to create a website with the goal of learning to apply HCI principles in the design, they should be able to use GenAI or other tools for the mechanical code generation"_. However, if the goal of the assignment is to see if they can write a piece of code, then they should not use GenAI tools to generate that piece of code. An instructor argued that _"this is analogous to many other practices within the University; for example, a student would not normally have to build their own computer, but if they were on a hardware design course they might have to, and submitting a purchased machine would not satisfy the learning outcomes of the module"_. A minority of the instructor responses supported unconditionally allowing the use of GenAI tools. Some said that their use is fine as long as the student acknowledges that appropriately. Others argued that it is useless to attempt to disallow their use outside closed-exam conditions, as students will use them anyway. At the other end of the spectrum, a few responses supported always disallowing them, or disallowing them in all graded assessments (regardless of the level, type, topic, etc.). 
**Conditional Acceptance of GenAI Tools:** Both instructors and students suggested that whether the use of GenAI tools is permissible should depend on factors such as the course level, assessment type, purpose of the task, and how the tools are used. This indicates the need for nuanced guidelines and policies when dealing with GenAI in academic settings.

We asked students the same question (Question 13 in Appendix B). Student answers were more diverse and more polarised than those of the instructors. Interestingly, a good number of students (n=29) argued for disallowing the use of GenAI tools in all coursework and exams. Some also argued for completely disallowing them even outside assessments (i.e. while learning). The arguments used by these students included a range of reasons, like ethical concerns regarding how the models were trained, concerns regarding the correctness of the tools, and concerns regarding the fairness of assessments if these tools are used. However, the majority of these students argued that the use of GenAI tools _"harms learning"_ and _"defeats the purpose"_ of assessments. Some of these students made strong statements indicating that GenAI tools _"have no place in learning"_, are _"completely counter-intuitive to going to University"_, and are _"only used by people who aren't smart enough to solve problems on their own!"_. Many of the students opposing the use of GenAI tools emphasised the importance of doing the assigned work and going through the full _"discovery process"_ without _"taking shortcuts"_. A student said: _"effort is the road to success and minimising effort can create a generation of couch warriors"_. This comment captures the gist of many of the responses that linked using GenAI tools with deficient learning. Another recurring argument was that using GenAI tools in coursework and exams defeats the purpose of assessments. A student likened it to continuously _"looking to the back of a textbook for the answer"_ and another likened it to having someone _"sitting next to you and helping you"_ complete the work on which you are being assessed. On the other hand, fewer students argued for always allowing the use of GenAI tools. An argument made by several of these students was that GenAI tools are _"the future of where the industry is going"_ and thus learning how to use them is important for their success. One student said: _"when we go into employment, we will need to use whatever resources we have available to us to be as productive and efficient as possible"_. The majority of students argued for a situational or a conditional use of GenAI tools. They provided a wide range of factors that affect (in their point of view) when the use of GenAI tools is acceptable. These factors include:

* _Course level_: GenAI tools should be allowed in upper-level courses, but not in lower-level courses when students are learning the basics.
* _Assessment type_: GenAI tools should be allowed in coursework, but not in exams.
* _Assessment weight_: GenAI tools should be allowed in minor assessments that carry a small weight, but not in major assessments.
* _Task goal_: GenAI tools should be allowed if the goal is the application of already-learned concepts (e.g. to build an artifact). They should be disallowed if the goal is learning the concepts.
* _Task size_: GenAI tools should be allowed if the task is large and complex, requiring stitching many pieces together. They should be disallowed if the task is small or trivial.
While these factors relate to the assigned task, some students felt the acceptability of the use of GenAI tools should be conditional on the way the tools are used, rather than on the task itself. For example:

* _How_: Using GenAI tools with understanding is fine. Blind copying and pasting of answers is wrong.
* _How much_: Using bits and pieces or partial solutions generated by GenAI tools is fine. Using a complete solution generated fully by a GenAI tool is not fine.
* _Why_: Using GenAI tools as a last resort, when stuck, or when there is no other way of getting help is fine. Relying on GenAI tools right from the beginning is not fine.

The last category above is interesting as it assumes that, in general, the use of GenAI tools is unethical unless it is out of necessity. The following are several quotes from the student responses that support this idea:

* _GenAI can be used as a last resort when the lecturer is rather difficult to explain a material and students use GenAI when they cannot understand what is being explained or assigned at all._
* _GenAI should be allowed if the courses force us to do work manually without any mentoring. Vice versa, if the mentor is giving course completely i think GenAI should be disallowed._
* _... when you run into a dead end and even after looking online and asking a friend and either don't know or you still don't understand to go and ask GenAI for an answer._
* _I believe GenAI should be allowed sometimes when you have no one else left to ask._

## 5. Curriculum and Assessment

In the past 50+ years, a great body of research within the SIGCSE community has addressed many trends, opportunities and challenges in Introductory Programming (CS1) courses (Kolmogorov, 1998). Among these are teaching and learning approaches, new forms of assessment, shifts in content, tools, and overall course design (Kolmogorov, 1998). For example, at the turn of the millennium computing educators passionately debated whether to use an objects-first approach (or not) (Kolmogorov, 1998). Similarly, Alice, Scratch, Blockly, and other block-based programming languages have been the subject of much research (Kolmogorov, 1998; Kolmogorov, 1998; Kolmogorov, 1998). Although these developments were important for many reasons and groups of (present and future) students, they are not comparable, at least not in pace and ubiquity, to the rapid changes Large Language Models (LLMs) are currently triggering in higher education, the computing disciplines, and CS1 in particular. With LLMs available on nearly everyone's phone and laptop⁴, it is not only knowledge that is instantly retrievable but also problem explanations and solutions - in the form of programming code that is not necessarily correct. Given the pervasiveness of LLMs, this paradigm shift regarding the availability of knowledge, solutions, examples, and content (particularly in the form of code) is more comparable with the advent of the internet than with other developments in the annals of how we teach and learn computing - yet the speed of internet adoption was much slower as a whole.

Footnote 4: Acknowledging that internet access is required to access LLMs, and that subscription-based services, which may be superior to free ones, present issues of access based on means, potentially opening new divides.

Given the myriad impacts of LLMs, it is important to acknowledge that educational systems are notoriously slow to change.
Reasons for this are many, yet Lee Shulman (Kolmogorov, 1998) adds the pedagogical psychologist's perspective, pointing out that the "signature" of a profession's teaching and learning is pervasive and perpetuated at three levels: surface, deep, and implicit (i.e., curricula, pedagogy, attitudes & values). Now, however, with the seeming ubiquity of LLMs, it is inevitable that educators consider their impact on teaching, learning, assessment and delivery, leading to possible redesign of their courses at all of these levels. Based on the concept of Constructive Alignment (Kolmogorov, 1998), learning objectives need to be aligned with exercises, assignments, and assessment methods. Therefore, we discuss the relevance of LLMs for CS curricula and assessments with regard to course objectives, and course activities including formative and summative assessments. This discussion is centred on expert interviews we conducted with introductory programming educators, focusing on their changed educational views and practices to highlight how computing education is evolving (with what seems to be lightning speed) in light of LLMs.

### Methodology for expert interviews

To understand how computing curricula and assessments are currently being affected by the emergence of LLMs, we conducted an interview study with computing educators as experts in the field. The interviews were semi-structured, with an interview guide as a basis. Using the purposeful sampling method (Krishnan, 2017) led to the selection of experts via the authors' networks, who were contacted via email. Moreover, an invitation was sent out to active contributors to a discussion thread from the SIGCSE mailing list concerned with LLMs. Another recruitment attempt was made via an open question in the instructor survey, where respondents willing to elaborate on their responses in an interview could enter their contact details (see Appendix C). The most important criterion for inclusion was that educators would have concrete plans or views toward changing their current course structure, assessment, or classroom practices in light of LLMs. This is one of the main ways the present work differs from that of Lau and Guo (2018) discussed in Section 4.1. The interview guide included the following questions and follow-up questions:

* Which course(s) are you teaching in the next semester? [If they are teaching multiple courses, then try to talk about one particular course or at least make sure it is clear which course is under discussion at any point in the conversation.]
* Do you have an explicit set of written learning objectives/competency goals for this course?
* If yes, will LLMs change these goals?
* If yes, what goal will change or be removed? What goal(s) will you add?
* If no but they have informal learning objectives, ask how they think these will change (or have changed).
* Are you planning to change your pedagogy and/or learning activities because of LLMs?
* If yes, how (what did you use before, how do you change it, and why exactly)?
* Are you planning to change the assessment mechanism?
* If yes, how (what did you use before, how do you change it, and why exactly)?
* What is your vision for that course in the context of LLMs?
* Which opportunities for enhancing teaching, learning, and assessment can you think of?
* Which challenges come to your mind if you think about LLMs in the context of computing education?

These questions resemble some of the survey questions, but they allow for a more in-depth elaboration of instructors' practices.
Interviews were scheduled to last between 20 and 60 minutes. In practice most were closer to 60. They were conducted via Zoom and automatically transcribed via speech recognition software, followed by a correction loop by a human (interviewers checked transcripts of other reviewers, not their own). After the transcripts were finalised, we deleted all audio and video recordings in accordance with the protocol submitted to the University of Toronto Research Ethics Board, who approved it prior to the study. Respondents were free to decide if they wish to remain anonymous or to be named. Those that are named in this report gave their consent for this. Affiliations are noted in the acknowledgements section and on the interviewees' first mention in this section. We also allowed participants to review a draft manuscript before final publication, to ensure that they are not misrepresented. The sample comprises 22 computing educators from nine countries and five continents. Table 6 shows the locations of these 22 interviewees. The interviews were fully transcribed verbatim and served as a basis for thematic analysis (Raj to). One of the types of data they categorised was explicit learning outcomes. Of the 234 syllabi, 154 contained explicit learning outcomes. The five most common learning outcomes were: "testing and debugging," writing programs", "selection statements (if/else, etc.)", "problem solving (including computational thinking terms)", and "arrays, lists, vectors, etc.". These five objectives appeared on at least 40% of the syllabi. Looking through the full list of learning objectives, the only one that was directly related to reading code was "tracing program execution", appearing on only 3% of syllabi. Kiesler did a similar study in 2022 [105, 106] using syllabi from 35 German universities. She found that the most common objective was "writing code" and that the objective "being able to read, explain and identify the output of (foreign) code" appeared on less than 10% of syllabi. In response to the rapid advance in LLM capabilities, educators are reconsidering their courses' objectives. In the following subsections, we present the respective themes identified in the interview transcripts and relate them to some recent studies in the field. #### 5.2.1. How instructors are changing their learning objectives Several instructors discussed changes in their upcoming course learning objectives. However, given the rapid emergence of LLMs in computing education, in most cases, these changes are not yet reflected in official curricula or course syllabi. Instead, some educators are changing more fine-grained learning objectives in their courses. James Davenport (University of Bath, UK), for example, introduced two new sessions concerning the impact of LLMs on cybersecurity in his class. Even though the official course objectives remain as they were due to their general nature, Davenport has started to teach students how defenders and attackers could take advantage of LLMs. Many educators acknowledge the dynamic nature of technology and anticipate potential adjustments to their learning objectives in the near future. 
Viraj Kumar (Indian Institute of Science, Bengaluru, India) expressed the need for flexibility, recognising that changes might be necessary as technology evolves even further: "_And even now I'm sort of holding my breath because now I'm saying, hey, let's put out these things, but you know, maybe things change._" Statements like this reflect the fast pace of advancing technologies in computing education and educators' openness to adapting their practices. Kumar recently updated their CS1 class of approximately 50 students to include the topic of LLMs' role in code generation. Educators like Ewan Tempero (University of Auckland, New Zealand) emphasise the role of LLMs in automating routine tasks, enabling educators to shift their focus toward nurturing critical thinking skills: "_The more tools that [students] have to support doing the stuff that really isn't that interesting, the more [educators] can focus on the interesting stuff like critical thinking._" In this context, Briana Morrison (University of Virginia, USA) highlighted the importance of using citations with LLM-generated code and teaching students to evaluate LLM output in a critical manner. Even though educators are not teaching students how to write prompts at the University of Virginia, they are "_going to have a statement in the syllabus that using LLMs is allowed. However, we are going to require a reference, like a citation, that says this was generated by an LLM._" Some educators, including Leo Porter (University of California San Diego, USA), note the importance of prompt engineering when using LLMs. Porter introduced new learning goals for his CS1 class, emphasising the non-deterministic aspect of LLMs, prompt engineering, and problem decomposition. Porter feels that crafting effective prompts is a competency students should develop, as they should learn how to interact effectively with LLMs, ensuring that they obtain meaningful and accurate results. As for problem decomposition, Porter said: "_We didn't used to teach problem decomposition_" but felt that this was part of their hidden curriculum that he and his colleagues are pleased to see moving to the forefront. In this context, Michael Kölling (King's College London, UK) adds that LLMs might force us to look at learning outcomes at the program level instead of the course level and that using LLMs should be explicitly taught. This aspect is also emphasised by Kristin Stephens-Martinez (Duke University, USA) who said "_We're going to have to help students understand how to use LLMs -- and you all [the students] need to understand that ChatGPT is fallible, and you need to be very critical of what it's doing._" Rodrigo Duran (Federal Institute of Mato Grosso do Sul, Brazil) explicitly encourages students to test and understand LLM-generated code, enabling them to evaluate if the LLM's answers are correct and to adapt their answers accordingly. A recurring theme among the educators we interviewed was the elevation of code comprehension and critical thinking skills. Jean Mehta (Saint Xavier University, USA) highlighted the need to assess students' ability to read and understand code more thoroughly, a competency that has often been overlooked in the past. Mehta concludes that "_we should have more time to spend on these kinds of things._" It is possible that the impact of LLMs on introductory programming might go beyond changing course objectives and lead to the introduction of entirely new courses, syllabi, and learning objectives.
An example of such a recent development is the "Generative AI" online course by DeepLearning.AI [64].

#### 5.2.2. Preserving core learning outcomes

Computing education has always experienced change over time as new tools and technologies were introduced. Although Copilot may serve as a useful springboard for solving CS1 problems, students still need to dedicate time to learning algorithmic thinking, program comprehension, debugging, and communication skills [179] in order not only to become proficient computing experts but also to use LLMs effectively. A number of interviewees shared this perspective and highlighted the need to preserve several core learning outcomes. Educators who do not alter their learning objectives stress their focus on teaching fundamental programming concepts. The learning objectives of Peter Mawhorter's (Wellesley College, USA) introductory CS1 class remain stable and focus on fundamental concepts. Similarly, Frank Vahid (University of California, Riverside, USA) believes that it is still important to teach students how to define variables, how to use branches or loops to solve problems, how to use functions to keep code modular, and how to use vectors to store data. Thus, students still need to learn to code and practice. In the era of LLMs, Vahid thinks "... _the most pressing thing is making it [assessment] harder... it [Generative AI] takes the work out of homework._" Vahid pointed out that students have for some time been getting help through other means such as Discord forums, Chegg, Stack Overflow, Piazza, etc., noting that "_We've had a leaky roof for a long time. And LLMs are the storm that finally causes us to be flooded._" Similarly, Leo Porter, who recently published a textbook with Daniel Zingaro for teaching introductory programming with the help of LLMs from day one (Zingaro, 2016), emphasised the importance of preserving core learning objectives that focus on teaching fundamental programming concepts. Porter noted that these objectives remain unchanged due to the foundational nature of the skills taught. They conclude that "_... things are staying the same for the Intro class because the skills I'm teaching there are so basic._" Some educators, including Mark Liffiton (Illinois Wesleyan University, USA), view LLMs as valuable tools that can aid students in coding tasks. Liffiton intends to maintain hands-on coding practices while integrating LLMs into Programming Languages and CS2 courses. They focus on building foundational knowledge that complements the capabilities of LLMs: "_I still want the students to be doing that work. I still want them to be practising the things that the tool could do to build the base of knowledge so they can later do the things that the tool can't do._" Another interviewee shares this concern: "_I'm very worried that if everybody forgets how to write code and think through this stuff, we will lose the ability to make new things._" At the same time, educators seem to be aware of the need to prepare students for industry expectations, which is likely to include LLM use. Dan Garcia (UC Berkeley, USA) pursues a similar approach. He recognises the potential of LLMs as educational tools, or aids, but emphasises the importance of teaching programming basics first. Garcia encourages the use of LLMs once students have mastered traditional programming concepts, concluding that "_...
we can't stop teaching kids how to program._" This perspective is shared by Austin Cory Bart (University of Delaware, USA) who is not changing the learning goals in an introductory programming class for about 280 students. Nonetheless, they expect adjustments in subsequent courses as LLMs become more integrated into programming practices: "_But I look at almost every single course after mine as, Oh, yeah, that's probably going to need to change learning objectives._" To conclude, computing educators express the need to balance between leveraging LLMs for problem-solving while ensuring that students continue to develop competencies in algorithmic thinking, program comprehension, and debugging. Educators vary in their approaches, with some maintaining a focus on teaching fundamental programming concepts, while others see LLMs as complementary tools to enhance learning. The preservation of core learning objectives, particularly those related to basic programming, remains a consistent concern among educators. #### 5.2.3. Towards conversational computing If computational thinking is the learning goal of a non-majors course, then using an LLM-based tool such as Github Copilot may be a useful approach as advocated by Denny et al. (2017). For students in non-computing majors who currently only take one (or just a few) programming course(s) to learn enough to write simple programs, it may be that using an LLM tool is all that is needed, making a programming-specific course unnecessary (Zingaro, 2016). It may even be more effective than typical courses at introducing these students to programming and may also broaden participation in CS courses in the future. Michael Caspersen (It-vest & Aarhus University, Denmark) believes that LLMs are forcing us to rethink what we actually teach our students, and encourages us to view them as an opportunity. According to Caspersen, LLMs do not add something qualitatively new, but quantitatively indeed! They emphasise issues that have always been present. Hence, LLMs may even contribute to increasing the quality of computing education, thereby making it more attractive to a broader range of students. ### Course activities LLMs can be useful in generating several kinds of learning activities including novel variations of programming assignments (Zingaro, 2016). However, this may introduce problems such as ensuring that students have been provided appropriate content in order to understand novel variations. LLMs can also generate good explanations of code (Zingaro, 2016; Zingaro, 2016; Zingaro, 2016; Zingaro, 2016). This provides a mechanism for novices struggling to understand the run-time behaviour of novel problems to get auto-generated and hopefully helpful explanations. This use-case exemplifies the potential for LLMs to ease the burden often felt by teaching assistants, and could be a first line of help for students (Zingaro, 2016). Another type of learning activity that can be provided by LLMs occurs in settings where students use them to help generate code solutions. Here, students can use LLMs in an iterative improvement loop. Students can continually alter prompts to refine the model output helping them to 'build-up' a solution (Zingaro, 2016). In their interview Mark Liffiton took this concept one step further stating "_So they're great educational tools because they can give something akin to one-on-one tutoring... 
and I think there's a ton of value in there._" This particular use-case of LLMs is less about what LLMs can produce than about what they can do for students (and educators), and aligns with what the Artificial Intelligence in Education (AIEd) community has been discussing for years (Zingaro, 2016). Liffiton has also developed a tool called CodeHelp (Zingaro, 2016), built on LLMs, that he is going to use with his students. It can do some of what ChatGPT can do, but specifically does not solve the problem and does not give students complete solutions - a kind of "sanctioned" access to the educational power of LLMs, which could combat the concern of students becoming over-reliant on them and not actually learning from them. Liffiton sees this as adding to the course learning objectives, which now include working with this new tool and therefore with LLMs. Similarly, Frank Vahid sees LLMs as tutors that will be available around the clock, and importantly, as tutors that will not judge students, stating "_I'm very hopeful that it will become another TA for the class._" An important aspect of teaching is using carefully crafted examples to illustrate salient points. If the goal is to use a _real_ or _running_ example, it can be tedious to have to deal with the many irrelevant (to the example at hand) aspects of a problem in order to ensure the example compiles and runs, in addition to keeping focus on the point desired. LLMs can help instructors with the tedium of such minutiae (Zingaro, 2016; Zingaro, 2016). Clearly, many of the student-initiated possibilities discussed above come with academic integrity concerns we discuss in Section 7. Several interviewees were aware of this, stressing the need for educators to emphasise the importance of students doing their own work. Perhaps the most extreme example of how Generative AI might be used in the introductory programming course comes by way of Dan Zingaro and Leo Porter's new textbook (Zingaro and Porter, 2018). The book begins by introducing students to the GitHub Copilot plugin within the Visual Studio Code IDE before students have learned to write a single line of Python code. Students create their first programs by typing English comments and letting Copilot generate the code. This corresponds to the _sketch model_ from Alves and Cipriano's Centaur Programmer (Alves and Cipriano, 2018), where the programmer generates the outline of the solution and the AI fills in the gaps. As Zingaro and Porter explain each line of LLM-generated code, they use the opportunity to teach the related Python syntax and programming concepts. But before they do this, they introduce functions and use this to motivate top-down design. Porter will be teaching 700 students using this approach in the upcoming semester (September 2023).

#### 5.3.1. In-class Activities

LLM-based tools have motivated some changes to activities that specifically happen during classes. Denny et al. (2019) found that Copilot's performance is substantially improved when it is prompted with individual problem-solving steps in natural language, and encouraged explicitly teaching prompt engineering to students. David H. Smith IV (University of Illinois, Urbana-Champaign, USA) will be using a tool and specific exercises for students to practice LLM prompts so they can use LLMs effectively. Leo Porter also stated that they will be adding a learning goal on prompt engineering as discussed earlier.
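To make this comment-first, prompt-driven workflow concrete, the following is a minimal, hypothetical sketch in Python (it is not taken from the textbook or from any interviewee's materials, and the task and function names are invented for illustration). The student decomposes a problem into natural-language steps written as comments, a tool such as Copilot suggests code for each step, and the student then verifies the suggestion with a small hand-worked test.

```python
# Hypothetical illustration of a comment-first exercise: the student writes each
# problem-solving step as a comment (the "prompt"), reviews the code an assistant
# such as Copilot might suggest for that step, and checks it before moving on.

# Step 1: read exam scores from a file, one integer per line
def read_scores(path):
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

# Step 2: drop the single lowest score and average the rest
def average_dropping_lowest(scores):
    if len(scores) < 2:
        raise ValueError("need at least two scores")
    return sum(sorted(scores)[1:]) / (len(scores) - 1)

# Step 3: the student verifies the suggestion against an example worked by hand
if __name__ == "__main__":
    assert average_dropping_lowest([50, 80, 90]) == 85.0
    print("suggested code passes the student's check")
```

The value of an exercise like this lies less in the generated code itself than in the decomposition into steps and the critical review of the output, which are the competencies several interviewees emphasised.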
Although not planning on getting into prompt engineering, Briana Morrison is going to do more live coding with LLMs in the classroom, including analysing why certain code is wrong. Viraj Kumar has also used Generative AI for live coding. Jeremie Lumbroso (University of Pennsylvania, USA) created guided sessions using LLMs and shows examples of them giving incorrect answers. Similarly, Kristin Stephens-Martinez told us that they plan to work out examples in advance of using prompts that demonstrate a hallucinated answer or other technically-incorrect response to help students see for themselves that LLMs are not oracles. Austin Cory Bart is going to demonstrate the use of LLMs in class for three reasons: First, because LLMs are a great way to introduce AI topics, also providing a way to bring advanced material into the introductory course so students can look forward to what is coming later; Second, because Bart feels that if it is not discussed with students they'll just use it anyway, but in a more misguided manner. Finally, Bart stated: "_At some point we have to start incorporating these tools... programmers are going to be putting these tools into the workflow. And I see that on my own... when I'm coding this summer having co-pilot auto-complete an entire function that I just started writing, that's too much of a game changer for it not to be addressed._" Interestingly, Leo Porter is the only interviewee who mentioned pair programming directly. Porter plans on continuing to use pair programming and peer instruction in class. Dan Garcia noted that LLMs should be used as nudgers, hint-givers or help-givers, but not oracles - seeming to match the desired role of a human pair programmer, and Michael Caspersen noted that LLMs should be integrated into peer-to-peer learning. Others in the literature have speculated on what may come of pair programming in light of LLMs. For instance, Dakhel et al. noted that when used by experts, Copilot can be an asset as its suggestions could be 'comparable to humans' in terms of quality. However, it could become a liability when used by novices who may fail to filter its buggy or non-optimal solutions due to their lack of expertise (Denny et al., 2019). Wermelinger stated that it is not surprising that GitHub dubs Copilot as 'your AI pair programmer', even though the interaction is far more limited than with a human - notably, Copilot does not provide a rationale for its suggestions (Zingaro and Porter, 2019). However, Frank Vahid did note a positive aspect of how many Generative AI tools present their output: LLMs generally do not judge the student. In the same way that instructors expect students to learn to use an IDE on their own time, some instructors have indicated that they will not be dedicating classroom time to explicitly teach students how to use LLMs. Rodrigo Duran is not actively encouraging LLM use but at the same time is not actively discouraging their use. Dan Garcia does not plan on incorporating LLMs into their course, at least for the time being, choosing to keep their 'ear to the ground' and see what others are doing, and stressing that the fundamentals are important: _"Do we stop teaching long division now that we have calculators? Do we let students use calculators when they are learning long division? 
No and no."_ Peter Mawhorter does not plan to allow LLM use in their course out of fear of harming particular students, expecting that a small percentage of students - likely those who are marginalised in other ways - will have bad experiences due to biased, possibly sexist or racist output. It is also possible that these tools will give different output to different students, for instance based on the student's name if provided in starter code or a prompt. #### 5.3.2. Unsupervised Activities Student use of LLMs in unsupervised learning activities, self-assessments, and other, self-directed scenarios have also been the subject of research. MacNeil et al. (2017), for example, used GPT-3 and Codex to generate three types of code explanations (line-by-line explanations, lists of important concepts, and high-level summaries) from code snippets from an instructor-developed web software development e-book. They found that Codex generated less helpful and more verbose explanations than GPT-3. Moreover, Codex included code in the explanations, even though it was not desired. Students rated these explanations on a 5-point Likert scale, confirming that explanations matched the code and were useful for learning in general, whereas line-by-line explanations were rated as least helpful. An exploration of the potential of LLMs to generate formative programming feedback (Mikolov et al., 2017) suggests that ChatGPT performs reasonably well for some introductory programming tasks and student errors indicating that students can potentially benefit in unsupervised learning scenarios. However, it is possible that it will fall on educators to provide guidance on how to use the generated feedback, as it can contain misleading information for novices. Several of the approaches discussed in the prior section are aimed at mitigating this. Interviewees mentioned unsupervised activities less than in-class activities in general. This could be at least partially due to the fact that this is the first academic year where Generative AI could be considered mainstream and unsupervised activities are more difficult to plan and hypothesise about. Most of the discussion around unsupervised activities centred around ensuring that students are doing their own work (even if using Generative AI as a tool for help) and that students understand the code that these tools produce. Frank Vahid believes that this could lead to increased instructor emphasis on tools that analyse student behaviour. In a high school introductory programming course, Christian Tomaschitz (TU Wien, Austria) had half of his students use ChatGPT and the other half use an e-book. ChatGPT was not introduced to students and students registered OpenAI accounts with their school e-mail addresses. All students had the same exam which was half open-book (including internet access), half closed-book. Tomaschitz noticed that the ChatGPT students had the possibility to interact (with ChatGPT), and these students were faster solving the problems, but it seemed as if they did not critically reflect the output, and understood less. Tomaschitz was concerned that the ChatGPT group was over-reliant on the tool and did not understand the output as well as the e-book group. Viraj Kumar used GitHub Copilot during class for live coding and told students they were free to use it, or any other Generative AI for coding. Students were polled after the course and asked if they were using ChatGPT. Most said that they were not. 
The consensus explanation seemed to be that they wanted to learn programming themselves and their thinking seemed to be that when they go for technical job interviews they will be expected to code without assistance. Several interviewees including Leo Porter and Michael Caspersen believe that LLMs will cause educators to focus more on code reading and less on writing from scratch. Caspersen noted that students read more than they write when learning to read natural language, but to-date this has not been the traditional approach when it comes to programming. Further, LLMs could support a 'use, modify, create' approach for programming. Briana Morrison envisions more exercises about why certain code is wrong and fewer on writing code from scratch. Kristin Stephens-Martinez mentioned that they are concerned about LLM use by novices pushing more metacognitive practices earlier in the curriculum, perhaps before students are ready for this. For instance, students will have to ask themselves 'am I going to take this shortcut or not?'. Getting students to recognise when it is a shortcut versus when it actually is not a shortcut is something that novices typically are not in a position to assess. ### Assessment Although we previously discussed activities that could have been assessed, in this section we focus specifically on formal assessment. Research on LLMs has demonstrated their ability to answer typical CS1 assessments that involve writing simple functions (Sandel, 2012; Sandel, 2012) as well as more advanced material. Similarly, it has been shown that they perform in the upper quartile of real students on CS1 exams (Kristin, 2012). It has also been shown that they perform just as well on CS2 exams as CS1 exams (Kristin, 2012) suggesting that LLMs may soon be capable of effectively solving even more advanced problems. Indeed, recent results with GPT-4 suggest that it can solve most exercises in introductory programming courses (Sandel, 2012), which is also supported by our benchmarking work presented in Section 8. However, evidence also shows that LLMs do not perform as effectively on computational thinking problems that do not involve code writing (Kristin, 2012). Similarly while LLMs can often correctly answer more than half of coding-based multiple-choice problems, they answer double-digit percentages of multiple-choice problems incorrectly, leading to the hypothesis that either a combination of natural language and a code snippet, and/or chain-of-reasoning steps pose a challenge for LLMs (Sandel, 2012). It might be tempting to think that even though LLMs can do well on relatively simple well-specified functions, longer and more complex problems are beyond the capabilities of LLMs - although LLMs have been shown to perform well on more advanced (coding competition) problems (Sandel, 2012). #### 5.4.1. Exams Some instructors are moving towards invigilated exams and assessments, and these may be worth more marks. For example, one interviewee changed the grade weighting of their programming assignments from 50% of the course grade to 0% of the course grade. They added an assessment category, "coding interviews", which ensures that students are not using LLM tools as part of the assessment. Several educators mentioned oral exams. Jean Mehta plans to use 20-30 minute one-on-one oral exams at the end of every section. 
This is made possible by a combination of a flipped classroom with many videos and book/autograder technology such that there is less need for traditional lectures covering the material. Michael Kolling said, "_I think there needs to be at least some assessment that includes an oral element because you know, there is at the moment, the problem is that submission of written work, whether it's text or programs, is taken as a proxy for intellectual achievement, right? And what we actually want to assess is intellectual achievement. We want to see some, you know, sort of intellectual work having happened there and we take the written work as evidence of that. And you know, these tools have removed that connection... The creation of written work is no longer evidence of intellectual work having happened._" Similar to personalising assignments, on open-ended writing assignments, Ewan Tempero requires more specific answers directly related to course materials: "_And it's not that we used strange terms or unusual terms, but we used specific terms in a particular way. And so, we expected answers to reflect that, to demonstrate that, yeah, they did actually understand what the course material was._"

#### 5.4.2. Homework

There may be less summative unsupervised work because instructors no longer trust that unsupervised work is the students' own. This may be accompanied by a tendency to not grade code assignments going forward. Along with increasing the weight of exams, many teachers are devaluing "homework" assignments altogether. Mark Liffiton said, "_I will sort of be operating in this assumption that some students may end up just getting a tool to do the work. And thus, I don't want to be putting too many points on that and giving an unfair advantage in those cases._" One instructor decreased the grade weighting of their programming assignments significantly and added this text to the assessment description: "_As the programming assignments are intended primarily for practice and learning, your program does not have to be fully correct to receive credit. The final evaluation of whether you have learned from your programming assignments is in your ability to solve problems on quizzes and the exam._" Some teachers no longer require "writing assignments" in the traditional sense. Jan Schneider (Goethe Universität, Germany) has a course that used to include writing a scientific paper, but it has been changed to allow students to produce it using LLMs. "_I mean, the main objective is to help people to start thinking in a more scientific way. That's the overall objective... I will not ask them to develop a mini paper by scratch where they need to write everything._"

#### 5.4.3. Process over product

Rather than assessing final solutions as products, some educators are increasingly focusing on assessing learning processes (which is common in other disciplines, e.g., teacher education) such as: submit-in-stages; solution reflection / commentary / critique; interviews; portfolios or learning journals; and presentations. Taking the approach of having students focus on the learning process through a diary or journal, Christian Tomaschitz replaced several previous exercises with reflection assignments in a diary for students to document their learning process. In relation to more open-ended assignments, Leo Porter said, "... _gone are the days of us really just completely describing the exact behaviour of the functions...
And then it's going to be a lot more work for us to grade because we're going to have to now look at the code or the PDFs of the code. Or look at the video of them showing how the code works._" This approach also shifts focus from memorising knowledge (or, in a rote manner, the process that leads to the product without questioning the process) to application of skill and critical thinking. Jeremie Lumbroso applies this shift to his Discrete Mathematics course, explaining that "_the idea is to focus less on the product and to focus more on process, on making sure that the students are able to explain what they are doing._" Some educators have already started using LLMs as part of the learning process. Briana Morrison explains a possible assignment: "_Here's the problem. Here's the prompt we gave ChatGPT, or Copilot, or whatever. Here's the output it gave us. It's wrong. Tell us why it's wrong._"

Several educators worried that some classes do not lend themselves to the approaches mentioned above. For example, elementary theory courses are not really amenable to "personalising" a proof. Michael Caspersen, on the importance of basic competencies even in the face of powerful LLMs, stated "_And that means that if you add a good programmer to large language models, you get two good programmers. Basically, that's the equation, right? If you add a mediocre programmer to large language models, you get just large language models, so, from that perspective... if you want to really be able to amplify the capabilities of humans, we need to make sure that the basic competencies remain._" Frank Vahid stated: "_The biggest challenge is cheating... you've gotta learn... you've got to work to learn. You can't just let tools do your work for you... and this is true beyond computer science. In English, you've got to learn to write. Even though ChatGPT can do most of your writing for you... you've got to learn to write... that's how you think. That's why you're valuable as a human to a company... and so the same with computer science. So somehow, we've got to get that message out. It's just so tempting for students to save so much time._"

### Institutional initiatives, policy, and context

Although some institutions (at least initially) have universally banned the use of LLM tools in student work (Lichtenberg et al., 2017), others are starting to embrace them. There is little doubt that this will lead to an array of policies and initiatives that may be at the university, faculty, or class levels. Given that public awareness about LLMs occurred mid-academic year for many institutions (almost universally for North America and Europe, in addition to many other parts of the world), this September marks the first academic year where LLMs are a nearly ubiquitous topic. Given that institutions are typically slow to react mid-year - if they react at all - we have yet to arrive at a steady-state in terms of institutional initiatives. Nonetheless, they are beginning to emerge as educators start thinking about the coming academic year. Peter Mawhorter's local policy is that LLMs are not allowed in class. In delivering this message to students, Mawhorter aims to have a discussion with students on why that policy exists. On the other side of the coin, Leo Porter is embracing LLMs and is aiming to use them, for instance, in quizzes.
However, Porter's institutional challenge is finding and organising enough computer-based testing facilities - an example of how LLMs can have knock-on effects that affect not only the course in question, but others, via resource allocation and timetabling. Michael Caspersen set out a middle ground, starting with basic competencies and gradually building in the use of LLMs, letting higher levels of the SOLO taxonomy (Vahid, 2017) come into focus. Sven Strickroth (LMU Munich, Germany) noted another challenge that is caused by institutional policy - significant changes to learning objectives are not easily possible due to the local accreditation cycle, which dictates that this occurs only every five years.

### Other challenges and opportunities

At the end of the interviews, we asked interviewees for their views on the challenges and opportunities they foresee in terms of the effects that LLMs will present in the introductory programming course. While some of these have already been discussed, several have not - some of these are presented here.

#### 5.6.1. Challenges

1. Jan Schneider noted that everyone (including LLMs) has a lot of blind spots, and these can be found by writing - either in code or in natural language. When other people or an LLM try to understand what we wrote, that is when blind spots appear, often in the form of "_whoa, there's a blind spot... something I didn't know that I was missing... and I have a big fear that if we start using these large language models, we will never acknowledge our blind spots. And we will miss a lot in learning and developing._"
2. Mark Liffiton mentioned that it is a challenge now to make sure students are learning things that they could just have a tool do faster, and raised the concern of over-reliance - not learning the things they would have if they did not use the tool to do it for them - noting that it is like cheating in a way: not learning the things you would if you had done the work yourself.
3. Frank Vahid mentioned that "you need to think". A human is only useful because they are clever, and thoughtful, and intelligent. That is why we teach them programming - because it is a way to help them learn to solve problems - just as we should still know how to do arithmetic despite the existence of calculators.
4. Michael Caspersen fears that LLMs will enable disciplined students to become better, but for undisciplined students who are seeking the easy way out - danger! Caspersen noted that it should not be considered the student's fault for taking the easy way out. It should be turned into a challenge for educators to come up with assessment systems that do not have an easy way out. Caspersen also noted that in many ways LLMs do not add something qualitatively new, but emphasise issues that we have been dealing with for a long time. This was corroborated by Michael Kolling, who stated that it is the scale of these issues exploding that is novel, for instance the illusion of achievement (which is nothing new) and what that could do to learning at scale. Kolling is also concerned with intellectual laziness, noting that learning is a struggle - learning only happens when you intellectually struggle with something, and if LLMs offer an easy way out, then is learning happening? Kolling also mentioned cheating as a challenge, but that this is obvious, boring, and solvable.

#### 5.6.2. Opportunities

1. Michael Caspersen sees a big opportunity in terms of rethinking what we are doing.
_"We should think deeply about what we actually teaching our students - and LLMs are doing that... [LLMs] radically challenge our reflections on what to teach, what to assess, how to assess. So that's a great opportunity."_ 2. Michael Kolling sees individualised learning in terms of progress, interests, feedback, and help as a big opportunity, noting that humans saw similar issues with books and the printing press. There was concern that people would not need to remember anything any more. However, we lived. _"In fact books made things better, right?"_ 3. Dan Garcia sees potential in scaling support which benefits educators, institutions and students, positing _"Imagine an LLM that could examine every exam script and where mistakes were made give a whole concept map of what went wrong in the notional machine and where and how I can try this for a few students but I have over 1,000. This could help scale support for everyone."_ ### Discussion Above we have discussed issues raised by the expert instructors that we interviewed. In addition to those, we would like to discuss a certainly non-exhaustive set of issues we believe are important for instructors and CS program designers to consider presently. The potential biases reinforced by models trained on large datasets (Santos et al., 2017) are concerning. This could be especially important for instructors who use LLMs to create course materials. Another issue is the presence of hidden or implicit learning objectives in existing computing programs - something Leo Porter described regarding problem decomposition in Section 5.2.1. There is little doubt that there are other such hidden learning objectives spanning not only knowledge, and skills, such as reading and tracing code but also dispositions, inter- and intrapersonal competencies, and other aspects relating to the whole person (Kolmogorov, 1999) which the emergence of LLMs might bring to the fore. For example, many programs expect their graduates to be comfortable developing large programs using (new) debugging tools, work in a self-directed manner but also perform well in teams, and overcome challenges to pursue their goals long-term. But curricula often do not explicitly include these program-level outcomes in the learning outcomes for a particular course (Santos et al., 2017; Kling et al., 2018). The same is true for course activities and assessments. As individual instructors respond to the landscape changes induced by LLMs, it becomes more important than ever to consider and implement the constructive alignment (Santos et al., 2017) of individual course activities and desired program-level outcomes. This is not a new concern for educators, but one that has been amplified by the rapid change in teaching and assessment settings we now see happening (or about to happen). Related to this is the potential for LLMs to change the workplaces into which we are graduating students. While a number of researchers have looked at how current developers may use LLM-based tools (Santos et al., 2017; Santos et al., 2017) and how LLM-tools may be incorporated into professional software development tools (Santos et al., 2017), the participants in these studies have been programmers who learned to code initially without using LLMs. Although professional developers may benefit from AI's human-quality suggestions, novice developers lack the expertise to recognise and understand buggy or non-optimal solutions (Santos et al., 2018). 
The use of AI tools could become a liability if inexperienced developers fail to remove or correct the tool's incorrect suggestions (Santos et al., 2018). Potentially more dangerous is the fact that students can now execute code that they do not understand, yet which they designed through natural language prompts. It remains to be seen whether students who have LLMs available from the start will have the discipline and dispositions to develop programming competencies deeply, or whether this will even matter. Reminiscent of the 1990s discussions of objects-first or objects-later, the opportunity to focus first on top-down design but have working code for interesting problems completed by the AI is not universally recognised as a positive development.

LLMs also impact peripheral and applied computing fields. For example, it was shown that LLMs can successfully solve 97% of the programming problems in a bioinformatics course (Kolmogorov, 1999). The authors conclude that the models perform so well that bioinformatics students in the near future may no longer need to know how to write (and likely not understand) code. As a consequence, several questions arise on the effects of LLMs not only on other applied areas of computing (e.g., data science; digital forensics; security; and games development) but also on programming as one of the core tiers of every computing degree. Related to that are questions about how the introduction of LLMs might affect participation in the computing field. Perhaps reducing the focus on syntax will make the field more attractive to traditionally under-represented audiences and increase retention rates. Additionally, the influence of media coverage of computing topics is known to be a large factor in the decisions pre-university students make in terms of what courses to pursue at university. It remains to be seen what effects the intense media hype surrounding LLMs will have on future computing intakes.

We anticipate more changes in learning objectives, course contents, learning activities, and assessments, which will, in turn, affect whom we teach and why in the (near) future. This might go hand in hand with long overdue changes in computing's signature pedagogy (Krishnan et al., 2017), and its implementation on the surface, deep, and implicit dimensions. For decades, computing education researchers have presented excellent research on how to better teach our discipline, yet much of this has not made it into practice. We believe that the emergence of LLMs may finally force much-needed (and long ago intended but not yet fully implemented) change.

**Advice for educators:**

* Acknowledge the existence of LLMs with your classes regardless of if you embrace them or do not allow their use.
* Make clear and discuss institutional and class policy, what it allows, what it does not allow, and why it is that way.
* Assume that students are using LLMs even when not permitted.
* Do not underestimate the ability of LLMs to produce solutions to your activities (which may be indistinguishable from student-generated solutions).
* Consider using an LLM tool to help in generating course materials. If you do this, be aware of possible bias in the output.
* Reconsider your learning objectives in terms of their relevance to preparing those students who are aiming for careers in the software development industry (which is increasingly making use of LLMs in day-to-day work).
* Reconsider your learning objectives (e.g., reading and understanding code), learning activities, and assessments to assure your courses remain constructively aligned.
* Interrogate your learning objectives and ask what might be hidden or implicit and for which LLMs might provide a vehicle for more focus. Correspondingly, interrogate your learning outcomes and ask which might be over-emphasised (e.g., code writing) and might need to be balanced with those that LLMs bring to the fore.
* Consider using LLMs in your course, if only to provide a chance for students to receive more feedback and practice independently, provided they are equipped to interpret LLM output in a way that facilitates learning.

### Limitations and threats to validity

We used three sources to build a list of educators to invite for interviews: the SIGCSE mailing list (via a reply to a message about LLMs), the opportunity for those responding to the instructor survey to volunteer to interview, and the authors' own networks. Perhaps as a result of this, our geographic representation is skewed. As expected, the United States forms the bulk (55%) of responses. Although our interview pool spanned five continents, we had no interviewees from Africa and only one interviewee each from Asia (India), Oceania (New Zealand), and South America (Brazil). Unfortunately, only 14% (3) of the 22 identified as women. Additionally, the interviews were semi-structured; although this is a common approach, it can impose interviewer bias, although it is also designed to result in a coherent set of interviews that focus on common topics while still allowing interviewees to express their own views and experiences. Finally, we did not attempt to verify claims that interviewees made about their classes, departments, or institutions, taking interviewee statements at face value.

## 6. Ethics

Ethics of algorithms (including AI systems such as LLMs) is concerned with the societal context around algorithmic systems and how these systems affect both individuals and society. Our focus must therefore rest on aspects such as the provenance and quality of training data, the usage of AI systems, and associated costs (in the widest sense), rather than on how to "integrate ethics into the system" (Mittelstadt et al., 2017). Mittelstadt et al. (2018) point out that algorithmic systems tend to be large, complex and highly modularised, which makes it difficult in general to assign responsibilities. Hence, with an increasingly opaque training procedure of LLMs and a lack of clear responsibility in terms of authorship (Krishnan et al., 2017), the use of large language models naturally raises a number of concerns pertaining to academic integrity. Furthermore, using a large language model has been shown to affect the user's own opinions (Mittelstadt et al., 2018). While a full discussion of the ethics of large language models in computing education is beyond the scope of this paper, we would like to highlight three crucial aspects: the role and stance of us as professionals in computing education, the policies brought forward by academic institutions, and the question of academic integrity in the context of large language models (which we will discuss in Section 7).

### Ethics in LLM literature

In a study on the values encoded in the ML research literature, Birhane et al. identified a number of ethical values (Birhane et al., 2017), from which the papers would draw motivation.
In a total of 100 highly cited papers, the study found that _performance_ was clearly the most frequent value and pointed out that performance is not a neutral term but comes with ethical implications. For instance, performance is typically measured with respect to specific benchmarks and data sets, thereby introducing bias--particularly when the dataset is thought to represent the "real world". A full study of all values and their ethical implications found in the computing education literature is beyond the scope of this paper. However, we extracted explicitly stated motivations from the papers in our literature review as a first approximation to a full extraction and coding of values. In line with Birhane et al., we observed that _performance_, _generalisation_, _efficiency_, _novelty_, and _scalability_ were often mentioned. Many papers cited the "impressive" or "human-level" performance of current large language models. In contrast to what Birhane et al. report, the focus on performance in our dataset seems to be secondary - a means to highlight the timeliness of the research. This is particularly the case since the papers in our study did not propose performance improvements, but rather built on available performance. Furthermore, while performance in the ML research community typically relates to specific (and often well-known) benchmarks, we found that performance in the papers we reviewed usually referred either to the LLMs' ability to solve exercises and assignments or pass exams, or to the LLMs' production of teaching materials, such as exercises. With this focus on the application of large language models rather than the design of a new AI system, we also found that several papers pointed out the potential to "save time" or help instructors cope with growing class sizes. While these motivators may be understood as scalability issues, we believe that there is a difference in how this term would be understood in the papers surveyed by Birhane et al. Despite ostensible similarities concerning the underlying values encoded in the research literature, there might be some notable differences. The free availability of large language models (i.e., that students can access LLMs at no cost) was a recurring theme in our surveyed literature. We would like to highlight this notion as an example of an ethically problematic assumption for two reasons. On the one hand, availability at "no cost" is often inaccurate because the costs might be hidden and paid, e.g., through provision of private data. On the other hand, ChatGPT offers a range of models with different performance characteristics, not all of which are free. Some students might therefore have access to more powerful tools than others. Investigating the assumptions, beliefs, and values held by the computing education community itself might therefore be well warranted. We call on the community to do so in future work.

### Code of ethics implications

The IEEE Code of Ethics (Birhane et al., 2015) comprises three sections. The first focuses on ethical standards, behaviour and conduct. The second section focuses on ethical treatment of other people, and the third focuses on compliance. The AAAI Code of Professional Ethics and Conduct (Birhane et al., 2015) was adapted from the ACM Code of Ethics and has the same structure. The ACM Code of Ethics (Birhane et al., 2015) comprises four sections. The first section outlines the fundamental ethical principles that all computing professionals should use to guide thinking.
Section 2 describes the ethical responsibilities of computing professionals, section 3 covers ethical leadership, and section 4 focuses on compliance. In the following sections, we use the ACM general ethical principles to frame the discussion of ethical issues raised by the use of large language models in computing education. #### ACM General Ethical Principles In this section we review policies from major universities around the world on the use of large language models in education. As of this writing, many universities do not currently have an official policy publicly available online. Instead, many universities are still presently working through the policy implications of large language models through task forces and other initiatives, such as at the University of Virginia (Birhane et al., 2015), which is a top-ranked school for computer science in the USA. See Figure 2 for responses to the question "The policies at my university are clear regarding what is allowed and what is not allowed in terms of using GenAI tools", which illustrates this point. To structure our review, we consider the first part of the ACM General Ethical Principles. Parts two through four of these principles were too specific for most university policies and therefore not as relevant to the present work. Most universities did not explicitly mention coding in their policies, with the exception of Yale University (Birhane et al., 2015) and University of Adelaide (Birhane et al., 2015). To select universities for consideration of their LLM policies, we first found popular rankings of universities worldwide for computer science programs. Next, we examined policies at these universities, specifically those from Canada, USA, UK, and Australia: University of Toronto (Birhane et al., 2015), Duke University (Birhane et al., 2015), Yale University (Birhane et al., 2015), Massachusetts Institute of Technology (MIT) (Birhane et al., 2015), University of California Los Angeles (UCLA) (Birhane et al., 2015), University of Adelaide (Birhane et al., 2015), Monash University (Birhane et al., 2015), and Oxford Brookes University (Birhane et al., 2015). We do not consider this a systematic attempt, nor would that be presently possible, given the conditions described above. Instead, this represents a purposeful sampling of top-ranked universities around the world to get an overall picture of how universities are responding to the appearance of LLMs. The ACM General Ethical Principles are divided into the following sections: _Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing._ Only MIT suggested that the arrival of LLMs into education is an opportunity to think about student academic well-being (Birhane et al., 2015). No universities sampled contained anything about general human well-being or the idea that we are all stakeholders in computing, or by extension, education or society in general. It seems the ideas in these policies are limited to the specific implications of using LLMs, rather than the general. We find this to be an unfortunate oversight because institutions can help guide students' concerns beyond themselves and their immediate circumstances to the broader collective human project of the pursuit of knowledge. #### Avoid harm Use of LLMs can lead to poor outcomes due to over-reliance, ease of breaching academic integrity, and use of incorrect information leading to poor products. 
The universities we sampled all drew attention to the potential to create harm to others, institutions, and even the students themselves. Harm to others could come in the form of using the work of others without citation (Birhane et al., 2015). Harm to institutions can come in the form of students using LLM tools to generate work that is incorrect, offensive, or otherwise inappropriate and therefore dragging the university into a potential altercation. It can also call into question the legitimacy of the learning outcomes and verification of them, which can jeopardise accreditation and institutional reputation or the community's trust in that institution. Universities seemed most interested in helping students to understand the potential harm that students could incur upon themselves through inappropriate use of AI resources. Iserman writes that "plagiarism isn't a bad thing simply because it's an act of intellectual theft -- although it is that. It's a bad thing because it takes the place of and prevents learning." (Birhane et al., 2015). One of the most significant concerns is the ease with which LLMs can produce the output that we ask students to produce. As educators, we are not interested in the output _per se_ - rather we want students to engage in activities and processes that result in learning. Using a tool to produce the required output circumvents that learning, and deprives students of the opportunity to learn (Birhane et al., 2015). Many of the policies we sampled mentioned that students can prevent the growth of critical thinking skills and competencies in crucial areas by taking shortcuts through inappropriate use of these models (Birhane et al., 2015; Birhane et al., 2015; Birhane et al., 2015; Birhane et al., 2015). This is particularly important when writing code (Birhane et al., 2015). Not only does this harm their short-term ability to pass the course, but it robs them of their long-term ability to master the subject matter and preparedness for future work. Finally, harm could come from inadequate understanding or preparation to self, others, and institutions in courses where safety concerns are paramount, such as a chemistry lab. Skipping crucial learning could harm everyone involved (Krishnan, 2017). While most computing courses do not share similar safety concerns, it is possible in industry that inadequate learning or over-reliance on these tools could expose self, others, and institutions to harm, such as writing malformed code for self-driving cars.

_Be honest and trustworthy._ Most universities that we sampled wanted to make it clear that LLM tools are not reliable and can produce incorrect or fake results. Duke warned faculty and students that LLM "output is only as good as its input" (Krishnan, 2017). Others warned of the now well-known phenomenon of LLM "hallucination" where they will reply with incorrect data when not enough is available (Bowman, 2017) or create sources and facts even when enough data is available (Bowman, 2017; Krishnan, 2017; Krishnan, 2017). It could also be that the data is incorrect simply because it is out of date (Krishnan, 2017), since many LLMs do not have access to data created after they were trained. It is clear that universities are thinking of honesty and trustworthiness in terms of the output of the tool, and when evaluated this way LLMs lack credibility.

_Be fair and take action not to discriminate._ Almost all university policies and guidelines made it a priority to discuss biases that exist in AI in general and LLMs in particular.
Duke, UCLA, and Oxford Brookes pointed out that the models themselves often discriminate based on their training data, which means they receive all the stereotypes and misinformation from whatever data they are based on (Bowman, 2017; Bowman, 2017; Krishnan, 2017). Monash University warned faculty and students that LLMs may take data out of context, cannot predict future events with any amount of accuracy, and could present certain sensitive data inappropriately (Krishnan, 2017). Finally, MIT noted that fairness could be at risk based on who has access to the models (Krishnan, 2017). While access to several popular models is currently free, this may not always be the case, and therefore requiring students to use them could become discriminatory in the future.

_Respect the work required to produce new ideas, inventions, creative works, and computing artefacts._ University policies seemed to mostly skip this criterion. However, there were a few statements related enough to discuss here. For instance, Duke emphasised that creative writing or coding meant doing the hard work of developing each person's specific voice. Using AI tools could undercut that endeavour and cheat the student of their ability to develop innovative ideas, computing or otherwise (Krishnan, 2017). This could be because, as Adelaide noted, AI tools lack originality and common sense (Krishnan, 2017). Related to new ideas and innovation, UCLA noted in their policy that AI tools will often reflect outdated information that may fail to represent the progress of social movements since the training of the model was completed (Bowman, 2017). If certain sets of rights were not legal when the model was trained, one should not expect the model to suggest that they should be.

_Respect privacy._ The lack of control over these tools, as well as what data they keep about their users, was mentioned in most policies. AI tools could invade users' privacy (Krishnan, 2017; Krishnan, 2017), violate FERPA (a student privacy protection law in the United States) if student records are handed to them (Bowman, 2017), do not assume that users are at least 18 years old (Krishnan, 2017), and are not bound by university ethics rules and policies. Furthermore, AI tools will often take user data and use it to train their models, whether users want that or not. Monash encouraged students and faculty to consider that their data could be stored by the model and used in other contexts (Krishnan, 2017).

_Honor confidentiality._ Ethical issues identified by universities included threats to confidentiality, though each one took a slightly different approach. Creating an account and using AI tools could bother students who are worried about the models stealing their intellectual property and may therefore be unwilling to use them (Bowman, 2017). The tools also do not respect the confidentiality of the people from whom they have taken the data to train the models, which could result in unintentional plagiarism for both students and faculty (Krishnan, 2017). Finally, these models do not respect confidentiality with regard to legal issues, and data sent to them may be turned over to law enforcement agencies or other third party vendors and affiliates without user consent (Krishnan, 2017).

## 7. Academic integrity implications

Recent work on student use of generative AI tools has raised alarms about academic integrity violations (Krishnan, 2017). Jones et al.
summarise several common practices that are deemed to be cheating, distinguishing between plagiarism, collusion, and falsification (Jones et al., 2017). We add contract cheating (as described by Deakin University (Deakin, 2017)) and use of unauthorised resources (as described by University of Auckland (Deakin, 2017)) to this list of practices that breach academic integrity. These terms are described by the respective documents as:

**Plagiarism:** A student incorporates another person's or body's work by unacknowledged quotation, paraphrase, imitation or other device in any work submitted for assessment in a way that suggests that it is the student's original work (Jones et al., 2017).

**Collusion:** The collaboration without official approval between two or more students (or between student(s) and another person(s)) in the presentation of work which is submitted as the work of a single student; or where a student(s) allows or permits their work to be incorporated in, or represented as, the work of another student (Jones et al., 2017).

**Contract cheating:** A student requests another person or service (including, according to Deakin University, artificial intelligence content production tools) to produce or complete all or part of an assessment task to submit as their own work (Deakin, 2017).

**Falsification:** Where the content of any assessed work has been invented or falsely presented by the student (Jones et al., 2017).

**Unauthorised resources:** Using software, websites, materials or devices not explicitly permitted (Deakin, 2017).

We discuss each of these academic integrity concerns with respect to generative AI.

### Plagiarism

Plagiarism is the use of the work of others without appropriate attribution. This raises the issue of who is the author of work created by generative AI. There are several possibilities:

1. The community that produced the source content used as input to the generative AI model is the author of the work.
2. The generative AI software is the author of the work.
3. The user of the generative AI software is the author of the work.

Although some in the literature treat the use of AI tools as plagiarism (Krish, 2016), we argue that although the community providing source material has influenced the generated content, this is similar to the natural process of writing in which authors read source material and use the information to generate new content, based on existing literature. Generative AI is almost always creating content _based on_ the training data. In this case, the community has not authored the work generated by the model, so using AI-generated content would not be considered plagiarism of the original authors of work that was used as input to the generative AI model. In some rare cases, GenAI tools can generate the work of someone else exactly, which would in fact be plagiarism. Since this is extremely rare, we do not consider it here other than to acknowledge it. Although it may be tempting to consider generative AI to be the author of the work in all cases, academic publishers take an opposing view. Examples of statements include: "AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work... Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics."
(Committee on Publication Ethics) (Krish, 2016) "AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge." (Cambridge University Press) (Bowards, 2017) "Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans." (Elsevier) (Elsevier, 2018) "Artificial Intelligence Generated Content (AIGC) tools -- such as ChatGPT and others based on large language models (LLMs) -- cannot be considered capable of initiating an original piece of research without direction by human authors..... -- these tools cannot fulfil the role of, nor be listed as, an author of an article." (Wiley) (Bowards, 2017) "Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work." (ACM) (Elsevier, 2018). Our position is aligned with those of publishers that the _user_ of generative AI tools should be considered the author of the work. This is consistent with the view of Pamela Samuelson, who states "The pragmatic answer to the AI authorship puzzle,..., the [author is the] user who is responsible for generating the outputs. If anyone needs to be designated as owner of rights in the outputs, it should be the user." (Elsevier, 2018) As such, we do not believe that use of AI-generated content by students should be considered _plagiarism_, and should not be referenced or cited as an independently authored piece of work. The student using the generative AI tool should be treated as the author of the work. It should be noted that, in the apparent effort to combat plagiarism, there have been numerous tools that attempt to detect AI-generated material, such as CopyLeaks, GPTKit, GLTR, and GPTZero. However, given the probabilistic nature of generative AI that is used to both generate the assignment in question and check the assignment, these tools are unreliable and produce many false positives (Krish, 2016). Orenstrakh et al. found that these detectors are even worse when evaluating code (Krish, 2016). ### Collusion Collusion occurs when a student works together with another person to create work that they subsequently claim as their own. This requires both parties to willingly agree to work together and therefore assumes that both parties have agency. As generative AI has no agency, we do not consider a student who submits generated content to be engaged in collusion. ### Contract cheating Contract cheating is traditionally described as a student requesting another person to produce work that they submit as their own. The definition by Deakin University extends this view of contract cheating to explicitly include the use of generative AI tools (Elsevier, 2018) as do some other universities (Elsevier, 2018). However, we disagree with this position, as it contradicts the view of the user of generative AI as the author -- the position taken by academic publishers. We recognise that generative AI models are capable of generating content that is more extensive than other software tools, but as a matter of principle, we consider the user to be the author (as discussed previously). generative AI is a tool which may be used by a student to produce work, much as calculators and other software tools are used. We therefore do not believe that use of generative AI software should be treated as contract cheating. 

### Falsification

Falsification occurs when a student invents or misrepresents data or results. The possibility that generative AI invents "facts" is well-known, and is typically described by the term _hallucination_. Academic publishers place the responsibility for such content squarely on the author, for example: "... that can be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the contents of the work." (Elsevier) (Elsevier, 1977) "The author is fully responsible for the accuracy of any information provided by the [generative AI] tool and for correctly referencing any supporting work on which that information depends." (Wiley) (Elsevier, 1977) "Authors are accountable for the accuracy, integrity and originality of their research papers, including for any use of AI" (Cambridge) (Elsevier, 1977)

We take the position that students who use generative AI are responsible for the content they include in their work. Inaccuracies, citations for non-existent papers, and other hallucinations that may arise from use of generative AI are the responsibility of the student author. Therefore, the category of _Falsification_ is a relevant academic integrity issue for students using generative AI. As discussed above, this could cause harm to students, anyone publishing their work, and to the university as a whole.

### Use of unauthorised resources

In an educational context, students are required to engage in tasks that may have specific constraints. For example, the use of calculators in general is acceptable, and we are comfortable with the notion that using a calculator for data analysis does not impact authorship, or breach any notion of academic integrity. However, a student in a calculus class may be asked to solve a differential equation without the use of a calculator, and access to calculators (or types of calculations) may be restricted in secure assessments such as exams. Such constraints are typically imposed to ensure that learning outcomes are met (e.g., that the student can solve a differential equation without outside assistance). A student who used a calculator for a given assessment when it was not permitted would be committing an academic integrity violation because the student is using unauthorised resources. Our position is that this is the appropriate category for use of generative AI in computing education. Figure 2 shows results for various ethical questions, where it seems that many instructors and students agree that using GenAI tools to create an entire answer is wrong. In some courses, such as introductory programming, the use of generative AI can be undesirable as it can solve problems with minimal intellectual input from students. Not only does this violate academic integrity, but as discussed above, students who utilise these resources inappropriately in lower division courses may cause harm to themselves by preventing their own preparation for upper division coursework. However, in upper division courses it may be an appropriate productivity tool that students would be permitted to use, which the survey data from Figure 2 seems to corroborate given responses to questions about generating pieces of an assignment, help with style, or fixing bugs. This requires teaching staff to be explicit in their course syllabus, or assessment description, about which resources students are permitted to use.

**Advice for educators:**

* We encourage educators to teach students about appropriate ethical use of generative AI throughout the curriculum, and to allow the use of such tools where it is pedagogically appropriate. See Appendix D for an example.
* When educators assign assessed work for students, any restrictions on the use of tools such as generative AI should be explicitly stated.
* Students who use generative AI to complete assessed work should be required to include a statement about how it was used, consistent with academic publication requirements.
* Students who use generative AI when they are not permitted, or in ways that are restricted, are engaged in misconduct by using unauthorised resources. Academic consequences as a result of this behaviour should be made clear in the course syllabus.

### Advice for students

Simon et al. (Simon et al., 2017) highlight the importance of educating students about academic integrity, and reveal a wide variety of ways that academic integrity is communicated to students. Given the disruptive nature of generative AI, we recommend that a guide for students is developed by teachers and distributed to students to provide explicit advice about appropriate use of generative AI tools. We recommend that any guidelines developed for students should:

* adopt professional practices and standards where possible; and
* adapt professional practices where needed to ensure good pedagogical practices are maintained.

Publishers typically require acknowledgement where generative AI has been used in development of a manuscript. We believe it would be useful for teachers, and for students, to reflect on how generative AI was used, so we recommend that assessments require students to include a statement about how generative AI was used in assessment tasks (where permitted). After considering the relevant findings from this report, we have developed a resource that provides guidance about the use of generative AI for students. It is by no means comprehensive or complete and reflects our perspectives on what students should know before using these tools. Our recommendations are informed by the risks identified in the literature, the ACM code of ethics, survey results, and the academic integrity documents that we analysed. We offer this to teachers as a resource that may be adapted and/or distributed to students.

**Guide for students:** See Appendix D for a sample handout or text that could be adapted and included in a course syllabus.

## 8. Benchmarking large language models for computing education

In this section we focus on the performance of LLMs in the context of computing education. Teachers are interested in how good LLMs actually are at conducting tasks such as solving programming problems, explaining code, generating test questions, and providing feedback to students. However, the speed at which new models arrive and old models are deprecated is staggering. As we demonstrate, the currently published literature may underestimate what the newest models, such as GPT-4, can do. Some recent work has found that GPT-4 outperforms earlier models in tasks such as visual programming (Gordes and Gordes, 2017), Socratic questioning of novices (Gordes and Gordes, 2017), solving multiple-choice questions and programming exercises (Socher et al., 2017), and that performance can be close to human tutors for some tasks (Socher et al., 2017) while not for others (Gordes and Gordes, 2017). In addition to being interested in how much the capabilities of LLMs in computing education-related tasks have increased, we are interested in analysing the suitability of existing benchmarks for computing education, as they might not translate to computing education settings.
For instance, we expect that many students will be able to fix small mistakes made by LLMs themselves. Similarly, tasks in existing benchmarks might not match those typically found in computing education courses. Another issue with current papers lies in our ability to validate results. A wide variety of parameters, prompts, and evaluation approaches have been used, and they are not always reported in detail. Furthermore, a slight variation in a prompt might generate quite different results. In this section we explore how we assess LLMs in the computing education context. We choose to focus on the task of generating a solution to a programming problem, because this is a major task for students and is a focus of existing literature. First, we review datasets that are available for evaluating LLMs and tag problems in several of those datasets to assess where they fit in the context of computing education. Second, we take one of the first papers on LLMs that appeared in the computing education context (Gordes and Gordes, 2017) and replicate it using multiple different more recent models. In the replication, we use the state-of-the-art GPT-4 model to give insight into how rapidly the performance of models has increased, the GPT-3.5-turbo model that powers the free version of ChatGPT which many students are likely to use, and GitHub Copilot which is free for students and educators and can be used as a plugin for popular IDEs. In addition, we openly release the problem descriptions and test cases for the dataset used in the original study (Gordes and Gordes, 2017) and our replication to facilitate future replication.5 Finally, we report on our experiences running two analyses using openly available datasets, revealing the difficulties we encountered and their possible effects on results. Footnote 5: The data can be found here: [https://osf.io/buh/yview_only-a165e474be94188b84aa0dcx02041f](https://osf.io/buh/yview_only-a165e474be94188b84aa0dcx02041f) ### Review of empirical datasets Table 7 presents a set of openly accessible datasets that have been or could be used to investigate questions about programming exercises or tasks in computing education contexts. To obtain the datasets in this table, we reviewed all of the papers in our literature review (Tables 1 and 2) to identify any data that they used. We do not claim that this list is exhaustive, but it reflects the datasets in use when we conducted our literature review. In addition to these datasets, we are aware of one other attempt to review existing benchmarks for a particular task that might be performed by LLMs: natural language to code generation (Krishnan et al., 2017). We believe the datasets listed in Table 7 represents a more broad set of applications and illustrates the kinds of questions being investigated and the breadth of educational contexts being examined. A review of Table 7 suggests a number of limitations in the data available for pursuing LLM research in an educational context. Most of the datasets contain exercises (described in more or less structured natural language). This reflects a focus on the question of whether LLMs can solve typical programming problems. 
In contrast, relatively few datasets contain chat logs where students interact with an LLM or student-submitted code with syntax errors (for code repair tasks), but additional publicly available data for questions beyond code-generation would be beneficial, as that would allow researchers at small educational contexts, where collecting sufficient data may be difficult, to engage in LLM work (Krishnan et al., 2017; Krishnan et al., 2017). Even for code generation tasks, more data would be beneficial. Almost all of the datasets focus on Python (e.g., providing docstrings as input, Python starter or solution code, or student chats featuring Python code), with a smaller number featuring C/C++. Relatively little public data appears to be available for other languages. Also, as described in the next section, many of the available datasets focus on small exercises used in introductory programming, with relatively little data available to examine larger programming problems or content for more advanced courses. Finally, as previously identified by Liu et al. (Liu et al., 2017), many of the published datasets do not provide robust evaluation of the exercises they include. Liu et al. (Liu et al., 2017) provide the EvalPlus dataset, which enhances previously published datasets with additional test cases; they found that the limited tests available meant that incorrect "solutions" were accepted as passing. We also found evaluation to be limited, with some datasets requiring manual intervention to complete evaluation. We provide more detail on issues we encountered when using these datasets in Section 8.4. ### Problem context To categorize the types of exercises present in the datasets, we manually tagged two prominent datasets, HumanEval (Humans et al., 2017) and FalconCode (Zhou et al., 2017). The datasets were tagged by a single author, an experienced instructor who has taught introductory programming, data structures, and advanced systems programming. The author tagged the exercises as being suitable for (a) an introductory programming course (Intro), as they use builtin data structures like strings and do not introduce complex algorithmic logic; (b) an introductory course using classes (OO), as they introduce classes or methods; or (c) a data structures or algorithms course, as they use some abstract data types (e.g., trees, queues, graphs) or complex algorithmic logic (e.g., subsequence matching, linear programming). The results are presented in Table 8. We found that in these two datasets, the vast majority of problems only cover material suitable in an introductory programming setting. Many of the other datasets in Table 7 are similar to the two datasets we analysed, in that they appear to focus on introductory material. We requested access to the data used in Finnie-Ansley et al. (Finnie-Ansley et al., 2017)'s paper, as the topic was a more advanced (CS2) course. This time, the dataset was tagged by two authors (one who had tagged the previously discussed datasets and a second experienced instructor); we calculated Cohen's kappa and found near perfect agreement (0.94). Again, we found that the majority of the content was primarily suitable for an introductory course, with relatively few questions asking about object-oriented code or data structures. In addition, while reviewing the problems, we found that almost all were examples of small exercises, with relatively few requiring multiple functions to solve. 
The FalconCode (Finnie-Ansley et al., 2017) dataset includes a few exceptions to this general trend. Finally, we examined the Automated Programming Progress Standard (APPS) dataset (Finnie-Ansley et al., 2017), as it explicitly advertises that it includes exercises of various difficulties. This is a large set (10,000 problems), so we manually tagged a sample of 200 exercises and then used keyword searching to identify the usage of classes and common data structures. This means that our estimates will underestimate the complexity of the exercises. Many of the problems in this set _are_ more complex, as expected, as they were largely drawn from programming contests. However, they may not be problems typically seen in educational contexts. The instructor reviewing the problems would not use many of these problems in any course, as they introduce issues like floating point error, exceeding the maximum representable integer, or linear-time pattern searching. At the same time, relatively few explicitly reference OO topics (4.1%) or common data structures (6.6%). Taken together, this analysis suggests that many of the available datasets - and as a result, the published results - reflect an early introductory programming context (CS1) with relatively few examples of common CS2 material (Cipriano and Alves (Cipriano and Alves, 2016) is a recent exception demonstrating research on OO topics) and even fewer covering any more advanced topics. Some datasets do include more challenging tasks, but these may not reflect the kinds of problems student programmers solve in upper year courses, as they appear to be inspired by (or were drawn directly from) programming contest sites. ### Replication: The robots are coming Finnie-Ansley et al. (Finnie-Ansley et al., 2017) published the first paper that examined the performance of large language models in solving introductory programming exercises. They found that Codex6 had better performance than the median student on the same exam exercises. In order to understand how the performance of large language models on this task has improved in the past two years, we partially replicated their study. Footnote 6: More specifically, the first version of the model ‘code-davinci-001’. **Method:** We contacted the authors of the original study and received the problem descriptions and test cases used in the study. The original study had a total of 30 exercises; 23 used in two exams and seven variants of the Rainfall-problem (Soloway, Simon, Fisler, Ebrahimi, Guzdial, Lakanen, and Apples; see Table 9). **Results (GPT-3.5):** GPT-3.5 performed only slightly worse compared to GPT-4. It was able to solve most problems on the first try, although many attempts included trivial formatting issues. For example, inadvertently leaving out a period at the end of a printout, such as having "print("The sum is", sum)" when the tests expected a period at the end of the string. Compared to GPT-4, GPT-3.5 was unable to solve one of the Rainfall variants, specifically the one from Simon. Looking into why GPT-3.5 struggled, the biggest issue for the model was that the problem description stated that "A day with negative rainfall is still counted as a day, but with a rainfall of zero." This was not taken into account in the code that GPT-3.5 generated, as the code for all ten completions would simply ignore any days with negative rainfall values. Similar to GPT-4, GPT-3.5 was not able to solve the last question of the second test (T2-Q12). **Results (Copilot):** Copilot performed the worst out of the three evaluated models, successfully solving 20 out of the 23 exam problems and four out of the seven rainfall variants.
We looked into the issues in code for the problems that Copilot was unable to solve. T1-Q10 involved printing all words in a given sentence that start with a given character. The problem description explicitly stated that "The sentence will end with a full-stop." The code generated by Copilot would not remove the full-stop from the end of the sentence, resulting in failure in the edge case where the last word of the sentence needed to be printed: the tests assumed the full-stop is not included in the word, but Copilot did not take this into account. T1-Q11 involved sorting four numbers given as a parameter using only the "min()" and "max()" functions (use of lists, if/elif/else, and loops was forbidden). While Copilot took the constraints into account, all ten completions had logical flaws and did not pass all the tests. Finally, similar to GPT-4 and GPT-3.5, Copilot was unable to solve T2-Q12, which involved printing bar graphs made of text. For T2-Q12, many completions from Copilot would not compile (the completion would be incomplete), and when they did, the code was nowhere near correct. For example, many of the completions would always print the same bar graph regardless of input. For the rainfall variants, Copilot was unable to correctly solve the one from Soloway, the one from Simon, and the one from Guzdial. For the variant from Simon, the issue was the same as for GPT-3.5 in that negative values were always ignored entirely, even though the problem description asked for these to be counted as days with a rainfall of 0. For both the Soloway variant and the variant from Guzdial, none of the completions generated by Copilot handled the edge case where the list is empty, leading to division by zero. One interesting finding was that Copilot would sometimes write just the function signature with a comment such as "# write your code here" followed by "pass", or the completion would not have any code at all but only suggestions/hints for how to start work on the problem as comments. **Discussion:** Overall, all models performed quite well, having performance that would have allowed them to pass the exams. Unsurprisingly, GPT-4 outperformed GPT-3.5 and Copilot, which corroborates previous work where GPT-4 has outperformed other LLMs for various tasks (Liang et al., 2017; Li et al., 2018; Li et al., 2019). The problems where any model struggled were either complex (e.g., involving printing bar graphs as text) or had vague problem descriptions, leading to some edge cases being ignored (e.g., rainfall variants that did not specify what should happen when the list is empty). As noted in the original study, LLMs somewhat struggle with trivial formatting issues, such as missing some formatting or having extra printouts not asked for in the problem description. We noticed that there was less variation in the completions generated by Copilot compared to the ones from GPT-4 and GPT-3.5. This might be due to the use of a relatively high temperature value for these models - previous work has found similar results and speculated that Copilot likely uses a lower temperature value (Li et al., 2019). ### Novel analysis #### 8.4.1. APPS Hendrycks et al. created the Automated Programming Progress Standard (APPS) dataset (Hendrycks et al., 2016) as a benchmark for program generation. The dataset consists of 10,000 programming problems of varying difficulty, manually extracted from the online coding websites Codewars, AtCoder, Kattis, and Codeforces.
\begin{table} \begin{tabular}{l c c c} \hline \hline Problem & \multicolumn{3}{c}{Solved on attempt} \\ & GPT-4 & GPT-3.5 & Copilot \\ \hline T1-Q1 & \(2^{*}\) & 1 & \(2^{*}\) \\ T1-Q2 & 1 & 1 & 1 \\ T1-Q3 & 1 & 1 & 1 \\ T1-Q4 & \(2^{*}\) & \(2^{*}\) & \(2^{*}\) \\ T1-Q5 & 1 & \(2^{*}\) & 1 \\ T1-Q6 & 1 & \(2^{*}\) & 1 \\ T1-Q7 & \(2^{*}\) & 1 & 1 \\ T1-Q8 & 1 & 1 & 1 \\ T1-Q9 & 1 & 1 & 1 \\ T1-Q10 & 1 & 1 & - \\ T1-Q11 & 3 & 1 & - \\ T2-Q1 & 1 & \(2^{*}\) & 1 \\ T2-Q3 & 1 & 1 & 1 \\ T2-Q4 & 1 & 2 & 1 \\ T2-Q5 & 1 & 2 & \(4^{*}\) \\ T2-Q6 & 1 & 1 & 1 \\ T2-Q7 & 1 & 1 & 1 \\ T2-Q8 & \(2^{*}\) & 2 & \(4\) \\ T2-Q9 & 1 & 1 & 1 \\ T2-Q10 & 1 & 1 & 1 \\ T2-Q11 & 2 & 1 & 1 \\ T2-Q12 & - & - & - \\ RF-Soloway & \(2^{*}\) & \(2^{*}\) & - \\ RF-Simon & \(2^{*}\) & - & - \\ RF-Fisler & \(2^{*}\) & \(2^{*}\) & \(10\) \\ RF-Ebrahimi & \(2^{*}\) & \(2^{*}\) & \(2^{*}\) \\ RF-Guzdial & \(2^{*}\) & 3 & - \\ RF-Lakanen & \(2^{*}\) & 2 & \(3^{*}\) \\ RF-Apples & 1 & 1 & 1 \\ \hline \hline \end{tabular} \end{table} Table 9. The replication results for (Hendrycks et al., 2016). An asterisk indicates that the solution required trivial modifications, which was counted as an additional attempt (i.e., "\(2^{*}\)" means that the problem was solved on the first try, but had trivial mistakes, e.g. in formatting of strings such as a missing period or extra unnecessary prints). A '-' means that the problem was not solved within 10 attempts.

The average number of lines for the solution is 18. The dataset has been divided into a training set and a test set, both containing 5000 problems. The following elements are provided for each problem: * Problem description, including a description of the expected input and output of the problem and some concrete examples. * A JSON file with inputs and corresponding output. On average, each problem has 21.2 test cases. * A JSON file with metadata, with a difficulty level (introductory, interview, competition) and the url to the website where the problem is hosted. * A list of solutions from humans. Hendrycks et al. have tested the dataset in 2021 with GPT-2, GPT-3, and GPT-NEO. They found that GPT-NEO performed the best by passing almost 15% of test cases on introductory problems. _Method._ We aim to use the dataset from this paper and assess how newer models are able to handle these problems. We use a simplified method, employing the following steps: We run the models (GPT-3.5-turbo-16k, GPT-4) with the following parameters: temperature=0.0, max_tokens=4000, and default values for Top P (1), Frequency penalty (0), and Presence penalty (0). The system prompt we use is as follows: You are a highly intelligent coding bot that can easily handle any Python programming task. Given a natural language instructions you can always implement the correct Python solution. Your focus is the solution code only. You are not allowed to provide explanations. Make sure to use input statements for input, and do not give a method definition Example (toy) instructions: Implement a Python program to print "Hello, World!" in the hello.py. Example bot solution: == hello.py == x=input() print(x) == As the user input, we provide the full problem description as described above. We process the generated code by running the code and providing the inputs from the first 25 test cases to each solution. We then compare the output to the expected output with an exact comparison, after stripping the results.
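To make this setup easier to reproduce, the generation step described above can be expressed in a few lines. The following is a minimal sketch only, assuming the openai (>=1.0) Python client; the system prompt is abbreviated and the helper name generate_solution is illustrative rather than the exact script we ran.

```python
# Minimal sketch of the generation step described above. Assumptions: the
# openai>=1.0 Python client is installed and OPENAI_API_KEY is set; the system
# prompt shown in the text is abbreviated here with "[...]".
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a highly intelligent coding bot that can easily handle any Python "
    "programming task. [...] Make sure to use input statements for input, "
    "and do not give a method definition"
)

def generate_solution(problem_description: str, model: str = "gpt-4") -> str:
    """Request one completion with the parameters reported above."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.0,   # as reported above
        max_tokens=4000,   # top_p, frequency and presence penalties left at defaults
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": problem_description},
        ],
    )
    return response.choices[0].message.content
```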
After noticing that differences in characters for line ends caused problems ("\r\n" versus "\n"), we fixed this in the output comparison. We consider a test case to fail after detecting a timeout of 5 seconds. For each problem, we store a list of test results and calculate a success rate as the percentage of passed test cases. For simplicity, we skipped the problems that had starter code. We also skipped problem descriptions that could not be read. We ran a sample of 100 interview-level problems for GPT-4, running 25 test cases for each problem. We manually assessed the failing test cases, to check if the problems were actual coding problems, or were caused by formatting mistakes. Minor issues were corrected. The final runs were conducted in September 2023. _Results._ Table 10 and Figure 4 show the results. GPT-4 performs quite well overall, with an average score of 51.5% on test cases. Compared to GPT-3.5, which scored 39.2%, the performance has clearly improved. Keeping a strict pass/fail criterion (all tests should pass), only 36.1% of the GPT-4 solutions pass all tests, and 21.0% for GPT-3.5. We also observe a large difference between problem types, with GPT-4 solutions to introductory problems scoring as high as 72.2% on test case average, but the most difficult, competition-level problems score only 28.7%. _Discussion._ A major downside of the APPS dataset is that it is a public dataset with problems from popular online coding websites. There is a large chance that solutions for it have been included in the training data of recent models. Figure 3. A comparison of the original results and the score achieved by GPT-4 on the two CS1 tests and Rainfall-problem variants presented in [81]. Overall, APPS is a high-quality dataset with extensive test suites for many of the problems. However, in the context of computing education, several problems in this dataset might not be suitable for novice programmers. Even the introductory set contains some complex problems, and these are targeted more at people participating in programming competitions. We included an explicit instruction in the prompt to 'use input statements for input, and do not give a method definition'. Omitting this in a first test run showed many tests failing, because the model returned a method definition. This shows we need to be sure the model creates solutions in the exact same format as expected. There are different aspects to be considered when running an analysis of a model on a certain task. The number of attempts to get a solution from the model should be specified. The 'Robots are coming'-replication (Section 8.3) performed multiple attempts, while we only use one attempt for this dataset. #### 8.4.2. FalconCode FalconCode is a novel collection of over 1.5 million Python programs from over 2,000 undergraduate students at the United States Air Force Academy (Kalcon, 2018). The dataset is not available online but is provided to anyone who applies following the instructions at the dataset website.7 The dataset contains 661 introductory Python programming problems used in 4 courses. To understand how well selected LLMs (GPT-3.5/4) perform on these problems we (i) extracted the problem statements and unit tests; (ii) performed de-duplication of problem statements; (iii) utilised the LLMs to generate solutions; (iv) ran the unit tests provided in the dataset against the outputs of the LLMs; and (v) analysed the test results.
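The per-test evaluation used for the APPS runs above (exact comparison after stripping, normalised line endings, and a 5-second timeout per test case) can be approximated with a small harness such as the one below. This is a minimal sketch under those assumptions, not the exact script used in our runs, and all names are illustrative.

```python
# Minimal sketch of the APPS evaluation step: run one generated solution
# against a single input/output pair with a 5-second timeout, normalising
# line endings before the exact comparison.
import subprocess

def passes_test(solution_path: str, test_input: str, expected_output: str) -> bool:
    """Run the solution as a stand-alone program and compare stripped output."""
    try:
        result = subprocess.run(
            ["python", solution_path],
            input=test_input,
            capture_output=True,
            text=True,
            timeout=5,  # a test case is counted as failed on timeout
        )
    except subprocess.TimeoutExpired:
        return False
    actual = result.stdout.replace("\r\n", "\n").strip()
    expected = expected_output.replace("\r\n", "\n").strip()
    return actual == expected
```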
Footnote 7: [https://falconcode.dfcs-cloud.net/index.php](https://falconcode.dfcs-cloud.net/index.php)

\begin{table} \begin{tabular}{l r r|r r|r r} & & & \multicolumn{2}{c|}{**Test case average**} & \multicolumn{2}{c}{**Strict accuracy**} \\ **Type** & **Count** & **Avg nr of tests** & **GPT-3.5** & **GPT-4** & **GPT-3.5** & **GPT-4** \\ \hline Introductory & 974 & 8.9 & 57.3\% & 72.2\% & 44.5\% & 62.9\% \\ Interview & 2972 & 15.3 & 38.8\% & 52.9\% & 18.7\% & 35.8\% \\ Competition & 1000 & 9.3 & 22.6\% & 28.7\% & 5.2\% & 10.8\% \\ _Overall_ & 4946 & 12.9 & 39.2\% & 51.5\% & 21.0\% & 36.1\% \\ \end{tabular} \end{table} Table 10. Average test case score for APPS problems, ran with max. 25 test cases.

Figure 4. GPT success rate for different exercise types.

_Data extraction._ The dataset is provided in a convenient tabular format where the table we work with has the 661 problems. Among other fields, there are the _prompt_ and _testcase_ columns that contain the problem statement and unit tests to evaluate the correctness of the solution. The problem statements are in HTML format, which we preserved. The unit tests are runnable Python code. Hence, we extracted those into separate Python files. _De-duplication._ The original dataset lists 661 problems. Apparently, some problems are re-used across courses. Since the id of the problem appears to be preserved, it is possible to de-duplicate the problems based on the id, which reduces the number of problems to 422. However, it turns out that the problem set is further reduced to 310 if one de-duplicates based on the problem description (prompt field). Interestingly, de-duplicating only on the prompt field, disregarding the id, yields 385 unique problem descriptions, which suggests that there are several problems with the identical id that have different descriptions. Extracting plain text descriptions from the original HTML format and removing superficial white space results in 344 unique problem descriptions. For this study, we opted for de-duplicating on the original (HTML) prompt (just in case the white space difference could be meaningful). Hence, there are 385 problems we use in our experiments. _Generating solutions._ To generate a solution to the given problem, we include the problem statement (HTML) into the same prompt as used with the APPS dataset experiment (Subsection 8.4.1). We decided against extracting plain text of the problem statements because the HTML may encode important information through formatting that a state-of-the-art LLM such as GPT-3.5/4 may be capable of leveraging. Note that many of the problem statements were referring to an external resource (e.g., starter code or a file with data) which has not been made available as part of the dataset. Hence, we cannot include the resource into the prompts submitted to the LLMs. We extract the completion of the submitted prompt and save it into a Python file named with the id of the problem (needed for the unit tests to discover the solution). In a non-negligible number of cases, the completion was wrapped in Markdown ```python ... ``` code-fence tokens. While this issue could likely be fixed via further prompt engineering, we simply removed these tokens using a regular expression. Leaving the tokens in would make the Python file not runnable (syntax error), which would automatically fail all the unit tests despite the solution being correct. _Testing._ To test correctness of the automatically generated solutions, we executed the provided unit tests. The unit tests rely on the solutions being present in the same directory in a file named with the id of the problem. Additionally, there is an external _cs110_ grading (testing) library that needs to be installed through _pip_.8 Footnote 8: [https://pypi.org/project/cs110/](https://pypi.org/project/cs110/) We saved the output of the unit testing for each individual problem into a separate file. The last line of these had a predictable format: _Unit Test Returned: #_, where # is replaced with a number from 0 to 100. We utilised simple regular expressions to extract this final result of the evaluation from each of the output files. Note that assignments with identical problem statements could have been associated with different unit tests. Therefore, for each problem we ran all available unit tests, and took the average result of the tests as the score. #### Results analysis Table 11 shows the performance of GPT-3.5/4 on the 385 FalconCode problems across three different types of assignments: * Skill: small (1\(-\)3 lines) programs focused on a specific programming skill. * Lab: medium (10\(-\)50 lines) programs focused on utilisation of one or more skills. * Project: larger (50\(-\)300 lines) programs solving an open-ended problem. GPT-4 performed markedly better than GPT-3.5. In the subsequent analysis, we focus on the better performing GPT-4 model. The overall performance of 45.4% suggests a rather weak performance of the LLM in handling these introductory programming problems. In order to understand the causes of the low performance we analysed each case where GPT-4 did not achieve the full score. Specifically, we performed a thematic analysis in which causes of each failed assignment were extracted as codes and then collated into higher-level themes [45]. The results of the analysis are shown in Table 12. We first analysed the performance on Projects since it amounted to 0.0%. It turns out that the problem statement included in the dataset only points to an external pdf file that contains the actual instructions, e.g.: Objective: Create a drone simulation that can scan a battlefield for targets and engage them. Instructions: Read writeup (airstrike.pdf) and use the template file to begin work. Since the pdf file has not been released with the dataset we could not provide the LLM with adequate instructions to generate a solution. This is the case for all 9 projects. As Skill assignments are supposed to require only small solutions, not exceeding several lines of code, the performance of 40.0% is rather unexpected. Our analysis revealed that the main causes of the poor performance are related to the same cause detected with the Project tasks. There are a substantial number of situations where the Skill assignment required an external data file, and even more commonly starter code was needed to complete the assignment, e.g.: You have been provided with a list called list_of_animals. Write a program that prints out each of the items in this list (one item per line). As the starter code has not been included in the dataset, the LLM does not have the complete information to produce the desired output. We also detected instances where the LLM used an unexpected library (not part of the Python standard library) and hence the program would crash, i.e., the unit tests would fail. In a few cases, the unit tests would be missing or incorrect. We also identified several instances where GPT-4 generated a genuinely incorrect solution to a problem statement that provided sufficient information.
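Two of the post-processing steps mentioned above - stripping the Markdown code-fence tokens from completions before saving them as Python files, and extracting the final _Unit Test Returned_ score from the test logs - can be sketched as follows. The regular expressions are illustrative and not necessarily the exact ones used in our pipeline.

```python
# Minimal sketch of the post-processing steps described above; names and
# patterns are illustrative rather than the exact ones used in the study.
import re
from typing import Optional

def strip_code_fences(completion: str) -> str:
    """Remove leading/trailing triple-backtick fences so the saved .py file runs."""
    completion = re.sub(r"^\s*`{3}(?:python)?\s*\n?", "", completion)
    completion = re.sub(r"\n?`{3}\s*$", "", completion)
    return completion

def extract_score(test_log: str) -> Optional[int]:
    """Pull the 0-100 score from the last 'Unit Test Returned: #' line of a log."""
    matches = re.findall(r"Unit Test Returned:\s*(\d+)", test_log)
    return int(matches[-1]) if matches else None
```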
The most common cause of failed unit tests for Lab assignments was also a missing data file and/or starter code (not released with the dataset). Another common cause was an incorrect structure of the (possibly correct) solution. A typical example would be a solution containing a function that returns a value whereas it was supposed to be a script asking a user to provide an input and print the output to the terminal. In the remaining cases GPT-4 generated a genuinely incorrect solution. Based on the above analysis, we report another set of results (Clean) on the subset of 236 FalconCode problems that provide sufficient information for the LLM and are associated with valid test cases. The success rate increases from 45.4% to 74.1%. It is worth emphasising that if we were also to disregard the cases where GPT-4 produced the correct solution using an unexpected structure (e.g., a function returning a value instead of a program asking the user for an input and printing to a terminal) or utilised an unexpected library, the success rate would increase to 86.8%. Finally, a large portion of the genuinely incorrect solutions are rather superficial problems, such as not rounding to a single decimal as demonstrated in the example provided in the instructions. A simple change to the prompt adding the specific instruction would certainly fix such issues. Hence, one can conclude that we only observed a minimal number of cases where GPT-4 would produce a truly incorrect solution to the problem (most certainly in fewer than 5% of cases).

\begin{table} \begin{tabular}{l|r r r|r r r} & \multicolumn{3}{c|}{**Full**} & \multicolumn{3}{c}{**Clean**} \\ **Type** & **Count** & **GPT-3.5** & **GPT-4** & **Count** & **GPT-3.5** & **GPT-4** \\ \hline Skill & 162 & 30.3\% & 40.0\% & 81 & 57.6\% & 80.0\% \\ Lab & 214 & 29.8\% & 51.4\% & 155 & 47.5\% & 71.0\% \\ Project & 9 & 0.0\% & 0.0\% & 0 & - & - \\ \hline **Overall** & **385** & **29.3\%** & **45.4\%** & **236** & **51.4\%** & **74.1\%** \\ \end{tabular} \end{table} Table 11. Results for FalconCode. The middle columns describe the raw success rates for each category of problem, and the rightmost columns describe the success rates after problems where insufficient information is provided were removed.

\begin{table} \begin{tabular}{l c r r r} **Failure Cause** & **Clean** & **Skill** & **Lab** & **Project** \\ \hline Missing Instructions & & & & 9 \\ Missing Starter Code & & 58 & 7 & \\ Missing Data File & & 20 & 46 & \\ No Unit Tests & & 2 & 2 & \\ Incorrect Unit Tests & & 1 & 4 & \\ Unexpected Library & \(\checkmark\) & 1 & 2 & \\ Incorrect Structure & \(\checkmark\) & & 25 & \\ Incorrect Solution & \(\checkmark\) & 22 & 28 & \\ \hline Overall Failed & & 104 & 114 & 9 \\ Overall Failed (clean) & & 23 & 55 & \\ \end{tabular} \end{table} Table 12. Reasons for LLM (GPT-4) failures on FalconCode problems. We used this analysis to filter the dataset down to a clean version that is appropriate to use for the evaluation. The Clean column signifies which reasons were considered as a failure on the LLM's part; these data points were included in the clean dataset.

### Discussion In this section, we provide an overview of the issues we encountered while performing our replication and analyses. Our experiences could provide valuable insights to researchers who want to study LLM performance as well as to teachers who are interested in their performance in an educational context. _Higher LLM performance than identified in the literature._ Our replication of the Finnie-Ansley et al.
(Finnie-Ansley et al., 2017) paper suggests that new LLM models are significantly more capable than is currently reported in the literature. Our experiment with the FalconCode dataset further supports this conclusion, and while success rates are lower on the APPS dataset, those problems are significantly more challenging than those typically provided in the CS1 and CS2 courses discussed in the literature. _Challenges using publicly available datasets._ We encountered a number of challenges applying LLMs to publicly available datasets, even though the datasets themselves are of high quality. In particular, we anticipate that future researchers will find that very few datasets will have been produced specifically to support LLM code generation research, so they are likely to not include critical information like starter code, data files, or formatting instructions. Researchers will also encounter challenges even with datasets produced for code generation tasks. As suggested by Liu et al. (Liu et al., 2019), existing datasets may need to be augmented to evaluate LLM-generated code accurately. The number and quality of the test cases provided might not completely cover all exercise requirements and edge cases, therefore giving a false positive result. Alternately, test cases can even be incorrect, or too strict, exceeding what is required in the instructions, lowering the potential performance of models. _Homogeneity in available datasets._ Finally, future researchers may struggle to find appropriate datasets. Most datasets we found could support code generation, using Python, of CS1 problems. Assessing model performance on multiple types of tasks, for different programming languages, or at different levels will require new datasets. The community will need to reward the effort of curating and maintaining such datasets, as providing a complete and well-evaluated dataset is challenging (as noted above) and is important for enabling research by a diverse set of groups. #### 8.5.1. Limitations We focused our efforts on replicating code generation tasks, but there are many other research questions that are potentially even more challenging to replicate. Testing generated code can be easily automated by running test cases, although this might not capture all aspects relevant to computing educators, such as code quality and suitability of the solution with regard to which concepts the student has learned so far. These latter aspects could also be assessed automatically, but we have not attempted to do so in our study. Outside of code generation, assessing solutions for other types of exercises common in CS (e.g. regular expressions, UML-diagrams, automata), LLM-generated exercises, feedback, and explanations require additional datasets and may require a qualitative framework that could be difficult to provide or to transfer to another research team. Advice for users and creators of LLM CSEd Datasets: * Creators: Include full and precise problem descriptions, so that the LLM can be given sufficient information for solving the problem. * Creators: Include full test cases, ideally in a format where they are easy to run for others, so that LLM performance can be easily evaluated. * Creators: Include any resources needed to complete the assignments, e.g., the starter code or data files. * Creators: Make it easy to update or extend the data set, and report issues. * Users: Make LLM parameters clear for replication.
* Users: Clearly describe the prompts used with the models, ideally providing example prompts. ## 9. Conclusions This report is the output of an ITiCSE Working Group that explored how the emerging generative AI revolution will impact the future of computing education. The first time that the group met was in April 2023 - three years after the release of the ground-breaking GPT-3 large language model; less than two years after the release of the Codex model (a variant of GPT-3 specifically fine-tuned for coding tasks); less than one year since the Copilot plug-in (for generating code directly within an IDE) was made available for free to students worldwide; five months since the release of ChatGPT (providing a convenient chatbot interface); and just one month after the release of GPT-4, a powerful multi-modal large language model. Against this backdrop of rapid advancements, our working group came together at a time when the computing education community was just beginning to grapple with the widespread use of generative AI tools by students as well as the general public. Many urgent questions were being asked about how to adapt to the challenges and opportunities presented by these new models and tools. In particular, if students are able to generate solutions to all of their programming coursework, how will this impact what is taught, how it is taught, and how students will remain motivated to learn? Our overarching goal is for this report to serve as a focal point for researchers and practitioners who are exploring, adapting, using, and evaluating LLMs and LLM-based tools in computing classrooms. We now return to the list outlined in Section 2 to summarise our main contributions: 1. **A review of the literature:** We provide a detailed review of the literature on LLMs in computing education, current as of August 2023. Using a keyword search of relevant databases and two rounds of forward and backward snowballing, we synthesise findings from 71 primary articles. Due to rapid changes in the field and the slow pace of publishing in traditional venues, much of this work was available only as pre-prints on platforms such as arXiv. We included all such literature in our review and assessed every article with respect to a set of quality metrics. The most common type of paper to date involves evaluating the performance of LLMs when applied to tasks such as programming exercises. A key finding, which justifies some of the widely voiced concerns around academic misuse of LLMs, is that current models tend to perform at least as well as most students on typical introductory-level programming tasks. We also reviewed papers that discussed possible opportunities and challenges of LLMs, that studied how end-users (including students) interacted with LLMs, and that used LLMs to generate high-quality learning resources. Among the risks that were identified, the most common concern expressed by authors was that students would become overly-reliant on using LLMs to generate and debug code. 2. **Prevailing attitudes:** To understand how LLMs are currently being perceived and used, we conducted a survey involving 171 students and 57 instructors from computing courses spanning 20 distinct countries. We found that, in general, students and instructors had similar perceptions about LLMs with respect to questions around experiences, expectations and beliefs. 
However, they differed in their perceptions of how clear course policies were about the allowed use of LLMs, with instructors - somewhat surprisingly - finding these policies to be less clear than students. Many of the respondents to our survey had very little experience using generative AI tools at the current time, although we expect familiarity to grow rapidly in the coming years. Some instructors were concerned that their students were using such tools inappropriately, and a small fraction of students refused to use generative AI tools for ethical reasons and due to concerns about harming their learning. In many cases perceptions were well-aligned - both students and instructors felt strongly that there should be some restrictions on the allowed usage of generative AI tools for coursework. 3. **New instructional approaches:** Although many instructors are only just beginning to think about the impacts on their teaching, some have already made concrete changes to their curricula and assessments. In order to document these recent adaptations, we conducted 22 in-depth interviews with instructors on five continents who already had concrete plans in place to change some aspect of their teaching. We found that some instructors were beginning to place a greater emphasis on 'process over product'. That is, instead of just grading a final artefact, there is an evaluation of the processes used by students when working on a product. In addition, there was also a trend towards placing a greater emphasis on invigalted assessments such as exams, with a reduction to the grade weighting placed on unsupervised homework assignments. We anticipate further changes to learning objectives, course content and assessment practices in the near future, and we see an important need to swiftly disseminate best practices that emerge. 4. **Academic integrity: Policies & recommendations** We reviewed academic integrity policies that mentioned generative AI from major universities around the world and found that these explicitly addressed many of the principles stated in the ACM code of ethics. However, it is unclear how students are being educated about the ethical use of generative AI in the classroom. This appears to be an important area for future work given the findings from our survey which revealed that students and instructors have quite different views regarding the clarity of current policies. Further work in this area is needed to understand how to effectively embed these principles in computing classrooms. We follow our review of policies with concrete recommendations for both students and instructors. We agree with the position articulated by many publishers that the user of the LLM should be considered the author of the generated text. This has implications for academic integrity, in that _plagiarism_ is not usually a concern but instead users would be responsible for any _falsification_ produced from uncritical use of LLM-generated artefacts. We encourage instructors to teach students about the ethical use of generative AI throughout their courses, clearly stipulating any restrictions on use for assessed work. Should students use such tools for graded tasks, we recommend they include a statement detailing its usage, and any violations should be regarded as academic misconduct with the penalties explained clearly. Institutions and faculty will need to communicate these expectations explicitly, and thus it is imperative that we provide students with resources to understand how to use LLMs appropriately. 
To this end, we have prepared a sample handout that can be adapted and included in a course syllabus on the ethics of using generative AI tools for assignment work (see Appendix D). 5. **Encouraging replication:** Given that instructors are naturally interested in how well LLMs can solve typical tasks that are set for students, a common thread of work to date has been to evaluate the performance of various models. However, replicating prior work using newer models is difficult, given that a wide variety of parameters, prompts, and evaluation approaches have been used, and not all methods are reported with sufficient detail. Producing a dataset that contains everything necessary for high-quality LLM research (in particular, accurate evaluation of the artefacts generated by LLMs) is challenging and needs to be encouraged by the community. We therefore identify a seminal paper on LLM evaluation for programming tasks, and prepare and release the problem descriptions and test cases in order to facilitate future replication work. Our own replication of this prior work, using a state-of-the-art model, shows an extraordinary performance improvement over the span of two years since the original work was carried out. As we collectively face the changes being ushered in by the AI revolution, it is clear that LLMs present significant challenges but also new opportunities for computing educators. We present this report not only as a snapshot of the current state at this relatively early stage, but also as a call to action: to encourage broad exploration of the use and impacts of LLMs and LLM-based tools in computing classrooms, to adapt teaching methods and update academic integrity policies, and to develop best practices and share them widely with the computing education community. The future of computing education is rapidly evolving, and shaping it must be a collective effort. ## Acknowledgments Thank you to the following: * Michael Caspersen for joining this effort in the early days and graciously bowing out when more prestigious matters intervened. * All of the students and educators who responded to our surveys. * All of the educators who participated in interviews (in alphabetical order by surname): * Austin Cory Bart (University of Delaware, USA) * Michael Caspersen (It-vest & Aarhus University, Denmark) * James Davenport (University of Bath, UK) * Rodrigo Duran (Federal Institute of Mato Grosso do Sul, Brazil) * Dan Garcia (UC Berkeley, USA) * Michael Kölling (King's College London, UK) * Viraj Kumar (Indian Institute of Science, Bengaluru, India) * Mark Liffiton (Illinois Wesleyan University, USA) * Jérémie Lumbroso (University of Pennsylvania, USA) * Peter Mawhorter (Wellesley College, USA) * Jean Mehta (Saint Xavier University, USA) * Briana Morrison (University of Virginia, USA) * Leo Porter (University of California San Diego, USA) * Jan Schneider (Goethe Universität, Germany) * David H. Smith IV (University of Illinois, Urbana-Champaign, USA) * Kristin Stephens-Martinez (Duke University, USA) * Sven Strickroth (LMU Munich, Germany) * Ewan Tempero (University of Auckland, New Zealand) * Christian Tomaschitz (TU Wien, Austria) * Frank Vahid (University of California, Riverside, USA) * those who wished to be de-identified in this report.
2307.03946
Superconducting Gap Structure of Filled Skutterudite LaOs$_4$As$_{12}$ Compound through $μ$SR Investigations
Filled skutterudite compounds have gained attention recently as an innovative platforms for studying intriguing low-temperature superconducting properties. Regarding the symmetry of the superconducting gap, contradicting findings from several experiments have been made for LaRu$_{4}$As$_{12}$ and its isoelectronic counterpart, LaOs$_{4}$As$_{12}$. In this vein, we report comprehensive bulk and microscopic results on LaOs$_{4}$As$_{12}$ utilizing specific heat analysis and muon-spin rotation/relaxation ($\mu$SR) measurements. Bulk superconductivity with $T_C$ = 3.2 K was confirmed by heat capacity. The superconducting ground state of the filled-skutterudite LaOs$_{4}$As$_{12}$ compound is found to have two key characteristics: superfluid density exhibits saturation type behavior at low temperature, which points to a fully gapped superconductivity with gap value of $2\Delta/k_BT_C$ = 3.26; additionally, the superconducting state does not show any sign of spontaneous magnetic field, supporting the preservation of time-reversal symmetry. These results open the door for the development of La-based skutterudites as special probes for examining the interplay of single- and multiband superconductivity in classical electron-phonon systems.
A. Bhattacharyya, D. T. Adroja, A. D. Hillier, P. K. Biswas
2023-07-08T10:00:56Z
http://arxiv.org/abs/2307.03946v1
Superconducting Gap Structure of Filled Skutterudite LaOs\({}_{4}\)As\({}_{12}\) Compound through \(\mu\)Sr Investigations ###### Abstract Filled skutterudite compounds have gained attention recently as an innovative platforms for studying intriguing low-temperature superconducting properties. Regarding the symmetry of the superconducting gap, contradicting findings from several experiments have been made for LaRu\({}_{4}\)As\({}_{12}\) and its isoelectronic counterpart, LaOs\({}_{4}\)As\({}_{12}\). In this vein, we report comprehensive bulk and microscopic results on LaOs\({}_{4}\)As\({}_{12}\) utilizing specific heat analysis and muon-spin rotation/relaxation (\(\mu\)SR) measurements. Bulk superconductivity with \(T_{C}\) = 3.2 K was confirmed by heat capacity. The superconducting ground state of the filled-skutterudite LaOs\({}_{4}\)As\({}_{12}\) compound is found to have two key characteristics: superfluid density exhibits saturation type behavior at low temperature, which points to a fully gapped superconductivity with gap value of 2\(\Delta/k_{B}T_{C}\) = 3.26; additionally, the superconducting state does not show any sign of spontaneous magnetic field, supporting the preservation of time-reversal symmetry. These results open the door for the development of La-based skutterudites as special probes for examining the interplay of single- and multiband superconductivity in classical electron-phonon systems. ## I Introduction Due to their potential as thermoelectric materials for either refrigeration or power generation applications, many filled skutterudite compounds with RT\({}_{4}\)X\({}_{12}\) stoichiometry (R = alkali metals, alkaline earth metals, lanthanides, or light actinides; T = Fe, Os, Ru; X = P, As, Sb) have lately been the focus of several investigations [1; 2; 3]. With two formula units RT\({}_{4}\)X\({}_{12}\) per unit cell, these compounds form a body-centered cubic structure (space group _Im_\(\bar{3}\), No: 204). The structures consist of rigid covalently bonded cage-forming frameworks T\({}_{4}\)X\({}_{12}\) that encapsulate various bonded guest atoms R. This leads to local anharmonic thermal vibrations (rattling modes), which would reduce phononic heat conduction and open the door to their potential as promising thermoelectric materials. Because of the significant hybridization between the 4\(f\) band manifold and electronic conduction states, as well as the degree of freedom provided by the R-\(f\)-derived multipole momenta of the cubically symmetric X\({}_{12}\) cages, those compounds may include a variety of distinct electronic and magnetic ground states. For examples, consider unconventional superconductivity [4; 5; 6; 7; 8], Kondo effect [9; 10; 11; 12; 13], heavy fermions [14], non-Fermi liquid behavior [9], etc. The majority of the Pr- and Ce-based filled skutterudite compounds are hybridized gap semiconductors or show magnetic transitions, however PrOs\({}_{4}\)Sb\({}_{12}\)[4; 5], PrRu\({}_{4}\)Sb\({}_{12}\)[6] and PrRu\({}_{4}\)As\({}_{12}\)[15] show superconducting transitions at 1.8 K, 0.97 K and 2.4 K, respectively. PrOs\({}_{4}\)Sb\({}_{12}\) is highly intriguing for a variety of reasons [16], including: (i) it is the first known example of a heavy-fermion superconductor containing Pr; (ii) it shows unconventional strong-coupling superconductivity that breaks time-reversal symmetry; and (iii) instead of magnetic fluctuations, electric quadrupole fluctuations may be involved in the superconducting pairing process. 
The unique band structure of these compounds and the hybridization effects between localized \(f\) electrons and conduction electrons appear to play a crucial role, in addition to the fact that the origin of the majority of those unconventional phenomenologies is unknown. It was recently revealed that the Fermi level of La compounds is placed at a prominent peak arising from the T-\(d\) band manifold, which might contribute to electronic instability [17; 1]. Several La-based compounds LaT\({}_{4}\)X\({}_{12}\) are especially remarkable within the filled skutterudite class due to their remarkable superconducting properties. For examples, LaFe\({}_{4}\)P\({}_{12}\) (\(T_{C}\) = 4.1 K) [18], LaOs\({}_{4}\)P\({}_{12}\) (\(T_{C}\) = 1.8 K) [18; 19], and LaRu\({}_{4}\)Sb\({}_{12}\) (\(T_{C}\) = 3.6 K) [9; 20], with a special attention to the LaRu\({}_{4}\)As\({}_{12}\) (\(T_{C}\) = 10.3 K, \(H_{c2}\) = 10.2 T) - with the highest superconducting transition temperature. [15; 19; 21]. The ratio of the heat capacity jump \(\Delta C\) to \(\gamma\)T\({}_{C}\) is \(\Delta C\)(/\(\gamma\)T\({}_{C}\))=1.75 for LaRu\({}_{4}\)As\({}_{12}\) comparison to the BCS value of 1.43 [15]. While the majority of La-based filled skutterudites are completely gapped superconductors, past research has shown numerous unique aspects of LaRu\({}_{4}\)As\({}_{12}\), such as a positive curvature of \(H_{c2}\), nonexponential behavior of the electronic heat capacity, and square root field dependency of the Sommerfeld coefficient (\(\gamma\)) [22]. We recently reported unambiguous evidence of multiband \(s+s\)-wave superconductivity in LaRu\({}_{4}\)As\({}_{12}\) using muon-spin rotation measurements, with 2\(\Delta_{1}/k_{B}T_{C}\) = 3.73 for the larger gap and 2\(\Delta_{2}/k_{B}T_{C}\) = 0.144 for the smaller gap [23]. Furthermore, inelastic X-ray scattering experiments indicated essentially temperature-independent phonon modes between 300 K and 20 K, with the exception of 2 K, where a weak softening of the specific phonon modes is detected [23]. All of these results demonstrate the relevance of the electron-phonon interaction in the superconductivity of LaRu\({}_{4}\)As\({}_{12}\), and they accord well with the DFT-based phonon simulations [24]. Another isostructural La-based filled skutterudite compound, LaOs\({}_{4}\)As\({}_{12}\), has been reported by Shirotani et al. to exhibit superconductivity with \(T_{C}\). = 3.2 K [21]. LaOs\({}_{4}\)As\({}_{12}\) has also shown some signs of multiband superconductivity, such as the upward curving of the upper critical field around the transition temperature and unusual behavior in the electronic specific heat data [25]. A single-gap, s-wave superconducting ground state, however, is suggested by a recent study of the temperature dependency of lower critical field [26]. Another study found that the high-amplitude lanthanum phonons dominate the vibrational eigenmodes at low energies based on the phonon dispersion relation determined from inelastic neutron scattering experiments [27]. We have thus performed systematic muon-spin rotation and relaxation (\(\mu\)SR) measurements to examine the superconducting pairing process in the LaOs\({}_{4}\)As\({}_{12}\) compound. Contrary to prior experimental work asserting two-band superconductivity [25], we demonstrate that the low-temperature behavior of the superfluid density points to a fully gapped superconducting Fermi surface. 
Furthermore, the preservation of time-reversal symmetry is confirmed by the lack of spontaneous magnetic fields in the superconducting state, ruling out unusual pairing processes. The change from two-band superconductivity in LaRu\({}_{4}\)As\({}_{12}\) to single-band superconductivity in LaOs\({}_{4}\)As\({}_{12}\) is caused by differences in interband coupling strength in the Fermi surface, as evidenced by the different degrees of hybridization and electronic properties observed in the Fermi surfaces of both compounds [28]. These results underline the significance of the LaRu\({}_{4}\)As\({}_{12}\) and LaOs\({}_{4}\)As\({}_{12}\) compounds as important platforms for investigating filled skutterudites for the competition between single-band and multiband superconductivity in electron-phonon driven systems. ## II Experimental details The high-temperature molten-metal-flux technique, described in [29], was used to grow single crystals of LaOs\({}_{4}\)As\({}_{12}\). In a quartz ampule, elements with purities higher than 99.9% and a molar ratio of La:Os:Cd:As \(\rightarrow\) 1:4:12:48 were combined. The details on the single crystal growth can be found in [29]. The relaxation approach was used to measure the heat capacity in a Quantum Design physical properties measurement (PPMS) system. Temperatures as low as 0.38 K were attained utilizing a He-3 attachment to the PPMS [25]. The \(\mu\)SR measurements were carried out using small unaligned single crystals of LaOs\({}_{4}\)As\({}_{12}\) (0.1 mm \(\times\) 0.1 mm \(\times\) 0.1 mm, total mass 1 g), which gave a powder-averaged muon signal. The MuSR spectrometer at the Rutherford Appleton Laboratory, ISIS Neutron and Muon Source in the UK was used to perform the \(\mu\)SR measurements [30]. In a \(\mu\)SR experiment, the sample is injected with 100% spin-polarized muons. Each implanted muon thermalizes, at which point it decays (lifetime \(\tau_{\mu}\) = 2.2 \(\mu\)s) into a positron (and two neutrinos) which is preferentially released in the direction of the muon spin at the moment of decay. Utilizing detectors carefully placed around the sample, the decay positrons are detected and time-stamped. It is possible to calculate the asymmetry in the positron emission as a function of time, \(A(t)\), using the collected histograms from the forward (F) and backward (B) detectors, \(A(t)=\frac{N_{\mathrm{F}}(t)-\alpha N_{\mathrm{B}}(t)}{N_{\mathrm{F}}(t)+\alpha N_{\mathrm{B}}(t)}\), where \(\alpha\) is a calibration factor for the instrument and \(N_{\mathrm{F}}(t)\) and \(N_{\mathrm{B}}(t)\) are the number of positrons counted in the forward and backward detectors, respectively. Detectors are placed longitudinally during ZF-\(\mu\)SR, and a correction coil is used to cancel out any stray magnetic fields up to 10\({}^{-4}\) mT. To investigate time-reversal symmetry, ZF-\(\mu\)SR measurements were carried out [31]. In the vortex state, TF-\(\mu\)SR measurements were performed with applied fields of 20, 30, 40, 50, and 60 mT, which are greater than the lower critical field \(H_{\mathrm{c1}}\) (\(\sim\)5 mT) and lower than the upper critical field \(H_{\mathrm{c2}}\) (\(\sim\)1 T) [21]. The sample was covered using a thin silver foil after being mounted onto a high purity (99.995%) silver sample holder using diluted GE-varnish. The sample was cooled down to 300 mK using a dilution refrigerator. To generate the vortex lattice by trapping the applied TF, we applied the field above \(T_{\mathrm{C}}\) and then the sample was cooled in the field to the base temperature of 300 mK.
We used WiMDA [32] software to analyze the \(\mu\)SR data. Figure 1: A unit cell of the body-centered cubic LaOs\({}_{4}\)As\({}_{12}\) structure with the space group \(Im\bar{3}\) that crystallizes within a CoAs\({}_{3}\)-type skutterudite structure packed with La atoms. Green: As, Orange: Os, and Blue: La. ## III Results and Discussion ### Crystal Structure & Physical Properties LaOs\({}_{4}\)As\({}_{12}\) crystallizes in a CoAs\({}_{3}\)-type skutterudite structure packed with La atoms and has a body-centered cubic structure with the space group \(Im\bar{3}\) (No. 204) as shown in Figure 1. The large icosahedron cage made of As atoms is located around the electropositive La sites, which lack four-fold rotational symmetry. Between the cages, a transition metal ion called Os forms a cubic sublattice. The low temperature specific heat measurements \(C_{P}\) as a function of temperature at zero magnetic field are shown in the inset of Figure 2a. Using the equation \(C_{P}=\gamma T+\beta T^{3}\), the normal state heat capacity is fitted. From this we calculated the lattice contribution to the specific heat, \(\beta\) = 0.613 mJ/mol K\({}^{4}\), and the electronic parameter (Sommerfeld's coefficient), \(\gamma\) = 90.47 mJ/mol K\({}^{2}\). The Debye temperature is determined using the Debye model as \(\Theta_{D}=\left(\frac{12\pi^{4}nR}{5\beta}\right)^{1/3}\), where \(R\) is the universal gas constant, which is 8.314 J/mol-K, and \(n\) denotes the number of atoms in the compound (n = 17). The value of \(\Theta_{D}\) is thus calculated to be approximately 377 K, which agrees with the previous measurement [22, 25]. Figure 2a displays the low-\(T\) electronic specific heat \(C_{e}\) that was produced after the phonon contribution was taken into account. The heat capacity jump at \(T_{C}\) (\(\Delta C_{e}/\gamma T_{C}\)) is calculated to be 1.2, which is less than 1.43, the value expected for weak-coupling BCS superconductivity. The fit to the exponential temperature dependency of \(C_{e}(T)\) yields \(\Delta(0)=0.40\) meV, which is close to the 0.45 meV value obtained from the TF-\(\mu\)SR data analysis (discussed in section-B). Thus, the value of \(2\Delta(0)/k_{B}T_{C}=2.9\), which is less than the 3.53 anticipated for weak-coupling BCS superconductors. However, the linear fitting shown in Figure 2b shows that this material exhibits BCS behavior with a single isotropic gap. ### Superconducting Gap Structure: TF-\(\mu\)SR The pairing mechanism and superconducting gap structure of LaOs\({}_{4}\)As\({}_{12}\) were investigated by TF-\(\mu\)SR experiments down to 0.3 K. The TF-\(\mu\)SR asymmetry time spectra in the presence of 20 mT and 50 mT applied magnetic fields at temperatures above and below \(T_{C}\) are shown in Figures 3a-d. Because of the extra inhomogeneous field distribution of the vortex lattice generated inside the superconducting mixed state of LaOs\({}_{4}\)As\({}_{12}\), the spectra in Figure 3a,c in the superconducting state at 0.3 K demonstrate a greater relaxation.
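Before turning to the fits, the Debye temperature quoted in the specific-heat analysis above can be cross-checked by inserting the stated values of \(\beta\), \(n\) and \(R\) directly into the Debye relation. The short script below is purely illustrative and is not part of the original analysis.

```python
# Numerical cross-check of Theta_D = (12*pi^4*n*R/(5*beta))^(1/3), using the
# values quoted in the text above; illustrative only.
import math

beta = 0.613e-3   # lattice coefficient, J / (mol K^4)
n = 17            # atoms per formula unit of LaOs4As12
R = 8.314         # universal gas constant, J / (mol K)

theta_D = (12 * math.pi**4 * n * R / (5 * beta)) ** (1.0 / 3.0)
print(f"Theta_D ~ {theta_D:.0f} K")  # ~378 K, consistent with the ~377 K quoted above
```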
Using the Gaussian damped decay function, the asymmetry spectra were fitted [33, 34, 35] using the following equation, \[\begin{split} A_{\text{TF}}(t)=A_{\text{sc}}\exp{\left(-\frac{\sigma_{TF}^{2}t^{2}}{2}\right)}\cos(\gamma_{\mu}B_{\text{sc}}t+\phi)+\\ A_{\text{bg}}\cos(\gamma_{\mu}B_{bg}t+\phi).\end{split} \tag{1}\] The muon gyromagnetic ratio is \(\gamma_{\mu}/2\pi\) = 135.53 MHz/T, and the initial asymmetries of muons stopping on the sample and on the silver holder are \(A_{sc}\) and \(A_{bg}\), respectively (constant across the entire temperature range). The local fields B\({}_{sc}\) and B\({}_{bg}\) represent muons stopping on the sample and on the sample holder, respectively, whereas \(\phi\) represents the initial phase value and \(\sigma_{TF}\) represents the Gaussian depolarization rate. We calculated the values of \(A_{sc}\) = 76% and \(A_{bg}\) = 24% of the total asymmetries by fitting the 0.3 K data. When additional temperature data were analyzed, \(A_{bg}\) was kept constant and \(A_{sc}\) was found to be nearly temperature independent. The emergence of bulk superconductivity is indicated by an increase in the \(\sigma_{TF}\) rate as the system approaches the superconducting state. The superconducting contribution to the relaxation, \(\sigma_{sc}\), was determined using the formula \(\sigma_{\text{sc}}=\sqrt{\sigma_{\text{TF}}^{2}-\sigma_{\text{nm}}^{2}}\), where the nuclear magnetic dipolar contribution \(\sigma_{\text{nm}}\) is derived from high-temperature fits and is temperature independent. Figure 3e depicts the temperature dependence of \(\sigma_{sc}\) in several applied TF fields. Due to the low \(H_{c2}\) value, as seen in Figure 3f, \(\sigma_{\text{sc}}\) depends on the applied field.

Figure 2: **(a)** Low-temperature specific heat of the filled skutterudite compound LaOs\({}_{4}\)As\({}_{12}\), expressed as \(C_{e}\) vs \(T\) in a zero magnetic field. The specific heat data is shown in the inset. **(b)** The normalized electronic specific heat (\(C_{e}/\gamma T_{C}\)) versus inverse reduced temperature (\(T_{C}/T\)). The heat capacity data are from [25].

Brandt demonstrated that the London penetration depth \(\lambda_{L}(T)\) is linked to \(\sigma_{\text{sc}}\) for a superconductor with \(H_{\rm ext}/H_{\rm c2}\leq 0.25\) [36; 37].
Figure 3: **Left panel:** Asymmetry spectra of the TF-\(\mu\)SR in the low time region obtained in 20 mT and 50 mT applied magnetic fields at (**a–c**) \(T=0.3\) K (i.e., below \(T_{\rm C}\)) and (**b–d**) \(T=3.5\) K (i.e., above \(T_{\rm C}\)). **Right panel:** (**e**) The superconducting depolarization rate \(\sigma_{sc}\) as a function of temperature in the presence of an applied field of \(20\leq\mu_{0}\) H \(\leq 60\) mT. (**f**) The magnetic field dependence of the muon spin depolarization rate is shown for a range of different temperatures. The solid lines are the results of fitting the data using Brandt’s equation as discussed in Equation (2). equation [38; 39]: \[\frac{\sigma_{sc}(T)}{\sigma_{sc}(0)} = \frac{\lambda_{L}^{-2}(T)}{\lambda_{L}^{-2}(0)}\] \[= 1+\frac{1}{\pi}\int_{0}^{2\pi}\int_{\Delta(T)}^{\infty}\left( \frac{\delta f}{\delta E}\right)\times\frac{EdEd\phi}{\sqrt{E^{2}-\Delta(T, \phi)^{2}}},\] where \(f=[1+\exp(\frac{E}{k_{B}T})]^{-1}\) is the Fermi function. We take \(\Delta_{k}(T,\phi)=\Delta(T)\mathrm{g}_{k}(\phi)\), where we assume a temperature dependence that is universal \(\Delta(T)=\Delta_{0}\tanh[1.82[1.018(T_{\mathrm{C}}/T-1)]^{0.51}]\). The magnitude of the gap at 0 K is \(\Delta_{0}\), and the function g\({}_{k}\) denotes the gap's angular dependence, which is equal to 1 for one isotropic energy gap \(s\), 1 for two isotropic \(s+s\) wave energy gap and \(\cos(2\phi)\) for d-wave gap, where \(\phi\) is the azimuthal angle along the Fermi surface. Figure 4a illustrates our comparison of three distinct gap models: employing a single isotropic \(s\)-gap wave, a multigap \(s+s\)-wave gap, and a nodal \(d\)-wave gap. As seen in the figure, the superfluid density saturates at low temperatures, which is a characteristic of the \(s\)-wave model with a single gap. An isotropic single-band \(s-\)wave model with a gap value of 0.45 meV provides the best representation of the data, with a gap to \(T_{\mathrm{C}}\) ratio \(2\Delta(0)/k_{\mathrm{B}}T_{\mathrm{C}}=3.26\), which is less than the BCS weak-coupling limit (=3.53). On the other hand, the substantial rise in the \(\chi^{2}\) value puts the \(d\)-wave model and \(s+s\)-wave (multigap) model inappropriate for this system. A two-gap \(s+s\)-wave model of multiband superconductivity has been shown to be compatible with the temperature dependence of magnetic penetration depth of LaRu\({}_{4}\)As\({}_{12}\). The higher gap to \(T_{C}\) ratio computed in the \(s+s\)-wave scenario, \(2\Delta_{1}(0)/k_{\mathrm{B}}T_{C}=3.73\), is fairly comparable to the value of 3.53 for BCS superconductor in case of LaRu\({}_{4}\)As\({}_{12}\)[23]. For LaRu\({}_{4}\)As\({}_{12}\), 2 K specific phonon modes exhibit modest softening when compared to 20 K, demonstrating that the electron-phonon interactions causing the superconductivity have an audible impact on the vibrational eigenstates [23]. Using McMillan's relation, it is also possible to determine the electron-phonon coupling constant (\(\lambda_{\mathrm{e-ph}}\)) [40]: \[\lambda_{\mathrm{e-ph}}=\frac{1.04+\mu^{*}\ln(\Theta_{\mathrm{D}}/1.45T_{ \mathrm{C}})}{(1-0.62\mu^{*})\ln(\Theta_{\mathrm{D}}/1.45T_{\mathrm{C}})-1.04}. \tag{4}\] where \(\mu^{*}\) is the repulsive screened Coulomb parameter usually assigned as \(\mu^{*}\) = 0.13. The calculated value of the \(\lambda_{\mathrm{e-ph}}\) is 0.534. The London model is described as \(\lambda_{\mathrm{L}}^{2}=m^{*}c^{2}/4\pi n_{\mathrm{s}}e^{2}\). 
It connects the effective mass enhancement m\({}^{*}\) [\(=(1+\lambda_{e-ph})*m_{\mathrm{e}}\)], superconducting carrier density \(n_{\mathrm{s}}\) [\(=m^{*}c^{2}/4\pi e^{2}\lambda_{L}(0)^{2}\)], and London penetration depth. By employing the \(s\)-wave model, we determined the London penetration depth of \(\lambda_{L}(0)\) = 168 nm. The effective mass enhancement is calculated to be \(m^{*}=1.53~{}m_{\mathrm{e}}\), and the superconducting carrier density is predicted to be \(n_{\mathrm{s}}=1.53\times 10^{27}\) carriers m\({}^{-3}\). References [41; 42; 41] include a description of the computations in detail. The calculated values of \(\lambda_{L}(0)=240\) nm, \(n_{\mathrm{s}}=8.6\times 10^{27}\) carriers m\({}^{-3}\) and \(m^{*}=1.749~{}m_{\mathrm{e}}\) for LaRu\({}_{4}\)As\({}_{12}\)[23]. The fitted parameters for LaOs\({}_{4}\)As\({}_{12}\) and LaRu\({}_{4}\)As\({}_{12}\) (for comparison) are shown in Table 1. To explain the observed nature of the superconducting gap structures, it is important to comprehend the electronic structures of these compounds, which have been carried [28] and the results suggest that the single-band order parameter in LaOs\({}_{4}\)As\({}_{12}\) seems to be associated with the hybridized As-p and Os-d electronic character of the Fermi surface. On the other hand, the lack of hybridization for the disjointed Fermi surface of LaRu\({}_{4}\)As\({}_{12}\), may explain its multiband superconducting nature. ### Preserved Time Reversal Symmetry: ZF-\(\mu\)SR In order to determine if there is a spontaneous magnetic field present in the superconducting ground state, we conducted the ZF-\(\mu\)SR experiment. Figure 4b shows the time evolution of the asymmetry spectra for \(T\) = 0.3 K \(<T_{\mathrm{C}}\) and \(T\) = 3.5 K \(>T_{\mathrm{C}}\). The ZF-\(\mu\)SR spectra recorded in the normal and superconducting states show the same relaxations that can be found in overlapping ZF-\(\mu\)SR spectra, indicating that the superconducting state does not shows any spontaneous magnetic field or spin fluctuations. This result suggests that the time-reversal symmetry is preserved in LaOs\({}_{4}\)As\({}_{12}\) superconducting state. The strong resemblance of the ZF-\(\mu\)SR spectra (above and below T\({}_{C}\)) suggests that the time-reversal symmetry is also retained in the superconducting state of LaRu\({}_{4}\)As\({}_{12}\). In order to fit the ZF data, a Lorentzian function was used [43], \[G_{\mathrm{ZF}}(t)=A_{\mathrm{sc}}(t)\exp{(-\lambda_{ZF}t)}+A_{\mathrm{bg}}, \tag{5}\] where \(\lambda_{ZF}\) is the electronic relaxation rate, \(A_{\mathrm{sc}}\) stands for the sample asymmetry, \(A_{\mathrm{bg}}\) for the constant nondecaying background signal. The red line in Figure 4b indicates the fits to the ZF-\(\mu\)SR data. The ZF-\(\mu\)SR asymmetry data fitting parameters are \(\lambda_{ZF}\) = 0.754(4) \(\mu\)s\({}^{-1}\) at 0.3 K and \(\lambda_{ZF}\) = 0.744(5) \(\mu\)s\({}^{-1}\) at 3.5 K. No conclusive evidence of TRS breaking can be found since the relaxation rate change is within the error bar. ## IV Summary We employed TF-\(\mu\)SR to determine the gap symmetry of the superconducting state of LaOs\({}_{4}\)As\({}_{12}\). An isotropic BCS-type \(s\)-wave gap model explains the temperature dependence of the superfluid density. The gap to \(T_{\mathrm{C}}\) ratio, which was determined from the \(s\)-wave gap fit to the superfluid density, is 3.26; nonetheless, this is smaller than 3.53 expected for conventional BCS systems. 
The ZF-\(\mu\)SR spectra at 0.3 K and 3.5 K are strikingly similar, indicating that the time-reversal symmetry is intact. These results open up the possibility of using the compounds LaRu\({}_{4}\)As\({}_{12}\) and LaOs\({}_{4}\)As\({}_{12}\) as special research platforms for investigating filled skutterudites for the interplay between single- and multiband superconducting order parameters in conventional systems. ## Acknowledgements We thank T. Cichorek and J. Juraszek for providing LaOs\({}_{4}\)As\({}_{12}\) sample and the ascii heat capacity data. We would like to thank T. Cichorek, P. P. Ferreira, R. Lucrezi, J. Juraszek, C. Heil and L. T. F. Eleno for interesting discussions. AB expresses gratitude to the Science and Engineering Research Board for the CRG Research Grant (CRG/2020/000698 & CRG/2022/008528) and CRS Project Proposal at UGC-DAE CSR (CRS/2021-22/03/549). DTA appreciates the support provided by the Royal Society of London for the Newton Advanced Fellowship between the UK and China, the International Exchange between the UK and Japan, and EPSRC-UK (Grant number EP/W00562X/1). We thanks the ISIS Facility for the beam time, RB1520431 [44].
2310.18017
Some properties of plasma surrounding brown dwarfs
Recently, brown dwarfs have emerged as a new topic in astrophysical studies. These objects are intermediate between solar-type stars and giant gaseous planets. In this article, the analogies between brown dwarfs and the planet Jupiter are considered with a focus on the surrounding plasma. I consider the magnetohydrodynamic version of the Rayleigh-Taylor instability (the so-called ``interchange instability'') as a minimal model of the expansion of the plasma disc surrounding Jupiter. By comparing the theoretical prediction for the radial expansion rate of the disc with the observations, I quantitatively confirm the existing qualitative result that the Rayleigh-Taylor instability predicts an expansion that is too rapid. Therefore, another mechanism must operate in the realistic plasma disc that slows down the expansion. I suggest that similar mechanisms operate in the observed radiation belts of brown dwarfs.
Dmitry Kobyakov
2023-10-27T09:47:13Z
http://arxiv.org/abs/2310.18017v1
# Some properties of plasma surrounding brown dwarfs ###### Abstract Recently, brown dwarfs have emerged as a new topic for the astrophysical studies. These objects are intermediate between solar-type stars and giant gaseous planets. In this article, the analogies between brown dwarfs and the planet Jupiter are considered with a focus on the surrounding plasma. I consider the magnetohydrodynamic version of the Rayleigh-Taylor instability (or so called "interchange instability") as a minimal model of the expansion of the plasma disc surrounding Jupiter. By comparing the theoretical prediction for the radial expansion rate of the disc with the observations I quantitatively confirm the existing qualitative result, which predicts that the Rayleigh-Taylor instability provides too quick expansion. Therefore, in the realistic plasma disc yet another mechanism must operate which slows down the expansion. I suggest that similar mechanisms take place in the observed radiation belts of brown dwarfs. _Introduction_. Brown dwarf is a stellar-type celestial body with mass \(M_{\rm s}\) in the range \(13M_{\rm lup}<M_{\rm s}<80M_{\rm lup}\), or, in solar masses, \(1.241\times 10^{-2}M_{\sun}<M_{\rm s}<7.636\times 10^{-2}M_{\sun}\), where the lower limit corresponds to the minimum mass suitable for the stellar deiterium combustion and the upper limit corresponds to the minimum mass suitable for the stellar hydrogen combustion. Here, \(M_{\rm lup}=1.8913\times 10^{30}\) g is the Jupiter mass. The spectral type of brown dwarf is in the range M7-M9, L, T, Y. Its temperature is between 300 and 2500 K. The dipolar magnetic field on the surface is typically of the order of \(10^{3}-10^{4}\) G. The possible emission types are radio, infrared, optical, ultraviolet and X-ray [1; 2; 3; 4; 5; 6]. Observations [4; 5] of the brown dwarf 2MASS J18353790+3259545 (equivalently denoted as LSR J1835+3259) with mass \(\sim 77M_{\rm Jup}\), radius \(\sim 1.07R_{\rm Jup}\) and rotation period \(1.008\times 10^{4}\) s, have revealed a radiation belt surrounding the star. The radiation belt has radius \(\sim 17R_{\rm lup}\), where \(R_{\rm lup}=7.1492\times 10^{9}\) cm is Jupiter's radius [4]. The existence of the radiation belt, relatively strong magnetic field and rapid rotation observed from LSR J1835+3259 indicates that there are analogies between the radio emission mechanisms in its magnetosphere and the physics of the radiation belt of Jupiter. At present, the origin of the plasma in the radiation belt of LSR J1835+3259 is unclear but it is likely that in analogy with the Jupiter-Io system there is a planetary satellite [4]. An elementary physical picture of the radiation belt is based on the model of the uniform (solid-like) rotation of the magnetosphere. The mechanism maintaining the rotation of the plasma surrounding a rotating magnetic dipole with electrically conducting surface has been considered in [7]. The Alfven radius defines the radial distance from the star center to the point where the configuration of the magnetic field lines changes from closed to open (Fig. 1). The black dot in Fig. 1 is the source of plasma (Io in case of Jupiter's magnetosphere). With \(R_{\rm K}<R_{\rm A}\), the magnitosphere is centrifugal [8], where \(R_{\rm K}=(GM_{\rm s}/\Omega^{2})^{1/3}\) is the Kepler radius (Fig. 1), \(\Omega\) is the rotational angular frequency. Formation of a plasma disc (Figs. 1,2) as a result of the magnetosphere rotation has been first shown for the magnetic star \(\sigma\) Ori E [9]. 
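As a quick numerical illustration of the centrifugal-magnetosphere condition \(R_{\rm K}<R_{\rm A}\), the Kepler radius of LSR J1835+3259 can be evaluated from the stellar parameters quoted above. The sketch below is only an order-of-magnitude check and makes no attempt to model \(R_{\rm A}\) itself.

```python
import numpy as np

G = 6.674e-8                  # gravitational constant, cgs
M_jup = 1.8913e30             # Jupiter mass, g (value used above)
R_jup = 7.1492e9              # Jupiter radius, cm

M_s = 77.0 * M_jup            # mass of LSR J1835+3259
P_rot = 1.008e4               # rotation period, s
Omega = 2.0 * np.pi / P_rot   # rotational angular frequency, rad/s

R_K = (G * M_s / Omega**2) ** (1.0 / 3.0)
print(f"R_K = {R_K:.2e} cm = {R_K / R_jup:.1f} R_Jup")
# R_K comes out near 4 R_Jup, well inside the observed radiation belt at ~17 R_Jup,
# so the belt lies in the centrifugal regime provided R_A exceeds R_K.
```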
The same mechanism leads to the formation of Jupiter's plasma disc. The standard model of the radial expansion of Jupiter's plasma disc is the convective (or so called interchange) plasma instability of the plasma disc [10]. However, there remains an open question [10]: _why is the observed expansion of the plasma disc is significantly slower than the expansion rate predicted theoretically in the framework of the interchange plasma instability?_ Figure 1: The notion of the "interchange mode" has appeared in the beginning of studies of the laboratory plasma. It implies that the plasma and the confining magnetic field switch their spatial locations as a result of action of the external forces. In dealing with the interchange plasma instability I will follow the book [11]. The problem of the interchange plasma instability is analogous to the Rayleigh-Taylor instability. Figure 3 shows a schematic picture of the plasma slab in a uniform external force field supported by the magnetic field. Linearization of the equations of motion of ideal isothermic plasma with a perturbation of a fluid element \(\mathbf{\xi}\), the external free-fall acceleration \(\mathbf{g}=(-g,0,0)\) and the perturbation wave vector \(\mathbf{k_{0}}=(0,k_{y},k_{z})\), (Fig. 3), leads to the resulting potential energy \(W\) of the system: \[W=\frac{\xi_{x}(0)}{2k_{0}}\left[\frac{\left(\mathbf{k_{0}}\cdot\mathbf{B_{0}} \right)^{2}}{\tanh k_{0}a}-\rho_{0}k_{0}g+\frac{\left(\mathbf{k_{0}}\cdot \mathbf{\hat{B_{0}}}\right)^{2}}{\tanh k_{0}b}\right]. \tag{1}\] Equation (1) shows that (i) the external force \(\mathbf{g}\) (\(g\geq 0\)) always destabilizes the plasma, (ii) the magnetic induction may stabilize the plasma. In case when the plasma is inhomogeneous along \(x\) axis, the instability is described by the equation found for the first time in [12]. If the conditions \(\mathbf{B_{0}}\times\mathbf{\hat{B_{0}}}=0\) and \(\mathbf{B_{0}}\cdot\mathbf{\hat{B_{0}}}>0\) are satisfied, the dispersion equation has the form \[\omega^{4}-\Omega_{1}^{4}+\Omega_{2}^{4}=0, \tag{2}\] where \(\Omega_{1}^{4}=\frac{b^{2}+2c^{2}}{b^{2}+c^{2}}k_{\parallel}^{2}b^{2}+\frac{k _{0}^{2}}{k_{0}^{2}+q^{2}}N_{m}^{2};\Omega_{2}^{4}=\frac{c^{2}}{b^{2}+c^{2}}k_ {\parallel}^{2}b^{2}\left(k_{\parallel}^{2}b^{2}+\frac{k_{\parallel}^{2}}{k_ {0}^{2}+q^{2}}N_{B}^{2}\right);\)\(c=\gamma[p(x=0)]/[\rho(x=0)];\)\(b=B_{0}/\sqrt{\rho(x=0)};\)\(\gamma\) is the adiabatic index; \(k_{\parallel}\) is the component of \(\mathbf{k_{0}}\) which is parallel to \(\mathbf{B_{0}}\); \(\xi\sim e^{iqx}\), \(qL\gg 1\), \(L=(p+B^{2}/2)/\rho g\) is the size of equilibrium variations. The frequencies (Brunt-Vasilana and its magnetic modification [11]) are given by \[N_{b}^{2}=-\frac{1}{\rho}\left(\rho^{\prime}g+\frac{\rho^{2}g^{2}}{\gamma p} \right),\quad N_{m}^{2}=-\frac{1}{\rho}\left(\rho^{\prime}g+\frac{\rho^{2}g^ {2}}{\gamma p+B^{2}}\right), \tag{3}\] where \(\rho^{\prime}\equiv\partial_{x}\rho|_{x=0}\). The relation between the growth rates is defined by four quantities: \[\Gamma=-\frac{\rho^{\prime}}{\rho}g,\quad\Gamma_{B}=\frac{\rho g^{2}}{\gamma p },\quad\Gamma_{m}=\frac{\rho g^{2}}{\gamma p+B^{2}},\quad\Gamma_{0}=\frac{ \Gamma_{m}^{2}}{\Gamma_{B}}. 
\tag{4}\] It has been known that (i) the plasma is stable when \(\Gamma_{B}\leq\Gamma\); (ii) at \(\Gamma_{0}\leq\Gamma<\Gamma_{B}\) the most unstable mode is the quasiinterchange mode (\(k_{\parallel}\neq 0\)) and its growth rate is \(\omega^{2}=-\frac{\rho g^{2}}{B^{2}}(1-\sqrt{\Gamma/\Gamma_{B}})^{2}\); at \(\Gamma\leq\Gamma_{0}\) the most unstable is the interchange mode with the growth rate \(\omega^{2}=\Gamma-\Gamma_{m}\). _Numerical results._ For Jupiter's plasma disc, the parameters entering Eq. (2) are known from observations, and thus, the most unstable mode can be easily found. Using figure 4 of [13] I find the characteristic distance of the outer edge of the plasma disc (\(x=0\)) from Jupiter's center (\(x=x_{2}\)) (Fig. 2): \[x_{2}\approx 20R_{\rm Jup}. \tag{5}\] Figure 3: The mass density of the electron-ion plasma \(\rho=Am_{p}n(x=0)\), where \(A\sim 48\) is the atomic mass (assuming that the sulfur oxide is the ion component of plasma), \(m_{p}\) is the proton mass, \(n\propto(x_{2}-x)^{-3}\), from figure 4 of [13] \[n(x=0)\approx 1\ {\rm cm}^{-3}, \tag{6}\] \[g\approx r_{2}\Omega_{\rm Jup}^{2}=4.422\times 10^{3}\ {\rm cm}\,{ \rm s}^{-2} \tag{7}\] where \(\Omega_{\rm Jup}=1.759\times 10^{-4}\ {\rm rad}\ {\rm s}^{-1}\). From these parameters I find \[\Gamma=-9.277\times 10^{-8}\ {\rm s}^{-2},\quad\Gamma_{B}=6.821 \times 10^{-4}\ {\rm s}^{-2}, \tag{8}\] \[\Gamma_{m}=9.9\times 10^{-7}\ {\rm s}^{-2},\quad\Gamma_{0}=1.437 \times 10^{-9}\ {\rm s}^{-2}. \tag{9}\] It follows from Eqs. (8)-(9) that the case \(\Gamma<\Gamma_{0}\) (since \(\Gamma<0\)) is realized. Therefore, the expansion of the plasma disc of Jupiter should occur due to the interchange mode with the characteristic growth rate from Eq. (2): \[\tau_{theory}\sim 1.056\times 10^{3}\ {\rm s}. \tag{10}\] This result implies that the theoretical prediction for the growth rate is significantly smaller than it is expected from observations. The latter has the order of 20-80 days [10], or in case of 20 days, \[\tau_{observ}\sim 1.728\times 10^{6}\ {\rm s}. \tag{11}\] _Conclusions._ The quantitative estimate for the expansion rate of Jupiter's plasma disc, Eq. (10), agrees with the qualitative prediction known from the literature [10]. Specifically, the theory predicts a growth rate, Eq. (10), which is a few orders of magnitude smaller than it is inferred from the observations, Eq. (11). In case when a brown dwarf possess a plasma disc, the analogous situation is expected. Such a discrepancy between the theory and observations indicates that a significant piece of theoretical understanding of the plasma surrounding those celestial bodies is missing. In the future work it is therefore necessary to identify possible physical mechanisms, which are responsible for the practical increase of the duration of the loss of matter. It is necessary to analyze the following possible reasons. (i) Nonzero shear of the magnetic field, which has not been included in the linear analysis in Eq. (2). (ii) Account for the Birkeland currents and the corresponding electric current in the plasma disc. (iii) The action of the Kelvin-Helmholtz instability on the nonlinear stage of the interchange instability found in Eq. (2). _Acknowledgements._ I thank P. A. Bespalov for helpful comments and discussions. This research was supported by the Russian Science Foundation under grant No. 20-12-00268. _Translated by the author._
2308.02205
GEMRec: Towards Generative Model Recommendation
Recommender Systems are built to retrieve relevant items to satisfy users' information needs. The candidate corpus usually consists of a finite set of items that are ready to be served, such as videos, products, or articles. With recent advances in Generative AI such as GPT and Diffusion models, a new form of recommendation task is yet to be explored where items are to be created by generative models with personalized prompts. Taking image generation as an example, with a single prompt from the user and access to a generative model, it is possible to generate hundreds of new images in a few minutes. How shall we attain personalization in the presence of "infinite" items? In this preliminary study, we propose a two-stage framework, namely Prompt-Model Retrieval and Generated Item Ranking, to approach this new task formulation. We release GEMRec-18K, a prompt-model interaction dataset with 18K images generated by 200 publicly-available generative models paired with a diverse set of 90 textual prompts. Our findings demonstrate the promise of generative model recommendation as a novel personalization problem and the limitations of existing evaluation metrics. We highlight future directions for the RecSys community to advance towards generative recommender systems. Our code and dataset are available at https://github.com/MAPS-research/GEMRec.
Yuanhe Guo, Haoming Liu, Hongyi Wen
2023-08-04T08:45:02Z
http://arxiv.org/abs/2308.02205v2
# Towards Personalized Prompt-Model Retrieval for Generative Recommendation ###### Abstract. Recommender Systems are built to retrieve relevant items to satisfy users' information needs. The candidate corpus usually consists of a finite set of items that are ready to be served, such as videos, products, or articles. With recent advances in Generative AI such as GPT and Diffusion models, a new form of recommendation task is yet to be explored where items are to be created by generative models with personalized prompts. Taking image generation as an example, with a single prompt from the user and access to a generative model, it is possible to generate hundreds of new images in a few minutes. How shall we attain personalization in the presence of "infinite" items? In this preliminary study, we propose a two-stage framework, namely _Prompt-Model Retrieval_ and _Generated Item Ranking_, to approach this new task formulation. We release GEMRec-18K, a prompt-model interaction dataset with 18K images generated by 200 publicly-available generative models paired with a diverse set of 90 textual prompts. Our findings demonstrate the promise of generative model recommendation as a novel personalization problem and the limitations of existing evaluation metrics. We highlight future directions for the RecSys community to advance towards generative recommender systems. Our code and dataset are available at [https://github.com/MAPS-research/GEMRec](https://github.com/MAPS-research/GEMRec). Recommender Systems, Generative Recommendation, Image Generation Authors' address: Yuanhe Guo, [email protected]; Haoming Liu, [email protected]; Hongyi Wen, [email protected], NYU Shanghai, China. ## 1. Introduction Modern Recommender Systems are built on the concept of information retrieval, where the main objective is to fetch the most relevant items from a large corpus for end-users and help them discover new interests. This type of personalization task can be referred to as _Retrieval-based Recommendation_. Inspired by recent advances of generative models in various application domains such as image (Hongyi et al., 2018; Guo et al., 2019; Guo et al., 2019), language (Golovolov et al., 2013; Liu et al., 2019) and audio (Golovolov et al., 2013; Liu et al., 2019), we envision a new form of recommendation task to emerge: items are to be created by generative models thus the size of the candidate corpus is "infinite"; users have individual preferences towards generated items and even the generative models themselves. We refer to this novel task formulation as _Generative Recommendation_ throughout the paper. A key challenge to interact with generative models at scale is the huge time and computational costs - these large pre-trained models need to be deployed on GPUs with enough capacities. To elicit preferences towards these models, users need to check the generated items case by case in order to get a sense of what each model is specialized at. Such a workflow is infeasible and unsustainable given the scale of the available generative models. A possible solution is to identify a set of relevant models for users' personalized prompts, i.e., _Prompt-Model Retrieval_. With a smaller set of retrieved models, users are able to interact more intensively with their generated items and express feedback that can be used train a ranking model to learn more nuanced user preferences, i.e., _Generated Item Ranking_. 
As a preliminary study to illustrate the challenges and opportunities for such novel tasks, we focus on image generation models due to their variety and availability in large-scale from the web. For example, as of now there are more than 5K open-source text-to-image models available on HuggingFace. Platforms such as Midjourney and Civitai have attracted millions of users to upload images generated by publicly-available models as well as fine-tune specialized models. These numbers keep increasing rapidly and are expected to reach the scale that is in need of personalized recommendations in a relatively short time. The contributions of this work are threefold: * We propose a two-stage framework to approach the _Generative Model Recommendation_ problem. Our framework allows end-users to effectively explore a diverse set of generative models to understand their expressiveness. It also allows system developers to elicit user preferences for items generated from personalized prompts (Sec. 3). * We release GEMRec-18K, a dense prompt-model interaction dataset that consists of 18K images generated by pairing 200 generative models with 90 prompts collected from real-world usages, accompanied by detailed metadata and generation configurations. This dataset builds the cornerstone for exploring _Generative Recommendation_ and can be useful for other tasks related to understanding generative models (Sec. 4). * We take the first step in examining evaluation metrics for personalized image generations and identify several limitations in existing metrics. We propose a weighted metric that is more suitable for the task and opens up directions for future improvements in model training and evaluations (Sec. 5). ## 2. Related Work Personalized Text-to-Image GenerationText-to-image generation is a typical multi-modal machine learning task that aims to generate images according to textual inputs. Classical approaches leverage GANs (Goodfellow et al., 2016) and VAEs (K ### Prompt-Model Retrieval The main task is to understand user preference for the compositions and styles of candidate models and to retrieve the most preferable ones from a large corpus. To demonstrate this process, we built an interactive web interface to display images generated by candidate models and basic information such as model names and version IDs. Through this interface, users can easily examine model outputs from pre-defined prompts (Fig. 1 left). To facilitate navigation, we implemented three ranking baselines, including popularity-based, relevance-based, and distinctiveness-based. Images shown in the gallery are ranked by the weighted sum of the normalized scores of the three baselines. The calculation of these scores is explained in Sec. 5.2 in detail. We also set a threshold on distinctiveness, which can filter out some highly biased models that generate images that are not relevant to the prompts. We allow users to experiment with these parameters freely according to their own preferences. We expect to extract coarse user preference towards generative model candidates from positive user feedback such as selection of a model. With enough user feedback data, state-of-the-art algorithms such as (Sutskever et al., 2017) can be applied for personalized prompt-model retrieval. ### Generated Item Ranking After selecting a small candidate set of models from the retrieval stage, the objective of this ranking stage is to accurately learn the ranking of models using pairwise feedback from users on the generated images. 
The interface consists of three components (Fig. 1 right): The sidebar on the left indicates customizable parameters for image generation, such as prompts and samplers; the middle module displays images corresponding to the user prompt that are generated by candidate models from the retrieval stage, where users can change the order of images through dragging to indicate their preferences; at the end of the session, statistics of users' overall preference and tag-wise model preferences will be presented on the dashboard. Such user preference data can be leveraged to train Learning-to-Rank (LTR) algorithms such as Bayesian Personalized Ranking (Sutskever et al., 2017) and to develop novel ranking algorithms for _Generative Model Recommendation_. ## 4. The Gemrec-18k Dataset To validate the feasibility of our proposed framework, we collected and analyzed 90 prompts and 200 generative models from publicly-available sources, resulting in a prompt-model interaction dataset of 18K images and the associated metadata, namely the **GE**nerative **M**odel **Rec**ommendation (**GEMRec**) Dataset. The model checkpoints were downloaded Figure 1. User interface of the two-stage interactive framework. **Left:** Images generated by prompt-model pairs during the _Prompt-Model Retrieval_ stage. Users can adjust weights on different metrics that affect the ranking of models; **Right:** Collection of users’ feedback on generated images from their preferred models and personalized prompts in the _Generated Item Ranking_ stage. from Civitai 1, a popular platform for publicly sharing images and generative models find-tuned on Stable Diffusion. We randomly sampled a subset of 197 models from the full model set according to the popularity distribution (i.e., download counts). Examples of model metadata are shown in Table. 1. In addition, we also added three Stable Diffusion model checkpoints (v1.4, v1.5, v2.1) accessed from HuggingFace as the baselines for image generation. All the model checkpoints were converted to the same format to fit the diffusers pipeline 2 for conducting batch image generations. To make the generated images diverse and representative of real-world usage, we consider prompts from three sources: 60 prompts were sampled from Parti Prompts [30], where the original dataset includes 1.6K English prompts across 12 categories, and we randomly sampled 5 prompts from each category; 10 prompts were sampled from Civitai with the most user interactions; we also handcrafted 10 prompts with detailed descriptions on the subjects of images following prompting guide from DreamStudio 3, and then extended them to 20 by creating a shortened and simplified version following prompting tips from Midjourney 4. To sum up, we curated a set of 90 prompts that cover diverse domains and utility. Examples of the prompt set are presented in Table. 2. Footnote 1: [https://github.com/civitai/civitai/wiki/REST-API-Reference](https://github.com/civitai/civitai/wiki/REST-API-Reference) Footnote 2: [https://huggingface.co/docs/diffusers/api/pipelines/overview](https://huggingface.co/docs/diffusers/api/pipelines/overview) Footnote 3: [https://beta.dreamstudio.ai/prompt-guide](https://beta.dreamstudio.ai/prompt-guide) Footnote 4: [https://docs.midjourney.com/docs/prompts](https://docs.midjourney.com/docs/prompts) To simulate a large corpus in which a non-expert user can hardly identify the most relevant models to the prompt, we generate an image for each prompt-model pair (18K images in total). 
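For reference, a minimal sketch of how a single prompt-model pair might be rendered with the diffusers pipeline is shown below. The checkpoint path is a placeholder, the prompt and negative prompt are abridged examples from Table 2, and the scheduler, resolution, and guidance settings follow the defaults described next.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Placeholder path to one of the converted checkpoints
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/converted-checkpoint", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "oil painting of a violet flower in a jar",                  # abridged prompt
    negative_prompt="disfigured, blurry, bad art, low quality",  # abridged
    height=768, width=512,                                       # portrait layout
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("sample.png")
```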
The resolution of the generated image is \(512\times 768\) for 'art', 'animal', 'people' categories, and \(768\times 512\) for the others. We adopt the Euler Ancestral Discrete Scheduler and a CFG scale of 7.0 by default. Note that the dataset can be easily scaled up using our batch conversion and generation scripts. Through the interface for the retrieval stage (Fig. 1 left), users can interactively retrieve a set of preferable models based on the offline generation results ranked by retrieval metrics (Sec. 5). We believe that this \begin{table} \begin{tabular}{l l l l} \hline \hline Model Name & Download Count & Model Tags & Trained Words \\ \hline CyberRealsitic & 102076 & photorealistic, highly detailed, base model, beautiful, photorealism, realistic & - \\ kisaragi\_mix & 12011 & 3d, person, photorealistic, mix, base model, model, photo, japanese, realistic & - \\ DreamlabsOil\_v2 & 1770 & renaissance, medieval, oil painting, style, painting, oil pastel, oil & oil painting style \\ Nothing Clay Mann & 316 & anime, base model, western, clay mann & Clay Mann \\ djz Arizona Sunset & 39 & sunset, style, djz, arizona & arizonasunset \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of popular generative models from Civitai. Other metadata has been omitted for simplicity. \begin{table} \begin{tabular}{l l l} \hline \hline Source & Tag & Prompt \\ \hline \multirow{2}{*}{Parti-prompts} & \multirow{2}{*}{food} & milk pouring from a glass into a bowl & \\ & & a glass of orange juice with an orange peel stuck on the rim & \\ \hline \multirow{2}{*}{Parti-prompts} & \multirow{2}{*}{illustration} & a square with an angry face & \\ & & a red cube on top of a blue cube & \\ \hline \multirow{2}{*}{Parti-prompts} & \multirow{2}{*}{people} & an old man with a long grey beard and green eyes & \\ & & a man eating a glared donut and a woman eating a chocolate cake & \\ \hline \multirow{2}{*}{Parti-prompts} & \multirow{2}{*}{scenery} & a tidal wave approaching a coastal road & \\ & & a fall landscape with a small octage next to a lake & \\ \hline \multirow{2}{*}{civitai} & \multirow{2}{*}{people} & colorwater, negative space, girl, woman, lips, trees, flowers, birds, bamboo, lakes, Hangzhou \\ \hline \multirow{2}{*}{original} & \multirow{2}{*}{art} & oil painting, art, painting, violet flower, jar, table, white cloth, brush, classic, impressionism, artwork by Monet \\ \hline \multirow{2}{*}{original-extended} & \multirow{2}{*}{art} & oil painting of a violet flower in a jar, resting on a table covered with a white cloth a classic and impressionistic \\ & & artwork reminiscent of Monet’s style & \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of tags and prompts we used for batch image generation. Some standardized portions of the prompt (e.g., ”masterpiece, best quality, best shadow, intricate”) and negative prompts (e.g., ”disfigured, blurry, bad art, lowers, low quality, weird colors, duplicate, NSFW”) have been omitted for simplicity. All the prompts presented in this paper follow the same fashion. dataset with dense prompt-model interactions can serve as a cornerstone for advancing personalized generative model recommendation. Besides, this dataset can be used to investigate the correlations between vast generative models and their generation results. ## 5. Results By performing batch image generations on the set of prompts and models from our dataset, we observe distinctive patterns that can be instrumental in realizing _Generative Model Recommendation_. 
In this section, we present our findings through qualitative and quantitative analysis of the generated images. We investigated the heterogeneity of generated images on different prompt domains and identified a few limitations of existing metrics in evaluating the image quality and relevance to the prompts. On top of that, we propose a simple yet effective metric to retrieve candidate images and their associated models by balancing the relevance and diversity of the generated images. ### Heterogeneity of Generated Images We closely examine the diversity of generated images across different prompt domains. In particular, we examined the cosine similarities between the image embeddings extracted from clip-vit-large-patch14 (Krizhevsky et al., 2017) under the same prompt. As shown in the left portion of Fig. 2, the brighter regions in the heat maps suggest that the associated models generate homogeneous images. Taking heat map (c) as an example, most models simply output a normal "sport car" and fail to capture "the style of Dali", whereas the darker rows and columns correspond to the models that are not following this mainstream fashion. Some generated images can be found in col.(c) of Fig. 3. In addition, we compute the Average Pairwise Similarity (APS) and standard deviations by each prompt tag and plot them in a bar chart (Fig. 2 right). Overall, the model candidates in our dataset tend to generate similar images for concrete physical objects, such as vehicles and food. In contrast, the models exhibit various compositions and styles for domains such as illustration, abstract concepts, or people. Note that it is crucial to perform effective model retrieval in both scenarios, as we can reduce homogeneous results for the former and filter out low-quality results for the latter. Figure 2. **Left: Similarity heat maps of the generated images from 200 models. Darker heat maps and higher ranks indicate more diverse images. Models are indexed by their download counts, from high to low. Prompts for the four heat maps: **(a)** The words ‘KEEP OFF THE GRASS’ [tag: illustration]; **(b)** A bunch of laptops piled on a sofa [tag: architecture]; **(c)** A painting of a sport car in the style of Dali [tag: art]; **(d)** Red car, bright, motor vehicle, ground vehicle, sports car, vehicle focus, road, need for speed, moving, wet, cyberpunk, tokyo, neon lights, drift [tag: vehicle]. **Right**: Diversity of generated images across all categories of prompts. Pairwise cosine similarities of the associated 200 images for each prompt were averaged and then aggregated within each category. A lower score means more diverse images in that category. The error bar reflects the deviations of different prompts in the same category. ### Limitations of Existing Metrics Similar to the item retrieval task in a classical recommender system, a proper evaluation metric is needed to evaluate the accuracy and diversity of generative models with descent image generation capabilities. To do so, one natural idea is to start with popular models, where the **popularity** can be directly measured by the download counts and other metadata. However, popular models usually have their own specializations and may not generalize well to other domains. This can result in generating homogeneous (Fig. 3, row1 col.(b)) or even inaccurate (Fig. 3, row1 col.(c)) candidate images. Another potential solution is to retrieve by **accuracy**, namely the consistency between prompts and images. 
CLIP-Score[(9)] is a classic metric that measures the cosine similarity between an image and a text prompt using CLIP[(19)], and we adopt it as an accuracy metric in this work. From the second row of Fig. 3, we can see that the consistency between prompts and images has been ensured, yet this metric still suffers from homogeneity, such as the style in col.(a) and the viewing perspective in col.(b). To make the retrieval results diverse enough for recommendations, we propose a metric of **distinctiveness** via the mean Cosine similarity (m**Cos**) within the image set for a single prompt. Formally, the m**Cos** metric can be computed as: \[\texttt{m}\texttt{Cos}(i)=\frac{1}{|\mathcal{I}|-1}\sum_{j\in\mathcal{I}\setminus \{i\}}\texttt{CosineSimilarity}(i,j), \tag{1}\] where \(i\) is the image of interest, \(\mathcal{I}\) is the image set for the same prompt, and CosineSimilarity(\(i,j\)) returns the cosine similarity for the embeddings of image \(i\) and \(j\). In general, an image falls into the mainstream style when the m**Cos** Figure 3. Comparison of the generated images ranked by different evaluation metrics. Prompts for each column: **(a)** 1girl, solo, closeup portrait, standing, pink hair, long hair, hair ornament, floating hair, blue eyes, looking at viewer, looking away, smile, closed mouth, floral print milkmaid dress, beautiful background [tag: people]; **(b)** A fall landscape with a small octtage next to a lake [tag: scenery]; **(c)** A painting of a sport car in the style of Dali [tag: illustration]. metric is closer to 1, whereas it is more distinct when the value is closer to -1. As can be seen from the retrieval results in the third row of Fig. 3, the mCos metric can easily retrieve Out-Of-Distribution (OOD) images at the tail, including desirable images (col.(b) img3 & col.(c) img1), inaccurate images (col.(a) img2 & col.(b) img2), and scribbled images (col.(b) img2). To sum up, all the aforementioned metrics have their own pros and cons - using any of these alone cannot give satisfactory results. Hence, we propose a simple yet effective way to aggregate multiple evaluation metrics. ### A Scalable Metric for Prompt-Model Retrieval We propose the Generative Recommendation Evaluation Score (GRE-Score) as follows: \[\text{GRE-Score}=\sum_{k}\lambda_{k}\tilde{q_{k}}, \tag{2}\] where \(k\) is the number of evaluation metrics and \(\tilde{q_{k}}\) is the normalized score for the \(k\)-th metric. Note that all the aggregated scores are expected to be the larger the better. Through metric ensemble, the drawbacks of each metric can be alleviated, resulting in a more comprehensive and reliable evaluation of image quality. For our case, we compute the GRE-Score by accounting for the accuracy, distinctiveness, and popularity, through the normalized CLIP-Score, mCos, and log10(download_count), respectively. We empirically set \(\mathbf{\lambda}=(1.0,0.8,0.2)\) by default, and the overall performance can be further boosted with more sophisticated hyper-parameter tuning based on images, model metadata, and user preferences. Notably, the set of images has been filtered by NSFW scores to avoid inappropriate content. As shown by rows 4-6 in Fig. 3, the proposed GRE-Score metric is able to retrieve a diverse set of high-quality images (Fig. 3, row4), leaving homogeneous, inaccurate, and scribbled images in the torso (Fig. 3, row5) and tail (Fig. 3, row6). 
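To make these metrics concrete, the sketch below computes mCos from the per-prompt image embeddings and aggregates it with accuracy and popularity into the GRE-Score. The min-max normalization and the sign flip on mCos (so that larger values reward distinctiveness) are our reading of details left implicit above, and all inputs other than the download counts from Table 1 are synthetic stand-ins.

```python
import numpy as np

def mcos(embeddings: np.ndarray) -> np.ndarray:
    """Mean pairwise cosine similarity of each image against every other image
    generated for the same prompt (mCos). Values near 1 flag mainstream images;
    lower values flag distinctive ones."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                                 # cosine-similarity matrix
    n = sim.shape[0]
    return (sim.sum(axis=1) - np.diag(sim)) / (n - 1)       # drop self-similarity

def gre_score(scores: dict, weights: dict) -> np.ndarray:
    """Weighted sum of min-max-normalized per-metric scores (larger is better)."""
    total = 0.0
    for name, q in scores.items():
        q = np.asarray(q, dtype=float)
        total = total + weights[name] * (q - q.min()) / (q.max() - q.min() + 1e-12)
    return total

# Toy example with five models. Download counts are taken from Table 1; the
# CLIP-Scores and embeddings are synthetic stand-ins. mCos is negated so that
# larger values reward distinctiveness (an assumption about an implicit detail).
rng = np.random.default_rng(0)
clip_scores = np.array([0.31, 0.28, 0.35, 0.30, 0.27])
mcos_vals = mcos(rng.normal(size=(5, 512)))
downloads = np.array([102076, 12011, 1770, 316, 39])

scores = {"accuracy": clip_scores,
          "distinctiveness": -mcos_vals,
          "popularity": np.log10(downloads)}
weights = {"accuracy": 1.0, "distinctiveness": 0.8, "popularity": 0.2}
print("models ranked by GRE-Score:", np.argsort(-gre_score(scores, weights)))
```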
We believe that GRE-Score is a pioneering metric that is more suitable for the prompt-model retrieval task, and it is scalable for future improvements in model training and evaluations. ## 6. Conclusions and Future Work In this work, we propose a framework for personalized prompt-to-model retrieval as a step towards _Generative Recommendation_. We break down the task into two stages: (1) Generative model retrieval from a large corpus, and (2) fine-grained generated item ranking. Through an interactive interface and analyzing a real-world prompt-to-image dataset, we observe the heterogeneity of the generated images across various domains, which lays the foundation for prompt-model recommendations. In addition, we highlight the inherent limitations of current metrics used to assess generated images quality and argue for user-centric evaluation metrics for achieving personalized image generations. Our work opens up a few directions for future work: First of all, the scale of the GEMRec dataset can be extended. We plan to compile a more comprehensive set of prompts and generative models, such as those trained with LoRAs (Kang et al., 2019) and different combinations of samplers and hyper-parameters. Secondly, we aim to conduct user studies to understand how end-users interact with our proposed framework and to collect large-scale user preference data for training personalized retrieval and ranking models as proposed in Sec. 3. Moreover, an important challenge is to standardize the evaluation of generative recommendations. Existing accuracy and diversity based metrics might not be enough to capture users' individual aesthetic tastes. We propose a generic evaluation metric to mitigate this issue, but we leave a more rigorous study of how these metrics align with user preference for future work. Last but not least, although this study focuses on image generation, the scope of this work shall generalize to other domains such as personalized text or music generation. It is worth investigating how to extend our proposed framework in those contexts.
2303.10573
Extracting Incidents, Effects, and Requested Advice from MeToo Posts
Survivors of sexual harassment frequently share their experiences on social media, revealing their feelings and emotions and seeking advice. We observed that on Reddit, survivors regularly share long posts that describe a combination of (i) a sexual harassment incident, (ii) its effect on the survivor, including their feelings and emotions, and (iii) the advice being sought. We term such posts MeToo posts, even though they may not be so tagged and may appear in diverse subreddits. A prospective helper (such as a counselor or even a casual reader) must understand a survivor's needs from such posts. But long posts can be time-consuming to read and respond to. Accordingly, we address the problem of extracting key information from a long MeToo post. We develop a natural language-based model to identify sentences from a post that describe any of the above three categories. On ten-fold cross-validation of a dataset, our model achieves a macro F1 score of 0.82. In addition, we contribute MeThree, a dataset comprising 8,947 labeled sentences extracted from Reddit posts. We apply the LIWC-22 toolkit on MeThree to understand how different language patterns in sentences of the three categories can reveal differences in emotional tone, authenticity, and other aspects.
Vaibhav Garg, Jiaqing Yuan, Rujie Xi, Munindar P. Singh
2023-03-19T05:22:12Z
http://arxiv.org/abs/2303.10573v1
# Extracting Incidents, Effects, and Requested Advice from MeToo Posts ###### Abstract _Warning: This paper may contain trigger words for some readers, especially survivors of sexual harassment._ Survivors of sexual harassment frequently share their experiences on social media, revealing their feelings and emotions and seeking advice. We observed that on Reddit, survivors regularly share long posts that describe a combination of (i) a sexual harassment incident, (ii) its effect on the survivor, including their feelings and emotions, and (iii) the advice being sought. We term such posts MeToo posts, even though they may not be so tagged and may appear in diverse subreddits. A prospective helper (such as a counselor or even a casual reader) must understand a survivor's needs from such posts. But long posts can be time-consuming to read and respond to. Accordingly, we address the problem of extracting key information from a long MeToo post. We develop a natural language based model to identify sentences from a post that describe any of the above three categories. On ten-fold cross-validation of a dataset, our model achieves a macro F1 score of 0.82. In addition, we contribute MeThree, a dataset comprising 8,947 labeled sentences extracted from Reddit posts. We apply the LIWC-22 toolkit on MeThree to understand how different language patterns in sentences of the three categories can reveal differences in emotional tone, authenticity, and other aspects. ## 1 Introduction In the United States, 81% of women and 43% of men have reported some form of sexual harassment or assault in their lifetime.1 In 2006, Tarana Burke, an activist, coined the _MeToo_ phrase for survivors to share their experiences of sexual harassment. This led to what's known as the MeToo movement, which seeks to report sexual harassment and help survivors know they are not alone. Reddit is a popular social media platform that hosts multiple forums called subreddits (r/meToo2, r/SexualHarassment3, and r/sexualassaault4) for survivors to share their MeToo posts. Footnote 2: [https://www.reddit.com/r/meToo/](https://www.reddit.com/r/meToo/) Footnote 3: [https://www.reddit.com/r/SexualHarassment/](https://www.reddit.com/r/SexualHarassment/) Footnote 4: [https://www.reddit.com/r/sexualassault/](https://www.reddit.com/r/sexualassault/) Prior studies on MeToo posts (Karlekar and Bansal, 2018; Hassan et al., 2020; Khatua et al., 2018; Ghosh Chowdhury et al., 2019) focus on classification. For instance, Ghosh Chowdhury et al. (2019) identify posts describing MeToo personal stories, Karlekar and Bansal (2018) identify the type of sexual harassment, and Hassan et al. (2020) detect the type of sexual violence. All these existing studies identify relevant MeToo posts from a massive stream of social media text. The expectation is that a prospective helper (e.g., the concerned authority) can provide support to the survivor of identified post. However, merely identifying relevant posts is not enough. A prospective helper must understand (i) what happened, (ii) how sexual harassment has affected the survivor, including the feelings and emotions they are going through Field-Springer et al. (2021), and (iii) the advice that the survivor is seeking Andalibi et al. (2016). Reddit allows a higher number of characters (40k per post) than platforms such as Twitter (250 per post). The MeToo-related subreddits too see long posts (with mean and maximum of 1,881 and 33,432 characters, respectively). 
For a prospective helper (e.g., the concerned authority), reading long posts that regularly appear on multiple subreddits O'Neill (2018) can be demanding and time consuming. To facilitate this process, we built a natural language model that extracts (from a MeToo post) sentences describing a sexual harassment incident, its effects on the survivor, and the advice requested. We describe these three sentence categories as follows: **Sexual harassment incident:**: Sentences describing unwelcome sexual advances, sexual behavior, requests for sexual favors, verbal or physical acts of sexual nature, offensive jokes or remarks that are either sexual or based on someone's gender.5 Footnote 5: [https://www.eecoc.gov/sexual-harassment](https://www.eecoc.gov/sexual-harassment) **Effects on the survivor:**: Survivors describe how they are affected by revealing their feelings and emotions that arise during or after the harassment incident. Examples of effects include the survivor feeling uncomfortable due to the abuser's actions, or being angry or upset due to the harassment. **Requested advice:**: Sentences in which survivors seek advice from other platform users. Some examples of advice include asking if the survivor's experience is harassment, how to pursue a legal case, and how to confront the abuser. Example 1 shows a MeToo post6 and the three categories of sentences that we extract from it.7 The extracted text describes inappropriate touching and the survivor's uncomfortable feeling. Moreover, it reveals that the survivor is confused and asks if they are overthinking the incident. Footnote 7: The extremely personal MeToo post is paraphrased so that it’s not identifiable or searchable. In other cases, survivors may ask for advice such as how to report harassment, how to deal with trauma, and so on. Prior research (Field-Springer et al., 2021; Andalibi et al., 2016) shows that it's important to understand and address the effects and the requested advice. ### Research Questions Accordingly, we address the following research questions. **RQ\({}_{\text{extract}}\):**: How can we extract sentences describing the harassment incident, its effects on the survivor, and the requested advice from a MeToo post? RQ\({}_{\text{extract}}\) is important because automatically extracting the three categories of sentences will help a prospective helper understand the incident, the effects on the survivor, and the requested advice without having to read the whole post. Hence, the prospective helper (e.g., the concerned authority) may timely address the survivor and provide some advice or support. Traditional text summarization models are trained or evaluated on specific tasks but not on MeToo corpora (Jadhav and Rajan, 2018; Cheng and Lapata, 2016; See et al., 2017; Li et al., 2011; Zhang et al., 2012). **RQ\({}_{\text{psycholinguistic}}\):**: How do sentences in three categories differ in psychological aspects? While writing sentences of different categories, the survivor may choose different set of words representing distinct language patterns. RQ\({}_{\text{psycholinguistic}}\) is important to understand how such patterns can reveal psychological aspects such as emotional tone, authenticity, and type of emotion. ### Contributions and Novelty We make the following contributions. * To address both questions, we curate MeThree, a dataset containing 8,947 sentences, labeled for the three categories. Constructing a sufficiently natural and precise dataset turns out to be nontrivial. 
We leverage active learning for labeling with tractable manual effort. * To address RQ\({}_{\text{extract}}\), we train a natural language model to extract these three categories of sentences from long MeToo posts. Our approach incorporates modern Natural Language Processing (NLP) techniques to achieve strong results. * To address RQ\({}_{\text{psycholinguistic}}\), we apply the LIWC-22 toolkit Boyd et al. (2022) on MeThree, analyze aspects (such as emotional tone, authenticity, type of emotion) for three sentence categories, and compare their LIWC-22 scores (Section 4). This analysis provides a psychological understanding of the essential parts of MeToo posts. Existing summarization tools are domain-specific and can't be applied on the MeToo corpora. To the best of our knowledge, we are the first ones to study sentence level extraction from long MeToo posts. Moreover, we conduct a comparative study for sentences of the three categories (not done before), based on psychological aspects. ### Key Findings Our model for extracting three categories of sentences yields a macro F1 score of 82%. Our psycholinguistic analysis on MeThree reveals the following: (1) sentences describing effects are more negative and emotional than sentences describing incidents and requested advice, (2) the requested advice sentences express a more positive tone than the sentences in the other two categories, and (3) within the effects category, anxiety is prominent, followed by sadness and positive emotion. A small qualitative study provides additional validation for our contributions (Section 3). For 17 of 20 randomly selected MeToo posts, the extracted text is coherent to understand incident, effects, and requested advice. For 16 of 17 posts, we can construct a helpful response without missing out on any crucial information about the survivor's situation. ## 2 The MeThree Dataset and Classifier We consider the problem of extracting three categories of sentences as multilabel classification task. From a post, the sentences predicted as any of the three categories are extracted. In our adopted active learning approach, the preparation of the dataset and the development of a classifier happen hand-in-hand. For classification, we follow a pool-based active learning approach which is known for training robust models while reducing effort on manual labeling Hanneke (2014). We curate MeThree, a dataset comprising 8,947 labeled sentences (from subreddits: r/meToo, r/sexualassault, and r/SexualHarassment), and train an XLNet model on MeThree. Pool-based active learning Settles (2012) starts with an initial dataset (denoted by L) that we curated by selecting and labeling sentences, most of which contain certain keywords (Section 2.1). After curating L, in the active learning process, four steps shown in Figure 1 are followed and repeated multiple times. First, a model (denoted by M) is trained on the set L. To do so, we compared the performance of multiple models on the curated L and chose the best-performing one as model M (Section 2.2). Second, an unlabeled dataset U is labeled by the predictions of the trained model M. In our case, since most of the sentences in L contained certain keywords, to avoid bias, we selected U from sentences without those keywords and labeled U through M's predictions. Third, from U, data points whose risk of being mispredicted is sufficiently high are queried (using a query method) and labeled manually. Fourth, U is added to L. 
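In code terms, one round of this loop might look like the following sketch; the dataset and model objects and their interfaces are illustrative rather than the actual implementation.

```python
def active_learning_round(model, L, U, query_risky, manual_label):
    """One cycle of the pool-based loop described above (interfaces illustrative)."""
    model.fit(L["texts"], L["labels"])                      # 1. train M on L
    U["labels"] = list(model.predict(U["texts"]))           # 2. label U with M
    for idx in query_risky(model, U):                       # 3. query points at risk of
        U["labels"][idx] = manual_label(U["texts"][idx])    #    misprediction; relabel them
    L["texts"] += U["texts"]                                # 4. add U to L
    L["labels"] += U["labels"]
    return model, L
```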
For the last two steps, we queried misclassified sentences from the set U and labeled them manually (Section 2.3). We also selected an appropriate query method for our approach (Section 2.4). We repeated the active learning cycle five times to curate the final dataset of 8,947 labeled sentences. We called this dataset as MeThree. In the end, we trained the final model on MeThree to extract sentences (from a long post) that are classified as the incident, its effects, and the requested advice. ### Initial Training Data for Active Learning To curate our initial training data (set L) for the active learning approach, we followed four steps. First, we scraped MeToo posts from subreddits. Second, we filtered relevant posts from them. Third, we found candidate sentences for each category. Fourth, we labeled a sample of candidate sentences along with other sentences. #### Collecting MeToo Posts We scraped MeToo posts from three subreddits: r/meToo, r/sexualassault, and r/sexualHarassment, for the period 2016-01-01 to 2021-07-18, using Reddit's Pushshift API.8 In this process, we collected a total of 9,140 posts. Of these 9,140 posts, there were 263 posts whose content was deleted by the time of our scraping. That's how we were left with 8,877 MeToo posts. Footnote 8: [https://psaw.readthedocs.io/en/latest/](https://psaw.readthedocs.io/en/latest/) #### Filtering Relevant MeToo Posts Some MeToo posts don't share survivors' experiences but share news articles, seek opinions about allegations against celebrities, or promote other platforms. Such posts are irrelevant to our study. We applied the following heuristics to focus on posts containing survivors' personal experiences: * First-person pronouns: Many survivors while describing their personal experiences, use first-person pronouns in the title of the post. For example, "**I** started to do something about **my** past assault, but instead of feeling better, it actually gets worse" and "**My** mom's boyfriend tried to get **me** to do things to him". Thus, we checked the presence of first-person pronouns: i, me, my, and mine in the title to extract relevant MeToo posts. * Advice-related keywords: We observed that survivors also use advice-related keywords in the title. For example, "Need **advice**, or support" and "pls someone read this and **help** me figure out if i was assaulted or not". We used the keyword, advice, as seed and queried its synonyms from the Oxford dictionary. We obtained 25 synonyms and manually filtered four of them based on their relevance to our problem. The final list Figure 1: Active learning involves four iterative steps. contained help, suggestion, advice, guide, and counsel. We referred to this list of keywords as _advice keywords_. To filter relevant posts, we checked the presence of these keywords in the title. For extracting synonyms, we also explored other corpora such as WordNet Miller (1995) but did not find synonyms that were commonly used. * Advice-related questions: We observed that many relevant posts ask a question (related to sexual harassment) in the title. For example, "Was this rape?" and "Is this sexual harassment?". Such questions are seeking advice without mentioning any of the advice keywords. Using Part-Of-Speech (POS) tagging Manning (2011), the titles that have an interrogation form and include rape, harassment, assault, and abuse as the object, were filtered. Posts with titles satisfying one of the above rules were filtered out. We checked random 50 filtered posts for relevancy. 
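A rough sketch of how these three title rules could be combined is shown below; the keyword lists are abridged, and the full pipeline detects interrogative titles with POS tagging rather than a trailing question mark.

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine"}
ADVICE_KEYWORDS = {"help", "suggestion", "advice", "guide", "counsel"}
HARASSMENT_OBJECTS = {"rape", "harassment", "assault", "abuse"}

def is_relevant_title(title: str) -> bool:
    """Heuristic title filter combining the three rules above (abridged sketch)."""
    tokens = set(re.findall(r"[a-z']+", title.lower()))
    if tokens & FIRST_PERSON or tokens & ADVICE_KEYWORDS:
        return True
    return title.strip().endswith("?") and bool(tokens & HARASSMENT_OBJECTS)

print(is_relevant_title("Was this sexual harassment?"))          # True
print(is_relevant_title("New article about workplace policy"))   # False
```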
Of 50 posts, 47 (94%) were relevant because they either sought support or advice related to their case of sexual harassment. Among these 47 posts, we also found one post written by the survivor's friend but the post still expressed the effects on the survivor and sought advice. In total, we obtained 4,933 relevant posts using the above heuristics. We might have missed some relevant posts, but the objective here is to filter posts with high precision. This is because high precision (94% in our case) means we can build a dataset of relevant sentences without further pruning. Similar heuristics are used in other studies too Hassan et al. (2020). In this study, out of 4,933 filtered posts, 74.29% (3,665) are from r/sexualassault, followed by r/meToo (17.23%; 850) and r/SexualHarassment (8.47%; 418). #### Finding Candidate Sentences We split each of the 4,933 relevant posts into sentences, using sentence tokenizer of Natural Language Toolkit (NLTK) library9. That's how we obtained 102,204 sentences. However, a random sample of these sentences was inefficient in getting sentences that describe incidents, or its effects, or requested advice. Thus, we first found candidate sentences of each category using following keywords: Footnote 9: [https://www.nltk.org/api/nltk.tokenize.html](https://www.nltk.org/api/nltk.tokenize.html) * Sexual harassment incident: Hassan et al. (2020) create a list of 27 MeToo-related verbs (such as molest, touch, rape, masturbate). We expanded the list by querying synonyms of these verbs through the Oxford dictionary. The resulting list contained 652 verbs. We manually checked them and found 539 relevant verbs, of which only 313 were unique. We called this final set of 313 verbs as _harassment keywords_. We identified candidate sentences by the presence of one or more harassment keywords in them. That's how we found 30,927 candidate sentences for the incident category. * Effects on the survivor: For identifying candidate sentences in this category, we leveraged two types of keywords. First, we leveraged the NRC word emotion lexicon (Mohammad and Turney, 2013, 2010), which contained a list of words associated with eight emotions. We considered four emotions: anger, disgust, fear, and sadness, that a survivor can express, and use lexicons associated with them. In this process, we found 37,271 emotional candidate sentences. Second, we leveraged synonyms of the word, feel, that are extracted from the Oxford dictionary. We identified 14 synonyms, out of which, eight were relevant to the survivor's feelings. We referred to the final set of keywords (feel, perceive, sense, experience, undergo, bear, endure, suffer) as _feel keywords_. We found 8,617 candidate sentences containing one or more feel keywords. * Requested advice: We observed that many questions in MeToo posts are advice seeking. For example, "Was it actually just a mistake and should I forgive him?" and "Am I blowing it out of proportion?". Hence, we considered all questions as candidates for advice seeking sentences. We found 6,354 such candidates. Moreover, we leveraged advice keywords to find an additional 2,678 candidates. We extracted synonyms from the Oxford dictionary. To find such synonyms, we tried corpora such as WordNet Miller (1995) and PyDictionary10 but did not find many keywords. For example, while creating the feel keywords, PyDictionary produced no synonyms, and WordNet produced one word, palpate, which was uncommon to describe feelings. 
Thus, we leveraged the Oxford dictionary to extract relevant and commonly used keywords.

Footnote 10: [https://pypi.org/project/PyDictionary/](https://pypi.org/project/PyDictionary/)

#### Labeling Sentences

Due to the presence of keywords (such as harassment keywords, feel keywords, and so on), the candidate sentences are likely to be relevant to the three categories. However, including only candidate sentences could make the training set (set L) biased toward the chosen keywords. Thus, for labeling at this step, we included 500 random sentences without any keywords, along with 6,900 sampled candidate sentences (covering all sources: harassment keywords, feel keywords, and so on). After discarding duplicates, we were left with 5,947 sentences. Since a majority of the 5,947 sentences still contained the chosen keywords, labeling them could still form a biased dataset. Note that this was only the initial training data (set L) in the active learning approach. Later, to mitigate bias, we kept including sentences without any keywords (set U) through multiple repetitions of the active learning cycle, as described in Section 2.3.

For the 5,947 sentences, three of the authors of this paper were the annotators. Before labeling, they were aware of the uncomfortable and disturbing text present in these sentences. For each sentence, the annotators were asked the following questions:

1. Does this sentence describe a sexual harassment incident?
2. Does this sentence describe the effects of the incident on the survivor?
3. Does this sentence ask for any advice?

The annotators read each sentence and answered the above questions as either yes or no. Initially, two annotators labeled 200 sentences as per their understanding of the problem statement. Later, they discussed their disagreements and defined the final labeling instructions for all the annotators to follow. The final labeling instructions are described below:

1. Sexual harassment incident: We followed the definition given by the United States Equal Employment Opportunity Commission (EEOC).11 Any unwelcome sexual advances, sexual behavior, requests for sexual favors, verbal or physical acts of a sexual nature, offensive jokes, or remarks that were either sexual or based on someone's gender were labeled as sexual harassment. Sexual harassment is not limited to any particular gender, and we considered harassment cases involving all genders.

Footnote 11: [https://www.eeoc.gov/sexual-harassment](https://www.eeoc.gov/sexual-harassment)

2. Effects on the survivor: We considered survivors' feelings and emotions that arose during or after the incident. Examples range from feeling uncomfortable (due to the abuser's actions) to being afraid (emotion: fear) of reporting sexual harassment.
3. Requested advice: We considered sentences in which survivors asked for suggestions on topics related to harassment, e.g., whether to report the incident, where to get therapy, and how to face the abuser again.

Table 1 illustrates examples of each category.12 The first example describes inappropriate physical behavior and is considered sexual harassment. The second example describes that the survivor was sexually exploited (sexual harassment) and suffers from depression and anxiety (effects). In the third example, the survivor expresses fear (by mentioning "freak out") and seeks advice about dealing with it. In the last example, the survivor seeks advice relating to the legal process.

Footnote 12: For anonymity, we have masked abusers' details.
All 5,947 sentences were divided among the three annotators (denoted A\({}_{1}\), A\({}_{2}\), and A\({}_{3}\)) such that each sentence was labeled by two of them. After labeling all the sentences, we obtained Cohen's kappa scores (Cohen, 1960) of 0.772 (for sexual harassment incident), 0.774 (for effects), and 0.865 (for requested advice). These scores indicate that we achieved substantial agreement for two categories, sexual harassment incident and effects, and almost perfect agreement for the requested advice category. Table 2 also shows Cohen's kappa scores for each pair of annotators. The first author resolved all the disagreements. The labeled 5,947 sentences form the initial training data (set L) for active learning.

Table 1: Relevant examples according to labeling instructions.

Table 2: Cohen's kappa scores for each pair of annotators.

### Initial Model to Extract Sentences

After set L is curated, the next step is to train model M. We treat our problem as a multilabel classification task in which each sentence is an input to the model and the output consists of three binary labels (one per category). We trained and evaluated multiple methods on the 5,947 labeled sentences (set L), as described below.

For each of the 5,947 sentences, we computed embeddings such as Sentence-BERT (Reimers and Gurevych, 2019), TF-IDF (Cahyani and Patasik, 2021), GloVe13 (Pennington et al., 2014), Word2Vec14 (Mikolov et al., 2013), and Universal Sentence Encoder (USE) (Cer et al., 2018). For each embedding, the sentence vector was used as input to a multilabel classifier. For GloVe and Word2Vec, we averaged word vectors to form the sentence vector. For classification, we tried Logistic Regression (LR) (Dreiseitl and Ohno-Machado, 2002), Support Vector Machine (SVM) (Cervantes et al., 2020), and Random Forest (RF), and report the best method. In addition to embedding-based methods, we also applied transformer-based approaches such as RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019). We fine-tuned RoBERTa and XLNet on set L by adding an output layer in the forward direction. The output layer contained three units, one dedicated to each category. Both models minimized binary cross entropy over five epochs.
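For concreteness, the following is a minimal sketch of such a multilabel fine-tuning setup: an XLNet encoder with a three-unit linear head trained with binary cross entropy. It is an illustration rather than the exact training code used in this work; the mean pooling over token states, the toy inputs, and the hyperparameters shown are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, XLNetModel


class MultiLabelXLNet(nn.Module):
    """XLNet encoder with a 3-unit head: incident, effects, requested advice."""

    def __init__(self, model_name: str = "xlnet-base-cased", num_labels: int = 3):
        super().__init__()
        self.encoder = XLNetModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean-pool token states (a simplification; other pooling schemes work too).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.head(pooled)  # raw logits, one per category


tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = MultiLabelXLNet()
criterion = nn.BCEWithLogitsLoss()                    # binary cross entropy per label
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy batch: placeholder sentences with [incident, effects, advice] labels.
batch = tokenizer(["example sentence one", "example sentence two"],
                  padding=True, truncation=True, max_length=256,
                  return_tensors="pt")
labels = torch.tensor([[1., 0., 0.], [0., 0., 1.]])

logits = model(batch["input_ids"], batch["attention_mask"])
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```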
Moreover, the training batch size and tokenizer length were set to 32 and 256, respectively. We computed average F1, precision, and recall scores for the approaches described above over ten folds of set L. We also tested a keyword-search approach in which we search sentences for the category-wise keywords used in Section 2.1; sentences containing a category's keywords were predicted 1 for that category and 0 otherwise. TF-IDF, GloVe, Word2Vec, keyword search, and USE underperform compared to the other methods. Sentence-BERT followed by SVM achieves the highest macro precision (0.84). However, it shows lower macro recall (0.66) than RoBERTa (0.84) and XLNet (0.87). Overall, XLNet outperforms all other methods by achieving the highest macro F1 score (0.82). Thus, we chose XLNet as our active learning model (model M).

### Completing Active Learning Cycles

After model M is trained, the next step is to make predictions on the set U and label it. To mitigate the risk of a biased dataset, we chose the set U to be a random sample of 500 sentences not containing any keywords. The already trained model M labeled set U through its predictions. From U, we queried potentially misclassified sentences for manual labeling, using the query method described in Section 2.4. Further, the first active learning cycle (Figure 1) was completed by adding the labeled U to L. We repeated this for four more cycles, each involving training XLNet on the new L, predicting on a new U, labeling the new U (through M's predictions and manual labeling of the queried sentences), and adding the new U to L. Overall, the five cycles added a total of 2,500 labeled sentences (each U having 500 sentences without keywords) to the initially labeled 5,947, so the final L reached a size of 8,447. Moreover, while selecting an appropriate query method for our approach, as discussed in Section 2.4, we labeled an additional 500 sentences without keywords. By including all these labeled sentences, we formed the final dataset, MeThree, of size 8,947. In MeThree, 4,331 (48.4%) sentences belong to at least one category, and 4,616 (51.6%) belong to none. Figure 2 shows the Venn distribution of the 4,331 sentences among the three categories.

Finally, we trained XLNet on MeThree, which is used to extract sentences from long posts. Over ten-fold cross-validation of MeThree, the model achieved a 0.82 macro F1 score (0.78 for incident, 0.79 for effects, and 0.89 for requested advice), 0.86 macro recall (0.82 for incident, 0.83 for effects, and 0.92 for requested advice), and 0.78 macro precision (0.74 for incident, 0.76 for effects, and 0.85 for requested advice).

### Selecting Query Method

Uncertainty sampling (Culotta and McCallum, 2005; Dagan and Engelson, 1995), a widely used querying method, finds uncertain predictions based on the model's prediction probability on the set U. Such uncertain data points are queried for manual labeling. However, uncertainty sampling methods (such as least confidence and entropy) did not work in our case. This is because, in the first active learning cycle, model M (XLNet trained on 5,947 sentences; Section 2.2) predicted low probabilities on the sentences without any keywords (set U). We validated this by predicting on 100 such sentences, where the mean prediction probability was 0.08 (deviation = 0.23) for the incident category, 0.10 (deviation = 0.27) for effects, and 0.05 (deviation = 0.21) for requested advice.
Due to most of the probabilities being low, uncertainty sampling methods (such as least confidence and entropy) could not discriminate between misclassified and other sentences.

Figure 2: Venn diagram showing the distribution of sentences across the three categories.

To select an appropriate query method, we therefore found a threshold on the prediction probability with which we could query from U. To find that threshold, we used a set U', another random sample of 500 sentences not containing any keywords. On U', we plotted the Receiver Operating Characteristic (ROC) curve and computed Youden's J statistic (Youden, 1950), as described below. Since this query method was selected during the first active learning cycle, the model M referred to below is XLNet trained on 5,947 sentences (Section 2.2).

1. The first author labeled the set U'. On the 5,947 initially labeled sentences (Section 2.1), we achieved substantial agreement for the incident and effects categories and almost perfect agreement for requested advice. Hence, we assumed that all the annotators (three authors of this paper) stick to the same labeling definitions and used one of the annotators (the first author) for this small task.
2. The model M made predictions on the set U'.
3. We split the set U' into a set of 400 sentences (set V) and another set containing the remaining 100 sentences (set T).
4. We leveraged the set V to fine-tune the threshold. For each category, we considered the misclassified sentences in set V and found a threshold (on the prediction probability) that could retrieve them. We found that, out of 400 sentences, model M misclassified 28 sentences for the incident category, 38 for effects, and 9 for requested advice. Since we needed to retrieve these sentences, for each category we considered misclassified sentences as the positive class and the others as the negative class while plotting true positive and false positive rates in the ROC curve. Figures 3-5 show the ROC curves and the area under the curve for each category. From each ROC curve, we found the best threshold (using Youden's J statistic (Youden, 1950)) that maximized recall (for the positive class) and minimized the false positive rate. For the incident category, we found 0.038177 as the threshold, at or above which sentences can be queried. Similarly, we found a threshold of 0.008476 for the effects category and 0.007874 for the requested advice category. We also tried combining these three thresholds into a single threshold, but that did not query more misclassified sentences than the individual thresholds.
5. For each category, to ensure that we did not miss misclassified sentences, we also queried 30 sentences below the threshold for every 100 predictions. For example, when the model M predicted on the set V, which has 400 sentences (4 times 100), we also queried 4 * 30 = 120 sentences below the threshold for each category. _To sum up, our query method is: for each category, query (i) sentences with prediction probability at or above the threshold, and (ii) 30 sentences below the threshold for every 100 predictions._ Using this query method, we could query the following numbers of misclassified sentences from V: 25 (89.28%) of 28 for the incident category, 35 (92.10%) of 38 for effects, and 9 (100%) of 9 for requested advice. Since set V was used for fine-tuning, we also tested our query method on the unseen set T.
6. We leveraged the set T to test our query method.
In the set T, M misclassified 10 sentences in the incident category, 4 in the effects, and 3 in the requested advice. Our query method retrieved 9 (90%) misclassified incident sentences and all misclassified cases (100%) in the other two categories. For each time the active learning cycle was repeated (discussed in Section 2.3), we used the above query method to retrieve potential misclassified sentences from U and manually labeled retrieved sentences. For manual labeling, each of the three annotators (authors of this paper) labeled retrieved sentences for a category. ## 3 Qualitative Analysis We applied the final XLNet model (trained on MeThree as described in Section 2.3) on random 20 posts (containing at least a thousand characters) and followed the below steps for each post. First, we split the post into multiple sentences, using the sentence tokenizer of the NLTK library.15. Second, we provided all the sentences as input to model M and arranged the extracted sentences in the order they were present in the post. Third, along with the post title, we read the extracted sentences in the arranged order and checked if extracted sentences are coherent to understand the incident, effects, and requested advice. For 17 out of 20 posts, the extracted text was coherent. Further, we divide 17 posts and their extracted text among three annotators (the same authors A\({}_{1}\), A\({}_{2}\), and A\({}_{3}\)) such that each post was read by one annotator and its extracted text was read by the other. Each annotator was asked to construct a supportive or advice-offering response based on details present in the given text. Providing such responses is one kind of help to the survivor Schneider and Carpenter (2019); Andalibi et al. (2016). For each post, the first author analyzed the difference between the response to the post (R\({}_{\text{p}}\)) and the response to the extracted text (R\({}_{\text{e}}\)). Only in 1 of 17 cases, a crucial detail (about the survivor's situation) was missed by R\({}_{\text{e}}\) that was part of R\({}_{\text{p}}\). This was because that detail was also missing from the extracted text. However, for 16 of 17 cases, R\({}_{\text{e}}\) did not miss out any crucial details that were part of R\({}_{\text{p}}\). Our model can potentially be used to understand the essential details (without reading long posts) and construct a helpful response based on the extracted text. In turn, this can speed up the process of providing help on a large scale. ## 4 Psycholinguistic Analysis LIWC-22 toolkit Boyd et al. (2022) contains 100 in-build dictionaries, where each dictionary consisted of lexicons, emoticons, and other verbal constructs to identify the psychological aspects from a post. We applied LIWC-22 on MeThree dataset and compared the sentences of the three categories on two types of scores: (i) summary and (ii) affect scores. We show our analysis below. ### LIWC-22 Summary Analysis For a sentence, LIWC-22 Boyd et al. (2022) yielded four types of summary scores: analytic (depicting analytical and logical thinking patterns), clout (depicting social status, confidence, or leadership), authentic (depicting how much people reveal about themselves, without any filter), and tone (depicting the emotional tone. The lower the score, the more negative the tone). On MeThree, we computed these summary scores (using the LIWC-22 toolkit) for each sentence that belongs to the incident, effects, and advice categories. Further, for each category, we averaged these scores. 
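As a rough illustration (not the actual analysis code), assuming the LIWC-22 application has exported one row of summary scores per MeThree sentence together with our three binary category labels, the per-category averaging amounts to a simple group-wise mean. The file name and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical export: one row per sentence with LIWC-22 summary scores
# (Analytic, Clout, Authentic, Tone) plus the three binary category labels.
df = pd.read_csv("methree_liwc22_scores.csv")

summary_cols = ["Analytic", "Clout", "Authentic", "Tone"]
for category in ["incident", "effects", "requested_advice"]:
    means = df.loc[df[category] == 1, summary_cols].mean()
    print(category, means.round(2).to_dict())
```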
Figure 6 shows the category-wise average summary scores. All three categories of sentences have low average scores for analytic and clout. This indicates less leadership, confidence, and logical-thinking patterns in all such sentences. The same low trend is visible for the tone variable, meaning that all three categories of sentences possess a negative tone. Moreover, the most negative tone is observed in the effects category, which has the lowest tone score. However, the three categories have high average scores for authenticity. We deduce that the survivors share their MeToo experiences without any filter. They are open to revealing things about themselves, especially through the effects sentences (which have the highest authenticity score). Through effects sentences, survivors reveal their feelings and emotions, which can explain the highest authenticity score. Some examples of effects sentences with high authenticity include: _"I feel worthless," "I'm livid," and "I'm scared."_

Figure 6: LIWC average summary scores for the incident, effects, and requested advice categories. Analytic and clout scores are low for each category. The low trend is also present in tone, indicating a negative tone in all three categories. However, high scores for authenticity indicate that the survivors openly share their experiences, especially through effects sentences.

### LIWC-22 Affect Analysis

For a sentence, LIWC-22 (Boyd et al., 2022) yielded four types of affect scores: tone_pos (positive tone), tone_neg (negative tone), emotion, and swear. Figure 7 shows the average affect scores for all three categories. It is evident that **effects sentences are more negative (\(\mu=7.16\)) and emotional (\(\mu=5.56\)) than incident** (tone_neg \(\mu=2.50\) and emotion \(\mu=1.27\)) **and requested advice** (tone_neg \(\mu=4.06\) and emotion \(\mu=2.01\)) **sentences**. Also note that requested-advice sentences show a more negative tone than incident sentences. In such negative, advice-seeking sentences, the survivors sometimes blame themselves and seek validation from others, for example, "Is it my fault for drinking too much?". Despite such negative cases, the same category also shows many positive-tone cases. As a result, **in terms of positive tone, sentences requesting advice** (\(\mu=4.54\)) **are ahead of incident** (\(\mu=2.01\)) **and effects sentences** (\(\mu=1.84\)). Moreover, we found that all three categories, in general, do not contain swear words, as indicated by their low swear scores.

As effects sentences are more emotional than the other two categories, we delved into what types of emotions are reflected by them. For a sentence, LIWC-22 yielded four types of emotion scores: emo_pos (positive emotion), emo_anx (anxiety), emo_anger (anger), and emo_sad (sadness). We found that **effects sentences show more anxiety (\(\mu=1.53\)), followed by sadness (\(\mu=0.70\)), anger (\(\mu=0.43\)), and positive emotion (\(\mu=0.39\))**.

## 5 Related Work

There has been extensive research on analyzing MeToo posts and finding useful insights (Manikonda et al., 2018; Gautam et al., 2020; Deal et al., 2020; Field et al., 2019; Reyes-Menendez et al., 2020). However, only a few studies have looked at MeToo experiences from a classification perspective. Karlekar and Bansal (2018) leverage the MeToo experiences posted on the SafeCity website,16 an online forum for reporting sexual harassment.
They collect 9,892 MeToo experiences that convey one of three types of harassment: (i) groping or touching, (ii) staring or ogling, and (iii) commenting. Further, they train a deep neural network to identify the type of harassment experienced by the survivor. Yan et al. (2019) improve the performance of this classification by proposing a quantum-inspired density matrix encoder. Liu et al. (2019) leverage the same dataset and annotate it for attributes such as the abuser's age (below 30 or older), the abuser's relation to the survivor (for example, relative or teacher), and the location of harassment (for example, park or street). They propose a framework to identify these attributes from a MeToo experience. Bauer et al. (2020) also leverage the SafeCity dataset and build a chatbot system to help survivors. The SafeCity dataset contains concise experiences (typically 3-4 sentences long) and is thus unsuitable for sentence extraction in our case.

Figure 7: LIWC average affect scores. Effects sentences are more negative and emotional than incident and requested-advice sentences. In terms of a positive tone, sentences requesting advice are ahead of incidents and effects. In general, all three categories of sentences don't contain swear words.

Moreover, Hassan et al. (2020) train a model on 520,761 #MeToo hashtag tweets to identify tweet-level attributes, such as the category of sexual violence reported, the survivor's identity (tweeter or not), and the survivor's gender. They also achieve 80.4% precision and 83.4% recall in identifying sexual violence reports. Ghosh Chowdhury et al. (2019) label 5,119 tweets as (i) disclosure or (ii) non-disclosure. The tweets that include a survivor's personal experience are annotated as disclosure and the others as non-disclosure. Out of 5,119, they find 1,126 (22%) tweets in the disclosure category. Moreover, they propose a language model to classify tweets into the two types. Other studies, such as Khatua et al. (2018) and Ghosh Chowdhury et al. (2019), also focus on similar classification tasks. All these works perform classification at the level of a whole MeToo post, whereas our work focuses on sentence-level extraction of the sexual harassment incident, its effects on the survivor, and the requested advice. Traditional text summarization works (Jadhav and Rajan, 2018; Cheng and Lapata, 2016; See et al., 2017; Li et al., 2011; Zhang et al., 2012) are trained or evaluated on other domain-specific datasets, such as news datasets, but are not built for the MeToo context. To the best of our knowledge, our work is the first attempt to extract text from long MeToo posts.

## 6 Discussion

We now discuss our conclusion, our work's limitations, and possible future directions.

### Conclusion

The survivors of sexual harassment frequently share long MeToo posts on subreddits. Using an active learning approach, we trained an XLNet model to extract sentences describing (i) the sexual harassment incident, (ii) the effects on the survivor, and (iii) the requested advice from such posts. We also curated MeThree, a dataset of 8,947 sentences labeled for the three categories, and conducted a psycholinguistic analysis of it. On ten-fold cross-validation of MeThree, our model achieved a macro F1 score of 0.82. The sentences extracted by our model can help a prospective helper understand essential details without having to read the entire post. As a result, it can potentially speed up the process of providing help to the survivors.
### Limitations and Future Work

Our work suffers from some limitations, and a few of them also motivate future directions for improvement. First, the extracted sentences may sometimes not be coherent or may miss some details about the survivor's situation. That is why we do not claim our model to be a summarization tool. However, according to our analysis in Section 3, the non-coherent cases and the cases requiring details beyond the extracted text are few (4 of 20). In the future, our work could be extended to extract other important sentences that can summarize the whole post. Second, our model is trained on sentences scraped from only three subreddits. We expect the nature of sentences in MeThree to be similar to sentences on other MeToo-related subreddits; however, we plan to fine-tune the model before applying it to other subreddits.

We can also extend our work to generate an automated response based on the extracted incident, effects, and requested advice. The automated response, after slight corrections through human intervention, could offer support and advice to the survivor. Moreover, the similarity between the advice-seeking sentences and users' responses can be used to assess how relevant each response is. This way, the platform would be able to show highly relevant responses above less helpful ones.

## 7 Broader Perspective and Ethical Considerations

Although we extracted text from long posts to help survivors, we acknowledge some limitations and possible misinterpretations that may occur, especially with data on such a sensitive topic. We discuss them below.

1. **Consent:** Our data was scraped from public Reddit posts. Hence, we did not take the consent of the survivors writing such posts. Moreover, as described by Ghosh Chowdhury et al. (2019), some survivors may get uncomfortable if they are reached out to for consent.
2. **Anonymity:** We did not save survivors' personal information, such as usernames or users' post histories. For the example sentences presented in this paper, we also removed potentially identifying information, such as the survivor's age, job title, and location. Moreover, we paraphrased the example MeToo post. We do not plan to release MeThree publicly.
3. **Labeling disturbing text:** The sentences from MeToo posts can be disturbing to read, especially for people who have gone through a similar experience. Therefore, we did not hire crowd workers or volunteers for any labeling task. Instead, the three authors of this paper did it.
4. **Potential misinterpretation:** We were extremely aware of the sensitivity of this research before labeling sentences. However, we may have misinterpreted some MeToo experiences. That is why we do not claim that our labeling is fully accurate.
2301.06568
Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling
As opposed to scaling-up protein language models (PLMs), we seek improving performance via protein-specific optimization. Although the proportionality between the language model size and the richness of its learned representations is validated, we prioritize accessibility and pursue a path of data-efficient, cost-reduced, and knowledge-guided optimization. Through over twenty experiments ranging from masking, architecture, and pre-training data, we derive insights from protein-specific experimentation into building a model that interprets the language of life, optimally. We present Ankh, the first general-purpose PLM trained on Google's TPU-v4 surpassing the state-of-the-art performance with fewer parameters (<10% for pre-training, <7% for inference, and <30% for the embedding dimension). We provide a representative range of structure and function benchmarks where Ankh excels. We further provide a protein variant generation analysis on High-N and One-N input data scales where Ankh succeeds in learning protein evolutionary conservation-mutation trends and introducing functional diversity while retaining key structural-functional characteristics. We dedicate our work to promoting accessibility to research innovation via attainable resources.
Ahmed Elnaggar, Hazem Essam, Wafaa Salah-Eldin, Walid Moustafa, Mohamed Elkerdawy, Charlotte Rochereau, Burkhard Rost
2023-01-16T19:04:45Z
http://arxiv.org/abs/2301.06568v1
# Ankh: Optimized Protein Language Model Unlocks General-Purpose Modelling

###### Abstract

As opposed to scaling-up protein language models (PLMs), we seek improving performance via protein-specific optimization. Although the proportionality between the language model size and the richness of its learned representations is validated, we prioritize accessibility and pursue a path of data-efficient, cost-reduced, and knowledge-guided optimization. Through over twenty experiments ranging from masking, architecture, and pre-training data, we derive insights from protein-specific experimentation into building a model that interprets the language of life, optimally. We present Ankh, the first general-purpose PLM trained on Google's TPU-v4 surpassing the state-of-the-art performance with fewer parameters (\(<\)10% for pre-training, \(<\)7% for inference, and \(<\)30% for the embedding dimension). We provide a representative range of structure and function benchmarks where Ankh excels. We further provide a protein variant generation analysis on High-N and One-N input data scales where Ankh succeeds in learning protein evolutionary conservation-mutation trends and introducing functional diversity while retaining key structural-functional characteristics. We dedicate our work to promoting accessibility to research innovation via attainable resources.

**Keywords:** Protein, Language Model, Transformer, Deep Learning, High-Performance Computing

## 1 Introduction

The analogy between the syntax-semantics of natural languages and the sequence-function of proteins has revolutionized the way humans investigate the language of life [1, 2, 3, 4, 5, 6, 7, 8]. Although this analogy is intrinsically valuable as the historical precedent that led to adapting NLP advances (e.g., language models) to the protein domain, conclusions from the field of NLP do not translate fully to the language of proteins. Not only are NLP-scale model sizes pursued; it has even been proposed that scaling up protein language models may be significantly more impactful than scaling up natural language models [9]. The assumed proportionality between a model's size and the richness of its learned representations is further -and falsely- encouraged by observing that language models with massive numbers of parameters, trained for massive numbers of steps, still exhibit a notable learning gradient and are hence perceived as under-fitted [3, 9, 10]. As a result, opting for more meaningful protein representations or more accurate modeling has gradually shifted to opting for larger models and, accordingly, more computational power and less accessibility. Notably, PLM sizes have recently jumped from \(\sim 10^{6}\)[4] to \(\sim 10^{9}\)[10] parameters.

Shedding light, chronologically, on the protein language model state of the art (SOTA), we baseline our size-performance benchmark with ProtTrans's ProtT5-XL-U50, an encoder-decoder transformer pre-trained on the UniRef50 database with 3B parameters for training and 1.5B for inference [3]. The evolution of model performance with respect to size was then demonstrated via RITA, a family of language models taking a first step towards establishing scaling principles for protein sequence modeling. RITA showcases 4 different models with a performance-proportional increase in size from 85M, to 300M, to 680M, to 1.2B parameters [9]. The same trend was then reinforced by ProGen2, a suite of protein language models trained on different sequence datasets and scaled up to 6.4B parameters [11].
Finally, and up to the publication date of this manuscript, the latest contribution promoting model up-scaling is ESM-2, a pool of general-purpose protein language models that also showcase a performance-proportional increase in size from 650M, to 3B, to 15B parameters [10]. The simplified relation between bigger and seemingly better PLMs ignores several aspects, including computational costs, task-agnostic model design, and implementation. This raises the research innovation entry barrier and constrains it to scalability. Although model size is, without a doubt, a high-impact attribute in pursuing the aforementioned objectives, it is not the only one. The analogous direction of up-scaling pre-training datasets has proven to be conditional (i.e., bigger datasets are not necessarily better than smaller datasets of higher quality) [3]. We build upon this line of reasoning, arguing that up-scaling language models is conditional too (i.e., bigger models are not necessarily better than smaller models optimized with protein knowledge-guided means).

In this work, our main objective is to integrate knowledge-guided optimization in an iterative empirical framework that promotes accessibility to research innovation via attainable resources. We title our work "Ankh" (i.e., an Ancient Egyptian symbol denoting the key of life) in analogy to how our model "unlocks" the language of life via learning superior representations of its "letters", the amino acids. This is expanded into two pieces of evidence in evaluating Ankh in terms of optimization and generality. First, it surpasses the performance of the SOTA in a representative range of structure and function benchmarks, combined with a generation analysis for protein engineering on High-N (family-based) and One-N (single sequence-based) applications, where N refers to the number of input sequences. Second, it fulfills this performance via a pool of optimized attributes that include not only the model design but also its development, training, and deployment software and hardware. We provide two pre-trained models referred to as \(Ankh\_large\) and \(Ankh\_base\), offering two modes of computation depending on the application demands. For convenience, we refer to our main model, \(Ankh\_large\), as \(Ankh\).

## 2 Results

### Utmost Computational Efficiency and Utmost Performance

We promote visualizing the performance-size correlation as a trade-off also encompassing computational power. On average, \(Ankh\) improved the PLM SOTA performance by 4.8% while \(Ankh\_base\) improved it by 3.4%, with \(<\)10% & 3% of the training parameters and 30% & 15% of the embedding dimension for \(Ankh\) and \(Ankh\_base\), respectively (Fig. 1). Since feature extraction is the basis of any subsequent modeling, we compared the time needed in milliseconds to extract the features of a sequence with increasing length up to 1024 residues (Fig. 1). Although the Ankh models support sequence lengths even beyond the maximum length of the pre-defined relative positional embedding dimension, we chose the upper limit of 1024 to accommodate the maximum length supported by the ESM-1b model. We can see that ESM-2 (15B) takes minimally 2.2x & 2.0x and maximally 11.7x & 7.1x the feature extraction time of \(Ankh\_base\) and \(Ankh\), respectively. As for memory, we needed four A100 80GB Tensor Core GPUs for feature extraction using ESM-2 (15B), compared to a single A100 40GB Tensor Core GPU for each of the Ankh models.
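As a concrete reference point for what "feature extraction" involves here, the sketch below extracts per-residue embeddings (last hidden states) from a T5-style protein language model and times the call. The checkpoint name, the space-separated input convention, and the mean pooling are illustrative assumptions, not a prescription; any T5-style PLM checkpoint and the pre-processing expected by its own tokenizer can be substituted.

```python
import time
import torch
from transformers import AutoTokenizer, T5EncoderModel

# Assumed checkpoint identifier for illustration only.
model_name = "ElnaggarLab/ankh-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5EncoderModel.from_pretrained(model_name).eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence
# Space-separated residues follow the ProtT5 convention; other checkpoints
# may expect a different input format.
inputs = tokenizer(" ".join(sequence), return_tensors="pt")

start = time.perf_counter()
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state  # (1, tokens, d_model)
elapsed_ms = (time.perf_counter() - start) * 1000.0

# Assumes one token per residue plus a trailing special token.
per_residue = embeddings[0, : len(sequence)]
per_protein = per_residue.mean(dim=0)  # simple mean pooling
print(per_residue.shape, per_protein.shape, f"{elapsed_ms:.1f} ms")
```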
These computational attributes showcase that, besides achieving the top average and median downstream performance, Ankh offers a significantly more accessible computational demand. For the results reported on the protein benchmarking tasks in Table 1, the contextualized embeddings of the protein sequences of each dataset are extracted from the last hidden states of all the investigated models. Our work promotes embedding extraction over attention extraction as a means of transfer learning, in light of promoting computational optimization. Therefore, we optimized our experimentation with respect to embedding-based predictions. However, since attention maps are reported as the better indicator for contact prediction for the ESM PLM family in [2] and [10], the two representations are tested separately. To elaborate, for every model the attention maps are extracted and compared with the contextualized embeddings as input for the contact prediction task, so as to keep the comparison fair and to contrast the SOTA's best indicator with what we deem the best indicator. Indeed, the results show significant out-performance of embedding-based prediction over attention-based prediction (the full results of attention-based predictions can be found in Table 13). As shown in Table 1, the Ankh suite consistently outperformed the rest of the investigated models. We also observe that ESM-2 (15B) did not outperform the smaller models from the ESM family in all of the tasks, in addition to showing inconsistent results across different runs, affirming our hypothesis that bigger models are not necessarily better in all protein modeling tasks and that extensive model sizes bring their own challenges.

Figure 1: **Performance-Size Trade-off Comparison**: (a) we plot the number of parameters of different protein language models on the x-axis vs. the mean and median of the performance scores of seven different benchmarking tasks on the y-axis. (b) we plot the embedding dimensions of the investigated models on the x-axis vs. the same y-axis as (a). In diagram (c), we plot the increasing sequence length up to 1024 amino acids on the x-axis and, on the y-axis, the corresponding feature extraction time in ms for all of the investigated models.

### Protein Generation Made Attainable

#### Auto-Regressive Fine-Tuning for High-N Generation

We propose an auto-regressive fine-tuning generation framework for the High-N (protein family-based generation) scale, as it offers an accessible approach for protein variant generation that can be easily scaled across different protein families. Moreover, the framework offers easy control over the generation's exploration-exploitation trade-off by manipulating the logit warping temperature, a parameter used in NLP models to increase/decrease the model's confidence in its most likely response [12]. To validate the framework and select the best temperature value range, the same model, fine-tuned on malate dehydrogenase (MDH) natural variants, was utilized to generate three initial sets of 500 sequences with three different temperatures (1.0, 1.5, and 2.0). For the three sets, the Shannon entropy variations between the generated set and a multiple sequence alignment (MSA) of the fine-tuning data are reported. Shannon entropy aims to characterize the generated sequences' preservation of the evolutionary properties of the natural dataset by comparing their representative statistics of amino acid variation.
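As a rough illustration of this metric, the sketch below computes per-column Shannon entropy (in bits) for a toy alignment. The gap-handling choice (ignoring gap characters) and the toy sequences are assumptions, not the exact evaluation code used here.

```python
import math
from collections import Counter


def column_entropy(msa, ignore_gaps=True):
    """Per-column Shannon entropy (bits) for equal-length aligned sequences."""
    entropies = []
    for j in range(len(msa[0])):
        column = [seq[j] for seq in msa]
        if ignore_gaps:
            column = [aa for aa in column if aa != "-"]
        counts = Counter(column)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in counts.values()) if total else 0.0
        entropies.append(h)
    return entropies


# Toy alignment: conserved columns give low entropy, variable columns higher entropy.
msa = ["MKVL-A", "MKIL-A", "MRVLGA", "MKVLGT"]
print([round(h, 2) for h in column_entropy(msa)])
```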
Low entropy values reflect conserved regions governing retained functionality, whereas high entropy values reflect less conserved regions with higher mutation rates. In Figure 2 (a)-(c), we can observe that the three generated sets show high similarity to the distribution of the natural sequences, with almost identical positions for both peaks and valleys. The mean square error (MSE) between the entropy values of the natural and generated sets is quantified as 0.1, 0.09, and 0.08 for generation with temperatures 1.0, 1.5, and 2.0, respectively. We emphasize that the reported similarity is calculated based on a generated set of 500 sequences, which is less than 3% of the total sequences of the natural set (16,706). In other words, the model can mimic the distribution of the fine-tuning dataset with a small portion of its original sequences.

Table 1: Results Summary.

| Task | Dataset | Ankh | Ankh_base | ProtT5-XL-U50 | ESM-1b | ESM-2 (650M) | ESM-2 (3B) | ESM-2 (15B) |
|---|---|---|---|---|---|---|---|---|
| SSP | CASP12 [41] | **83.8±3%** | 80.8±4% | 83.4±4% | 79.6±4% | 82.3±4% | 83.3±4% | 83.2±3% |
| SSP | CASP14 [42] | **77.6±3%** | 76.8±3% | 74.1±3% | 75.1±4% | 77.0±3% | 76.8±3% | 76.8±4% |
| CP | ProteinNet L/1 [34] | **49.0±8%** | 43.2±8% | 44.7±8% | 30.0±6% | 29.6±6% | 30.7±6% | 33.3±6% |
| CP | ProteinNet L/5 | **73.2±11%** | 66.6±11% | 69.2±11% | 50.1±10% | 50.2±10% | 52.7±10% | 54.7±10% |
| CP | CASP14 L/1 | **30.2±8%** | 28.8±7% | 26.9±7% | 24.6±6% | 25.0±6% | 24.8±7% | 25.9±7% |
| CP | CASP14 L/5 | **50.7±11%** | 48.0±11% | 42.4±14% | 40.0±11% | 38.4±13% | 41.9±14% | 40.4±15% |
| EAT | | 71.7±6% | **74.8±6%** | 71.0±6% | 64.5±7% | 55.5±7% | 65.6±6% | 65.4±7% |
| FolP | | **61.1±4%** | 58.8±4% | 57.6±4% | 57.6±4% | 56.3±4% | 60.5±4% | 56.7±4% |
| FluP | | **0.62±0.004** | 0.61±0.004 | 0.58±0.004 | 0.5±0.005 | 0.48±0.005 | 0.48±0.005 | 0.55±0.004 |
| SolP | | **76.4±2%** | 74.2±2% | 74.4±2% | 67.3±2% | 75.0±2% | 74.9±2% | 60.4±2% |
| GB1P | | 0.84±0.008 | **0.85±0.008** | 0.78±0.01 | 0.81±0.009 | 0.82±0.009 | 0.81±0.009 | 0.57±0.02 |
| LocP | | **83.2±2%** | 81.4±2% | **83.2±2%** | 80.0±2% | 81.8±2% | 82.4±2% | 81.8±2% |

To further investigate the effect of different temperatures on the exploration-exploitation trade-off, we focus on the sequences generated with temperatures 1.0 and 2.0 due to their direct correlation with introducing functional diversity whilst maintaining conserved functional regions, as observed. In this regard, we first report a comparison of the global alignment-based identity between the generated sequences and the fine-tuning dataset. We compute the identity via BioPython's pairwise2 module, where we obtain the global alignment between the generated variants (gen) and the original ones (nat) [13]. We can observe in Figure 2 (d) that the generated sequences span a wide range of global alignment-based identity scores, with sequences as different as 70% and 55% identity for the generation with temperatures 1.0 and 2.0, respectively. Moreover, over 95% of the generated variants were unique (i.e., only 5% of the variants were duplicates of the fine-tuning dataset sequences). Furthermore, we report the internal identity of each set, where we obtain the global alignment between the sequences of each set and themselves. We can observe in Figure 2 (e) that the set generated at temperature 1.0 shows less internal variability than the natural sequences, while the set generated at 2.0 shows higher internal variability. Consequently, generation at temperature 1.0 tends to be more conservative, favoring similarity to the most dominant clusters in the fine-tuning dataset. On the other hand, generation at 2.0 tends to be less conservative, covering rare clusters and presenting more diversity in the generation. This allows the user control over the generation process according to the nature of the fine-tuning protein family and the interest in generating global or local variants.

Focusing on comparing natural and generated domains with known structural annotations, we observe that the CATH domains in the natural variants dominantly belong to three homologous super-families. In all of the generated sets, a significant percentage of the generated sequences includes domains from the three major super-families, as shown in Fig. 2 (f) and detailed numerically in Table 15. The domains with known functional classifications are further investigated to compare the functional diversity of the generated sequences. Natural domains from only two of the three homologous super-families (3.90.110.10 and 3.40.50.720) are annotated with functional-family numbers. The functionally annotated CATH domains belonging to the 3.90.110.10 super-family are visualized in Fig. 2 (g). All of the generated sets contain domains belonging to the common functional-family numbers 2, 11, and 3. However, domains belonging to the rare family number 1 (only 6 occurrences in the natural set) can only be observed in sequences generated at temperature 2.0 (155 and 95 occurrences in epochs 1 and 2, respectively). The same distribution trend is also conserved in the functionally annotated domains belonging to the super-family 3.40.50.720, as can be seen in Table 16.

#### Masked Language Modeling (MLM) for One-N Generation

Since we utilize the Masked Language Modeling (MLM) generation framework on the One-Shot (single-sequence generation) scale, it is infeasible to evaluate the generated sequences w.r.t. a specific dataset. Instead, we evaluate the retention of their experimental 3D structural -and accordingly functional- characteristics. To fulfill this, we use ColabFold's \(colabfold\_batch\) with 2 models and 3 cycles to predict the 3D structure of the generated variants [14]. For every generated sequence, we plot its identity to the original unmasked sequence vs. the root mean square deviation (RMSD) between its predicted structure and the experimental 3D structure of the original sequence. We compute the RMSD from the C-alpha atoms via BioPython's SVDSuperimposer [13]. Ideally, we would want our model to retain the semantics of the sequence while changing its syntax. In other words, we would want variants with low sequence identities as well as low RMSD.
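The sketch below illustrates how such an identity-vs-RMSD evaluation can be assembled with BioPython: a global-alignment identity between two sequences and a C-alpha RMSD after optimal superposition. The file names are hypothetical, and the RMSD helper assumes a residue-for-residue correspondence (equal-length coordinate arrays) between the native structure and the predicted variant structure.

```python
import numpy as np
from Bio import pairwise2
from Bio.PDB import PDBParser
from Bio.SVDSuperimposer import SVDSuperimposer


def global_identity(seq_a: str, seq_b: str) -> float:
    """Global-alignment identity (%) between two sequences via pairwise2."""
    aln = pairwise2.align.globalxx(seq_a, seq_b, one_alignment_only=True)[0]
    matches = sum(a == b for a, b in zip(aln[0], aln[1]))
    return 100.0 * matches / len(aln[0])


def ca_coordinates(pdb_path: str, chain_id: str = "A") -> np.ndarray:
    """C-alpha coordinates of one chain in the first model of a PDB file."""
    model = PDBParser(QUIET=True).get_structure("s", pdb_path)[0]
    return np.array([res["CA"].get_coord() for res in model[chain_id] if "CA" in res])


def ca_rmsd(ref_coords: np.ndarray, alt_coords: np.ndarray) -> float:
    """RMSD after optimal superposition; assumes equal-length coordinate arrays."""
    sup = SVDSuperimposer()
    sup.set(ref_coords, alt_coords)
    sup.run()
    return float(sup.get_rms())


# Toy sequences and hypothetical structure files (native vs. predicted variant).
identity = global_identity("MKVLATTE", "MKILATSE")
rmsd = ca_rmsd(ca_coordinates("native.pdb"), ca_coordinates("variant_prediction.pdb"))
print(f"identity = {identity:.1f}%  RMSD = {rmsd:.2f} Å")
```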
To test this assumption, we generate 179 synthetic variants using two masking probabilities (40% and 50%) on the input dataset. Indeed, we can see in Figure 3 that the model was able to generate many sequences with low sequence similarity while maintaining high structural similarity to the input sequences. In the figure, we can also notice that sequences generated with a 50% masking probability maintain a steeper slope, meaning that the larger the unmasked context, the better the generated variants, as expected.

#### Knowledge-Guided Optimization in Action

To fulfill the demonstrated utmost computational efficiency coupled with the top downstream performance, our model design was preceded by knowledge-guided experimentation. We define knowledge-guided experimentation as protein-specific experimentation retaining a single independent variable, traversing masking (strategy and probability), architecture, and pre-training dataset. This pre-design experimentation trained each variation for 2 epochs while also abiding by approximately the same total number of parameters per experiment to avoid complexity biases. The definitions of the experimental independent variables, their values, the evaluation metrics, and the experimentation baseline can be found in the Methods section.

Figure 2: **Auto-Regressive Fine-Tuning Generation of MDH variants:** (a), (b), and (c) Shannon entropy curves between the MSA of natural and generated sets at logit warping temperatures of 1.0, 1.5, and 2.0, respectively: manipulating the temperature during the generation affects the variability of the generated sequences. (d) Distribution of the sequence identity of the generated variants of temperatures 1.0 and 2.0 vs. natural variants: the generated sequences span a wide range of sequence identity scores going as low as 60%. (e) Internal identities of the natural/fine-tuning set vs. the generated sets of temperatures 1.0 and 2.0: the generated variants span wide ranges of variability, with the set of temperature 2.0 showing the widest range. (f) CATH domain distribution of the three dominating homologous super-families in the natural set: all the generated variants retained CATH domains from the three dominating super-families. (g) The distribution of CATH domains with known functional annotations in the 3.90.110.10 super-family (responsible for the dehydrogenase function) in the natural and generated sets: the generated sets managed to maintain domains from the three functional families of the natural set while expanding to domains from functional family number 1, malate dehydrogenase (MDH), with only 6 examples in the natural set. Abbr., Nat: natural sequences; t1.e1: sequences generated at temperature 1 by the first epoch; t1.e2: sequences generated at temperature 1 by the second epoch; t2.e1: sequences generated at temperature 2 by the first epoch; t2.e2: sequences generated at temperature 2 by the second epoch.

The design of the final models integrates the top-performing experimental version from each independent variable's set of experiments and sub-experiments. We fully pre-trained two models, \(Ankh\) and \(Ankh\_base\), which can be visualized in Figure 4. The results of the masking strategy experiments promoted utilizing a 1-gram span de-masking strategy (elaborated on in _Exp. 4_). Regarding the masking probability, we used 20% as promoted by _Exp. 8_. For the number of layers, we chose 48 layers for the encoder and 24 layers for the decoder as per the results of _Exp. 11_.
For the activation function, we proceeded with Gated-GELU as per the findings of _Exp. 14_ and _Exp. 15_. We adopted relative positional embeddings with an embedding offset of 128 and an embedding dimension of 64 as per the results of _Exp. 20_. The only dimensions in which the two models differ are the embedding dimension, the number of attention heads, and the feed-forward dimension. \(Ankh\_base\) has an embedding dimension of 768 as recommended by _Exp. 13_, while \(Ankh\) has an embedding dimension of 1536, as we found that doubled dimensions often perform better, as verified by _Exp. 20_. Furthermore, the number of attention heads for \(Ankh\_base\) and \(Ankh\) is 12 and 16, respectively. Finally, the feed-forward dimension is 3072 for \(Ankh\_base\) and 3840 for \(Ankh\). The full configurations of the two models can be found in Table 11.

## 3 Discussion

**Results Summary - More Efficient Models Can Also Generalize Better:** To fulfill a holistic analysis of our model, we ensured an evaluation of Ankh that spans the most principal and valuable categories of protein modeling via deep learning. From a deep learning perspective, we have tested our model on regression, classification, and generation. From a biology perspective, we have tested our model at the residue level and the protein level, where the measured attributes span structure and function. From a scale and sparsity perspective, we have tested our model on both High-N and One-N scales. In all of the aforementioned benchmarks, Ankh surpassed the performance of the state of the art with no exceptions, while our base model \(Ankh\_base\) either reached the same performance or achieved very comparable performance with significantly less computational power, offering two modes of computation depending on the application demands. In our generation analysis, Ankh was able to learn the evolutionary conservation-mutation trends and introduce diversity while retaining key structural and functional characteristics on both High-N and One-N scales.

Figure 3: **Sequence Identities vs. Structural Similarities of the Generated Sequences at Masking Probabilities of 40% (Fig. 3.a) and 50% (Fig. 3.b)**: The figures show the model was able to generate sequences with an RMSD lower than 1.5 Å while the sequence identity is as low as 80% in both cases. The sequences generated at a masking probability of 50% show a more negative Pearson correlation, suggesting that more sequence context facilitates the generation of similar structures with low sequence identities.

Figure 4: **Architecture of Ankh Models**: Arrows show the information flow in the network starting from the input sequences, transformer input pre-processing, transformer, and then either a residue-level prediction network or a protein-level prediction network that only differs in being preceded by a global max pooling layer. Both \(Ankh\_base\) and \(Ankh\_large\) agree on the demonstrated architecture; however, they differ in the length of the context vector.

**Results Interpretation:** Our results reinforce the compatibility of PLMs with the language of life but signify protein knowledge-guided model design. Moreover, our results fortify the use of sequence contextualized embeddings as a mere input to downstream models but shed light on task-specific architectures/layers.

**Results Implications:** Our results suggest that state-of-the-art performance can be reached and surpassed with significantly less computational power.
This suggestion implies the necessity of criticizing the highlighted correlation between model performance and needed computational power, embodied in either model or data sizes. Instead, our work suggests visualizing this correlation as a trade-off, highlighting the immense cost of directly scaling model/data size to improve model performance. On the other side of the trade-off, we propose knowledge-guided means of optimization whose prerequisites revolve around needed protein knowledge as well as the extra mile in optimizing both the software and hardware components of the model life cycle. **Results Limitation:** We report several limitations of our work. Firstly, changes of the activation function impacted the optima in the number of layers for encoder and decoder, as well as, the embedding dimension. However, this setting is traced to the nature of the Gated-GELU activation function denoting a significantly larger number of trainable parameters that, in turn, forced us to compensate increases in width by decreases in depth to retain the same total number of trainable parameters and avoid any possible complexity-bias [15]. Consequently, testing different combinations of dimensionality required utilizing an activation function needing fewer parameters. Secondly, our optimization propagated the top-performing model version for each prediction task to the next one. _Top performance_ was not achieved through the version numerically performing top in all tasks, because none such version existed. Instead, we selected the version that outperformed others for most of the most standard data sets, and performed top for diverse objectives. The downside of this seemingly simple algorithm is that we failed to define any single formula to compute "best" that stands out from amongst the population of such formulas. Specifically, we documented the reasoning behind the choice of the top performer for each prediction task. Generally, we justified our choices by opting for holistic predictions via a general-purpose language model. This entailed promoting generalization across different tasks, especially when the difference amongst results remained numerically negligible, when acknowledging that task-specific customization is of impact, and when all the experimental versions are only trained for two epochs. **Recommendations:** Our results implicitly suggested that the choice of the data used to pre-train the pLMs might have to be coordinated with that of data sets used for testing subsequent protein prediction tasks. Although such an endeavor is beyond this work's scope, we encourage efforts toward this end. For instance, we have reported the superiority of pre-training with UniRef50 over UniRef90, UniRef100, and BFD due to lower redundancy. Although the details of what exactly constitutes redundancy relate to the application (e.g. using all available human proteins constitutes less redundancy when wondering about the length distribution of human proteins than when trying to predict binding residues in these), too much redundancy is often easy to spot [16]. Furthermore, we shed light on the value of incorporating synthetically-generated-experimentally-characterized sequences. **Future Work:** We present Ankh as an initial version of our optimized general-purpose protein language model. 
This version is meant to serve as a pre-training base that shall then be specialized for high-impact and high-value protein modeling tasks in our future work (e.g., full atomic-resolution 3D structure prediction, protein generation, etc.) with task-specific optimization and detailed analysis.

## 4 Methods

### Self-Supervised Learning

#### Pre-training Datasets

Deep learning-based protein modeling, like NLP, is data-driven. However, this hunger for data is proving to be both constrained and constraining. We draw upon previous experimentation done in ProtTrans, in general, and ProtT5, in particular, where the performance and associated computational power of three datasets ranging in size, identity-based clustering, and origin were analyzed. The analysis promoted utilizing UniRef50 [17] over UniRef100 [17] and BFD [3, 18]. We build upon the same results by pre-training our baseline on UniRef50. UniRef (UniProt Reference Clusters) databases offer variable clustering of UniProtKB sequences (including isoforms) and selected UniParc records [17]. This variable clustering denotes different sequence similarity thresholds that can fulfill non-redundancy and intra-cluster homogeneity. In UniRef100, a single cluster/entry denotes identical sequences and sub-fragments originating from any organism. To build UniRef90 and UniRef50, UniRef100 sequences are clustered at 90% and 50% sequence identity thresholds, respectively [17]. Hence, the results in ProtTrans can be traced to UniRef50 having more variability and representation [3]. Nevertheless, we wanted to test an intermediate value between the 50% and 100% thresholds that was not tested in ProtTrans, hence the justification for later including the pre-training dataset among our experimentation's independent variables, with UniRef90 as the additional value tested. The pre-training data statistics can be found in Table 2. Sequences in both UniRef50 and UniRef90 are tokenized with a single space between each token, in analogy to word boundaries, and each sequence is stored on a separate line, in analogy to sentences.

#### Pre-trained Model: Encoder-Decoder Transformer

For our baseline model and throughout our experimentation, we utilize the encoder-decoder transformer originally proposed for machine translation and, more generally, for mapping an arbitrary input domain to a target domain [19]. The encoder learns to project the input domain sequences into a latent/embedding space representing the "context". The decoder, in turn, learns to generate the target domain sequences given this context. Although later transformer releases abandoned the encoder-decoder combination and utilized only either of the two, we draw upon ProtTrans's experimentation promoting T5 [20] (the only encoder-decoder transformer analyzed) over encoder-only (e.g., BERT [21], ALBERT [22], and Electra [23]) and decoder-only transformers (e.g., TransformerXL [24] and XLNet [25]). However, the choice of encoder-decoder transformers in this work is motivated not only by the aforementioned top performance but also by their compatibility with our experimentation's independent variables such as, but not limited to, masking and architecture. Due to retaining both the encoder and decoder, T5-like transformers offer more compatibility with different masking strategies that depend on both masking and de-masking techniques.
Furthermore, because T5-like transformers learn relative positional embeddings that are shared across layers for each attention head, they offer robustness for predictions surpassing the maximum length covered by the relative positional embedding, as the model learns to combine the relative offsets of lower layers' amino acid subsets [20]. For the masking strategy adopted by the baseline, we performed 1-gram random token masking according to the default probability of 15% (i.e., 15% of the sequence tokens are masked at random) and performed a full de-masking/reconstruction of the sequence (i.e., all tokens are reconstructed as individual tokens). For the number of encoder-decoder layers, we used 36 layers for each. For the activation function, we used the Gated Gaussian Error Linear Unit (Gated-GELU) [15]. Regarding relative positional embedding, we used an embedding dimension of 32 and an embedding offset of 128. Finally, we pre-trained the baseline on UniRef50. The remainder of the baseline configurations can be found in Table 11. Our baseline model represents _Exp. 0_, which is later subjected to a single-variable change in every class of experimentation.

### Downstream Tasks

To provide a holistic analysis of Ankh's performance, relatively and ultimately, we conduct a protein downstream benchmark as well as a generation analysis for protein engineering applications. For the protein downstream benchmark, we measure the performance of Ankh in comparison to the top reported protein language models via selected downstream tasks covering various aspects of protein understanding and involving protein structure and function. For the generation analysis, we analyze the use of Ankh via two generation frameworks on two scales of data: High-N (family-based) and One-Shot (single sequence-based) generation, in terms of conservation-mutation trends, introducing diversity while retaining structural (and accordingly functional) identity, and sparsity-robustness. We unified the experimental training and testing settings and procedures of all the downstream tasks investigated in this study. Although we acknowledge task-specific optimization, this unification aims to specifically compare the level of protein understanding embodied in the protein representations generated by the studied models while avoiding any bias that can result from task-specific means of optimization. We can observe a demonstration of the adopted downstream tasks in Figure 5. The following subsections are dedicated to the description of the tasks, task databases, and the predictive models/settings that utilized them.

#### Tasks and Datasets

The emergence of transformers as powerful self-supervised models for proteins has motivated significant efforts in designing comprehensive benchmarking databases for protein sequence understanding, like TAPE [26] and PEER [27]. These efforts are necessary to drive the progress of transformers toward protein understanding, parallel to how comprehensive benchmarks have driven the progress of transformers in natural language understanding. We provide a summary of the downstream dataset statistics in Table 12. We selected a set of commonly-utilized downstream tasks that fall into three groups: Protein Function Prediction, Protein Structure Prediction, and Protein Localization Prediction. We further extended the validity of the comparison by adding independent testing databases to some of the downstream tasks.
Besides the benchmarking tasks, we also investigated the protein variant generation capabilities of Ankh in two settings: protein family-based generation and single protein-based generation.

#### Protein Function Prediction

This group of tasks aims to evaluate the ability of protein embeddings to capture the functional scores of two critical design parameters of protein engineering: fluorescence and solubility.

**Fluorescence Prediction (FluP):** This regression task evaluates the fluorescence intensity of green fluorescent protein mutants annotated by Sarkisyan et al. [28]. Fluorescence is an important biological function as it allows researchers to infer the existence of proteins in cell lines and living organisms [27]. Prediction of the effect of mutations on the green fluorescent protein is a common example to investigate, representing the protein genotype/phenotype fitness landscape prediction problem in protein engineering research. We follow the same splits as TAPE [26], as they are designed to test the model's ability to generalize from lower-order mutations to higher-order mutations. The training and evaluation datasets contain only mutants with three or fewer mutations, while the testing set contains mutants with four mutations or more.

**Solubility Prediction (SolP):** This classification task evaluates the binary label of a set of dissimilar proteins as soluble or not. Solubility is an indispensable design parameter for effective proteins, especially for pharmaceuticals [29]. We adopted the solubility database utilized in the development of DeepSol [30] with the same dataset splits, where any protein with \(\geq 30\%\) sequence identity to any protein in the testing subset is removed from the training and evaluation subsets.

**GB1 Fitness Prediction (GB1):** This regression task evaluates the fitness of GB1 binding following mutations in 4 specific positions curated in the FLIP benchmark [31]. GB1 is the binding domain of the immunoglobulin-binding Protein G, used in antibody purification. The GB1 landscape is considered the gold standard in studying the non-additive interactions of mutations, termed epistasis [32]. Unlike the fluorescence mutants, the GB1 mutants are confined to only four positions, which is why it is selected to represent another common case in protein engineering research. In FLIP, 149,361 GB1 mutants with measured binding scores were down-sampled to 8,733, as 96% of the original data points were non-binders or poor binders. The sampled split was utilized to evaluate the investigated PLMs as it provided the most stable results.

#### Protein Structure Prediction

This group of tasks aims to evaluate the ability of the sequence-based embeddings of a protein to encompass accurate information about its structure. A sequential chain of amino acids folds into a set of predetermined stable three-dimensional structures. The large majority of biological parameters of a protein can be inferred from its structure [33]. Consequently, this group of tasks is highly valued due to its correlation to protein understanding and the large set of applications it enables.

Figure 5: A Demonstration of Structure & Function Benchmarks. We showcase all the adopted downstream tasks. (a) In SSP, the input is the protein sequence and the output is a per-residue classification that spans either 3 or 8 states. (b) In FolP, the input is the protein sequence and the output is a per-protein classification that spans 1194 possible folds. (c) In CP, the input is the protein sequence that is processed as residue pairs and the output is a binary classification indicating whether or not the designated residues contact. (d) In FluP, the input is the protein sequence and the output is a regression score indicating the fluorescence intensity. (e) In SolP, the input is the protein sequence and the output is a binary classification indicating whether or not the protein is soluble. (f) In LocP, the input is the protein sequence and the output is a per-protein classification that spans 10 classes. (g) In EAT, the input is an annotated query protein and the output is the CATH annotation transferred from the best match in an annotated lookup dataset. (h) In Novel Generation, the input is a protein sequence and the output is a variant of the input protein that maintains the same desired function. (i) In GB1 Fitness Prediction, the input is a protein sequence corresponding to mutations at four possible positions and the output is a regression score indicating the fitness prediction.

**Contact Prediction (CP):** This is a binary classification task, where pairs of residues are predicted to be in contact in their 3D structure (commonly defined with an 8 Å distance threshold) or not. Contact prediction, as a residue-level 3D structure prediction, provides significant global information about the protein structure. In the literature, contact prediction is utilized as an intermediate prediction step toward atom-level 3D structure prediction. We utilized ProteinNet [34], a standardized dataset for structure learning, whose approach is to piggyback on the CASP competitions [35]. ProteinNet uses the CASP structures as a testing set and augments all the historical records of structures released before the CASP dates as training and evaluation sets. We utilized the latest version of ProteinNet, which uses CASP12 as its test set, with the same dataset splits as TAPE [26]. To further assess the robustness of the model training, we add independent testing based on the free modeling (FM) structures of the latest CASP competition, CASP14.

**Fold Prediction (FolP):** This is a classification task, where a full protein sequence is classified into 1194 possible folds. This task is utilized in the detection of emergent remote homologs of proteins of interest, like new antibiotic-resistant genes and industrial enzymes [36]. We adopted Hou's dataset [37] with its original splits. Entire clustered superfamilies are held out for the testing dataset to affirm the models' generalization ability to detect the structural similarity of drastically different sequences.

**Secondary Structure Prediction (SSP):** This is a classification task, where each residue in a protein is classified into its secondary structure fold with two levels of difficulty: 3 classes and 8 classes. Secondary structures hold significant information about functional domains and are commonly utilized to capture evolutionary information through multiple sequence alignment. We utilized the training and evaluation set from NetSurfP-2.0 [38] and used a variety of testing sets to affirm the robustness of the model, including CB513 [39], TS115 [40], CASP12 [41], and CASP14 [42].

**Embedding-based Annotation Transfer (EAT):** Protein annotation transfer from labeled (experimentally-annotated) proteins to unlabeled proteins traditionally employed Homology-based inference (HBI) in sequence space.
Recently, embedding-based annotation transfer has emerged as a faster alternative approach, as it does not require multiple sequence alignment (MSA) calculations. In the new framework, the distances between query proteins and a lookup set of annotated proteins are calculated to transfer annotation from the most similar known match through k-Nearest Neighbors (k-NN) in embedding space [43, 44]. This task evaluates the ability of raw embeddings to capture meaningful information about the proteins without the need for supervised training. We utilized a benchmarking set of 69k lookup sequences and 219 test sequences that were developed for the ProtTucker evaluation [44]. This evaluation dataset is curated from the CATH v4.3 dataset, where proteins are classified into four levels of structural annotations: Class, Architecture, Topology, and Homologous super-family [45]. For simplicity, we report the mean of the four accuracy scores as a performance measure for this task in Table 1. The performance over the four classes was consistent with the mean, as seen in Table 16.

#### Protein Localization Prediction

This task aims to evaluate the ability of protein embeddings to capture where a protein is expected to accumulate in the cell, known as protein localization prediction [46]. This attribute is significant for understanding protein functions, especially in disease target identification studies.

**Sub-cellular localization prediction (LocP):** This classification task evaluates the localization of a protein into 10 sub-cellular classes. We utilize the DeepLoc [47] dataset with the same dataset splits described in their paper.

#### Generation of Novel Protein Sequences

Following the comparative study, we utilized the \(Ankh\) model to generate synthetic variants of natural proteins to affirm its applicability in a crucial protein engineering task, computational variant generation. We evaluate Ankh with two input datasets representing two different settings and scales in protein engineering: family-based and single sequence-based variant generation.

**High-N (Family-Based Variant Generation):** For the family-based use case, we utilized a curated dataset of malate dehydrogenase (MDH), which was utilized as the training dataset of ProteinGAN [48]. ProteinGAN is a recent deep learning-based generative model that showed superior performance compared with experimental variant generation in the case of MDH. The choice of this protein family is convenient given its diverse 16,706 unique training sequences as well as the complexity of enzyme catalysis, which requires binding to both the substrate and the NAD+ cofactor and thus adds complexity to the generation process.

**One-Shot (Single Sequence Variant Generation):** For the single sequence use case, we used single-chain SARS-CoV-2 nanobodies that were added to the CoV-AbDab dataset after June 2022 [49]. This is to ensure that they are new sequences the model did not see in its unsupervised training. This one-shot generation use case specifically challenges the model's generalization capability by demanding it to generate variants of a small-scale dataset without over-fitting on the dataset in question. We conducted seven independent virtual generation experiments, utilizing seven different nanobodies from CoV-AbDab identified by the following names: Nb-007, F6, Nb_1-23, Nb_1-25, Nb_2-62, Nb_2-65, and Nb_2-67.
All of the selected nanobodies have experimentally validated structures to facilitate their comparison with the predicted structures of the generated candidate nanobodies.

### Downstream Model: ConvBERT

For our implementation of the top/supervised models mapping the pre-trained embeddings to the designated supervised targets, we utilized the same supervised network with very few modifications to account for the differences in protein processing levels (e.g., residue-level and entire protein-level) and output distributions (e.g., binary classification, multi-class classification, and regression). In all cases, our top/supervised models consist of two shared types of layers. The choice of the first type draws upon previous experimentation done in ProtTrans promoting CNNs as top/downstream models/layers that are proven to perform better when coupled with self-attention [3]. We utilize a ConvBERT layer with the same embedding dimension as the pre-trained model, a feed-forward network dimension equal to the pre-trained embedding dimension divided by 2 (i.e., if we are benchmarking with ESM-1b, whose embedding dimension is 1280, then the feed-forward network dimension will be 640), 4 attention heads, a dropout rate of 0.2, a convolutional kernel size of 7, and a Gated-GELU activation [50]. The second type is linear layers whose activation varies between None, Sigmoid, and Softmax for regression, binary classification, and multi-class classification, respectively. A third type of layer, however, is not shared across all cases: we used a global max pooling layer only in regression and binary classification tasks, prior to/at the beginning of the aforementioned supervised network. Generally, we acknowledge (and promote) task-specific optimization. Accordingly, we acknowledge that different top models with different sets of hyperparameters and configurations can result in better downstream performance. Nevertheless, we unify the setting of this top model, believed to achieve the best generalized performance, as the core of the downstream benchmarking is to evaluate and compare the level of protein understanding embodied in each model's learned protein representations.

### Variant Generation Model

The proposed auto-regressive fine-tuning generation framework for the High-N (protein family-based generation) scale offers an accessible approach for variant generation that can be easily scaled across different protein families. Moreover, the framework offers easy control over the exploration-exploitation trade-off of the generation by manipulating the logit warping temperature parameter. To validate the framework, the same fine-tuned model was first utilized to generate three datasets of 500 sequences, utilizing three different temperatures (1.0, 1.5, and 2.0). For the three datasets, the Shannon entropy curves between the generated set and a multiple sequence alignment (MSA) of the natural set are reported. Shannon entropy aims to characterize whether the generated sets preserve the evolutionary sequence properties of the natural MDH set by comparing their representative statistics of amino acid variation. Low entropy values reflect conserved residues governing retained functionality, whereas high entropy values reflect less conserved regions with higher mutation rates. We can observe in Figure 2 that the three generated sets show high similarity to the distribution of the natural sequences, with almost identical positions for both peaks and valleys.
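As a rough illustration of the entropy comparison just described (not the authors' analysis code), the following numpy sketch computes per-position Shannon entropy for two toy aligned sets and the mean squared error between the resulting curves; the toy sequences are placeholders, and a real comparison would use the MDH alignment columns.

```python
import numpy as np

def column_entropy(alignment):
    """Per-position Shannon entropy (in bits) of an aligned set of sequences."""
    entropies = []
    for pos in range(len(alignment[0])):
        column = [seq[pos] for seq in alignment]
        _, counts = np.unique(column, return_counts=True)
        p = counts / counts.sum()
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.array(entropies)

# Toy stand-ins for the natural MDH alignment and a generated set.
natural   = ["MKTAY", "MKTAF", "MRTAY"]
generated = ["MKTAY", "MKSAY", "MRTAF"]
mse = float(np.mean((column_entropy(natural) - column_entropy(generated)) ** 2))
print(mse)
```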
The MSE between the entropy values of the natural and generated sets is quantified as 0.1, 0.09, and 0.08 for generation with temperatures 1.0, 1.5, and 2.0, respectively. We emphasize that the reported similarity is calculated based on a generated set of 500 sequences, which is less than 3% of the total sequences of the natural set (16,700). In other words, the model is able to mimic the distribution of the natural set with a small portion of its original sequences. Since variant generation tasks adopt a framework that is different from the aforementioned pretrained-top model framework and evaluation settings, we present it separately. We adopt two frameworks for variant generation: Auto-Regressive Fine-Tuning for High-N (family-based generation) and Masked Language Modeling (MLM) for One-Shot (single protein-based generation). Fine-tuning denotes specializing the pre-trained model on a specific dataset/task(s) [51]. We define auto-regressive fine-tuning, however, by adding a constraint to the classic fine-tuning setting: the encoder is completely frozen and only the decoder's parameters are allowed to change. We allow this for the entirety of the decoder's layers and initialize the fine-tuning with the original decoder parameters. We train each experiment for 2 epochs, shifting it from a masked language modeling prediction into an auto-regressive prediction. We set a maximum sequence length of 256 tokens and a maximum prompt length of 20 tokens. We utilize a learning rate of \(3e{-}4\) and an epsilon value of \(1e{-}8\) for the Adam optimizer. Moreover, we use a training batch size of 4 and an evaluation batch size of 8. For the auto-regressive sampling, we use beam search with 10 beams. For the auto-regressive logit warping, we use temperature with three different values of 1, 1.5, and 2 to observe the behavior of the model under different temperatures. The mixture of beam search sampling and temperature logit warping works as follows: firstly, temperature changes the logit distribution, preserving the order of the tokens but smoothing/sharpening the distribution; secondly, sampling occurs where the beams are scored (greedily, in our case). MLM denotes a pre-training objective that guides the model's learning, and accordingly its inference, of token representations by requiring it to predict a random sample of input tokens that is usually replaced by \([MASK]\) placeholder(s) in a multi-class setting over the entire vocabulary [51]. However, for the one-shot generation use case, we use MLM in the context of inference only (i.e., no training or change of parameters of any kind is done). Furthermore, unlike the commonly-used small masking probabilities for pre-training purposes, we perform two experiments where we use larger masking probabilities corresponding to a bigger range of variations in the original sequences. The two experiments try different variations of the exploration-exploitation trade-off. The first experiment attempted a higher masking probability of 50% and a lower temperature logit warping of 1.0. The second experiment utilized a lower masking probability of 40% and a higher temperature logit warping of 1.5. Both experiments utilized beam search sampling with 30 beams.
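A minimal sketch of the High-N decoding setting just described, using the HuggingFace generate API (beam search combined with temperature warping via beam-sample decoding); the checkpoint path and prompt are hypothetical stand-ins rather than the authors' released code, and the One-Shot setting would analogously mask 40-50% of a single sequence and decode with 30 beams.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical path to the decoder-fine-tuned checkpoint for the MDH family.
checkpoint = "path/to/finetuned-ankh-mdh"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Up to a 20-residue prompt, space-separated as in pre-training.
prompt = " ".join("MKVAVLGAAGGIGQALALLL")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_length=256,    # maximum generated sequence length
        num_beams=10,      # beam search with 10 beams
        do_sample=True,    # enables the temperature logit warper (beam-sample)
        temperature=1.5,   # smooths/sharpens the logit distribution
    )
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```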
### Computational Power (Software & Hardware)

Processing a limited vocabulary that incorporates unlimited life within its tokens, we also had to push the limits of our computational efficiency in terms of both software and hardware.

#### 4.5.1 Flax & JAX

Flax is an end-to-end high-performance library and ecosystem for JAX that is designed for flexibility and tailored for neural networks. JAX offers a range of composable function transformations, allowing just-in-time (JIT) compilation, automatic differentiation, CPU/GPU/TPU compatibility, and automatic vectorization and parallelization [52]. JAX, for example, can deliver approximately 1.4x speed-up for language model training on TPU Pods compared to PyTorch. We used Flax to build our source code to leverage the aforementioned advances when training the PLMs on Google TPU Pods, but made our models available on HuggingFace to support a wider range of researchers who use any of the three top deep learning libraries: JAX, TensorFlow, and PyTorch.

#### 4.5.2 TPUs

We were fortunate enough to be amongst the first manuscripts, in general, and the first protein modeling work, in particular, utilizing Google's latest TPU v4 and unleashing unseen capabilities of supercomputers. This means all the pre-trained models in this work were trained using Google TPU v4 Pods with either 64 or 128 cores. A single TPU v4 VM host has 8 TPU cores (each with 16 GiB of high-bandwidth memory), 120 CPU cores, and 400 GB of main memory. At first glance, TPU v4 seems similar to TPU v3; however, it has two main advantages. First, the new mega-core feature allows the virtual merging of 2 cores, making deep learning libraries like JAX see every two cores as one core. This gives each of the four merged cores per host access to 32 GiB of memory, which allows fitting bigger models of up to 3 billion parameters on a single host without the need for model parallelism. Second, TPU v4 can deliver approximately a 2.2x speedup compared to TPU v3.

### Data & Model Experimentation

#### 4.6.1 Masking

Masking is a pre-training objective that guides the model's learning of token representations by requiring it to predict a random sample of input tokens that is usually replaced by \([MASK]\) placeholder(s) in a multi-class setting over the entire vocabulary [21]. This class of model experimentation aimed to investigate the impact of two masking-related parameters: masking strategy and masking probability.

**a. Masking Strategy:** Masking strategy indicates the means by which we decide which tokens to mask and which to keep unmasked [53, 54, 55]. Motivated by the skewed distribution of amino acid tokens in protein sequences in addition to the redundancy in the database, we tested different masking strategies to ensure protein-specific adoption. For that purpose, we tested six variations/experiments.

**Exp. 0:** This experiment represents our baseline model whose details can be found in Sub-Section 3.1.2.

**Exp. 1:** We masked every unique 1-gram token (i.e., every unique amino acid) in the sequence at least once. We did so by repeatedly iterating over the sequence and randomly masking one amino acid at a time, provided that neither the desired masking probability had been reached nor the amino acid in question was the one with the highest count (i.e., if we are using a 15% masking probability for the sequence "\(ABCAAAAAAAA...A\)" whose length is 20 amino acids: token "\(A\)" will always be masked one time and the remaining two masks will always be for tokens "\(B\)" and "\(C\)"). Finally, we performed a full de-masking/reconstruction of the whole sequence tokens. This has been found to achieve higher performance w.r.t. the baseline and accordingly was proceeded with.
**Exp. 2:** We also masked every unique token in the sequence at least once but, instead of masking one token at a time, we additionally masked its preceding and subsequent tokens (i.e., in the sequence "\(ABCDEFG\)": if token "\(D\)" is masked, then tokens "\(C\)" and "\(E\)" are also masked), turning it from 1-gram token masking into 3-gram token masking. We also retained the full de-masking/reconstruction of the whole sequence tokens. This, however, reduced the performance in all the downstream tasks and accordingly was not proceeded with.

**Exp. 3:** We replicated _Exp. 1_, indicating the masking of every unique 1-gram token in the sequence at least once. However, as opposed to the default scenario where all tokens' de-masking/reconstruction is incorporated in the calculation of the loss function, we only incorporated the reconstruction of the masked tokens even though the output still contains the entirety of the sequence tokens (i.e., in the sequence "\(ABCDEFG\)": if tokens "\(D\)" and "\(F\)" are masked, then only the reconstruction of tokens "\(D\)" and "\(F\)" is accounted for in the calculation of the loss function). However, this reduced the performance in all the downstream tasks, leading us to deduce that we should reconstruct the entire input, even if it is already known; accordingly, this variation was not proceeded with.

**Exp. 4:** This experiment reflected a change in the input-target mapping of masking and de-masking. To elaborate, every input token is reconstructed as a single target token in the default 1-gram token masking case. The change we performed was to reconstruct all the consecutive unmasked tokens as a single merged token (i.e., if the input sequence was "\(ABCDEFG\)" and tokens "\(C\)" and "\(G\)" were masked, then the sequence was inputted as "\(A,B,[MASK],D,E,F,[MASK]\)" and reconstructed as "\([MASK_{0}],C,[MASK_{1}],G\)", where \([MASK_{0}]\) is a single target token mapping the two unmasked input tokens (AB) and \([MASK_{1}]\) is a single target token mapping the three unmasked input tokens (DEF)), turning it from 1-gram token masking into 1-gram span masking. The change was motivated by reducing the computational cost of the unneeded computations associated with the reconstruction of unmasked tokens in the output. We refer to this masking strategy as "1-Gram Span Partial De-masking/Reconstruction". It is important to note that this partial reconstruction is done only on the output, as the input tokens are left as is. This direction has proven to be a valid one, corresponding to higher performance w.r.t. the first experiment, and accordingly was proceeded with for the upcoming experiments.

**Exp. 5:** We applied the 1-gram span partial reconstruction (introduced in _Exp. 4_) on the approach of _Exp. 1_, indicating the masking of every unique 1-gram token in the sequence at least once and the reconstruction of all the unmasked tokens as a single merged token (i.e., if we are using a 15% masking probability for the sequence "\(ABCAAAAAAAA....A\)" whose length is 20 amino acids and the first random index was the zeroth index, then the sequence will be inputted as "\([MASK_{0}],[MASK_{1}],[MASK_{2}],A,A,A,....,A\)" and reconstructed as "\(A,B,C,[MASK_{0}]\)", where "\([MASK_{0}]\)" is a single token mapping the seventeen unmasked input tokens). This, however, was shown to consistently reduce the performance and was therefore discarded.
**Exp. 6:** We tried a variant of _Exp. 4_ in terms of the partial reconstruction, where we mapped all the consecutively-masked tokens into a single token upon reconstruction (i.e., if the input sequence was "\(ABCDEFG\)" and the tokens "\(C\)", "\(D\)", and "\(E\)" were masked, then the sequence was inputted as "\(A,B,[MASK_{0}],F,G\)" and reconstructed as "\([MASK_{0}],[MASK_{1}],[MASK_{2}]\)", where "\([MASK_{1}]\)" is a single token mapping the three masked tokens, and \([MASK_{0}]\) and \([MASK_{2}]\) are single tokens each mapping two unmasked tokens), turning it into 3-gram span masking. This change, too, was motivated by reducing the computational cost but was shown to be an invalid direction and accordingly was not proceeded with.

Hence, it can be deduced that the top performing version of the six tested versions was that of _Exp. 4_, where we reconstruct all the consecutive unmasked tokens as a single merged token. Therefore, this was the version that carried on to the following sub-set of experiments, masking probability. The results for this set of experiments can be found in Table 3.

**b. Masking Probability:** Masking probability indicates the ratio of tokens to be masked out of the entire sequence length. As indicated in the baseline model's configurations, the default masking probability is 15% [21]. Here, we experimented with three additional values on the top-performing version of _Exp. 4_.

**Exp. 7:** The first tested masking probability was 10%.

**Exp. 8:** The second tested masking probability was 20%.

**Exp. 9:** The third tested masking probability was 30%.

Out of the four values, 10% was the worst masking probability and was accordingly disregarded. Interestingly, it was found that the default value of _Exp. 4_ (15% masking probability) performed best for sub-cellular localization prediction, fold prediction, as well as some of the secondary structure prediction tasks for datasets such as _CB513_ and TS115. Nevertheless, it was found that the value of _Exp. 9_ (30% masking probability) performed best for the entirety of the contact prediction tasks in addition to the secondary structure prediction tasks for the CASP12 dataset. Given the inconsistency amongst the secondary structure prediction dataset results, we referred to the results on CASP12 (being a domain standard), which promote the higher masking probability. Furthermore, as we are opting for holistic predictions via a general-purpose language model, we promoted generalization across different types of tasks, especially when the difference amongst results is of such a small magnitude and when acknowledging that task-specific customization is of impact. Finally, given that all the experimental variations were trained for only two epochs, we proceeded with the intermediate value of _Exp. 8_ (20% masking probability) for the post-experimentation long-term training to fulfill the semi-comprehensive inclusion of the different pools of tasks anticipated from a general-purpose protein language model that can then be customized per downstream task. The results for this set of experiments can be found in Table 4.

#### 4.6.2 Architecture

Since we are utilizing an encoder-decoder transformer architecture, the architecture variations we target correspond to the number of encoder and decoder layers, different combinations of depth and width variations, and the means by which the model learns the order of tokens (i.e., positional embeddings).
**a. Number of Encoder-Decoder Layers:** Although the presence of a decoder is essential in improving the representations produced by the encoder, previous runs of Prot-T5 deduced that the decoder did not provide any notable difference in most of the downstream tasks and was accordingly eliminated, which, in turn, cut the inference cost almost in half [3]. This motivated experimenting with an encoder whose number of layers is larger than the decoder's, to varying extents, as well as testing the opposite to exclude any refutations. It is noteworthy that we maintained the same total number of layers of the encoder and decoder combined, that is, 72 layers, corresponding to 36 layers each in the previous set of experiments. Proceeding from the top-performing version so far of _Exp. 7_, we tested three variations of the encoder-decoder's relative number of layers.

**Exp. 10:** We initially experimented with an encoder with 54 layers and a decoder with 18 layers.

**Exp. 11:** We then experimented with an encoder with 48 layers and a decoder with 24 layers.

**Exp. 12:** We finally experimented with a decoder with 48 layers and an encoder with 24 layers.

It was found that the version with a 48-layer encoder and 24-layer decoder (_Exp. 11_) outperformed the version of _Exp. 8_ in all of the secondary structure prediction tasks (8-state), the fold and sub-cellular localization prediction, and the overall mean and median, whilst the other two versions demonstrated a fluctuating performance depending on the task. Eventually, we proceeded with _Exp. 11_ (48-layer encoder and 24-layer decoder) to the subsequent set of experiments, unlocking a gain in extracting richer embeddings (as a result of a bigger encoder) with the same total cost as an equal-sized encoder-decoder. Our choice was mainly motivated by promoting generalization, embodied in the need to retain an adequate number of decoder layers due to their importance in a broad class of generation tasks, as well as pooling the majority of task datasets. Moreover, our choice was also motivated by computational efficiency, embodied in the smaller number of encoder layers resulting in faster feature extraction. The results for this set of experiments can be found in Table 5.

**b. Depth vs. Width Variation:** Depth corresponds to the number of layers which, as demonstrated, corresponds to the encoder's and decoder's layers in the case of the transformer. Width, however, corresponds to the embedding dimension in the transformer's context [19].

**Exp. 13:** The only experiment conducted w.r.t. depth-width variations corresponds to increasing the embedding layer dimension from 768 to 1024 and accordingly reducing the depth from a 48-layered encoder & 24-layered decoder to a 24-layered encoder & 12-layered decoder to retain an approximately similar or smaller number of parameters. This, however, corresponded to fluctuating results and accordingly was not proceeded with. We refer to the version with an embedding dimension of 768 as the base model, or _Ankh_base_. The results for this set of experiments can be found in Table 6.

**c. Activation Function:** The activation function is the function introducing non-linearity to the feed-forward layer [56]. So far and throughout all the previous experiments, the activation function we have been using was Gated-GELU.
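For reference, the sketch below shows the structure of a gated-GELU feed-forward block of the T5 v1.1 kind assumed here (a simplified stand-in, not Ankh's implementation); the two input projections are what make it noticeably more parameter-hungry than a standard single-projection feed-forward layer.

```python
import torch
import torch.nn as nn

class GatedGeluFFN(nn.Module):
    """Gated-GELU feed-forward block (T5 v1.1 style): two input projections
    instead of one, which is why it carries noticeably more parameters."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.wi_0 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.wi_1 = nn.Linear(d_model, d_ff, bias=False)  # value projection
        self.wo = nn.Linear(d_ff, d_model, bias=False)    # output projection
        self.act = nn.GELU()

    def forward(self, x):
        return self.wo(self.act(self.wi_0(x)) * self.wi_1(x))

x = torch.randn(2, 10, 768)
print(GatedGeluFFN(768, 3072)(x).shape)   # torch.Size([2, 10, 768])
```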
It is important to note that Gated-GELU entails a significantly larger number of trainable parameters, which, in turn, forces us to neutralize increases in width by decreases in depth, as we have done in _Exp. 12_ [15]. To overcome the high parameter demand of Gated-GELU forcing us to reduce the model's depth, we opted to change the activation function altogether so that we are not forced to compromise on depth. Thus, the remaining two experiments conducted w.r.t. depth-width variations correspond to a change of the activation function to the classic ReLU and testing two combos of depth and width [57]. Now, it may come to mind that varying depth/width alongside the activation function corresponds to two independent variables. However, it is important to note that this is traced to the parameter demand of Gated-GELU, a demand that no longer exists once Gated-GELU is omitted, as well as the constraint of maintaining the same number of parameters to opt for a computationally-fair comparison.

**Exp. 14:** The first combo we tested is a depth of a 62-layer encoder and 11-layer decoder with an embedding dimension of 768.

**Exp. 15:** The second combo is a depth of a 48-layer encoder and 24-layer decoder, also with an embedding dimension of 768.

It was found that none of the combos pursued in the depth-width variation set of experiments consistently surpassed the top performer version, _Exp. 11_. Hence, none of this set's combos were proceeded with and we reverted back to _Exp. 11_. The results for this set of experiments can be found in Table 7.

**d. Relative Positional Embedding:** Positional embedding describes the location of sequence tokens so that each position is assigned a unique fixed representation, an essential assignment in the case of transformers as their fundamental idea is the replacement of sequential units with attention [58]. Using a fixed positional embedding has several limitations, such as not being able to extract embeddings for tokens exceeding the maximum length of the pre-defined positional embedding, which is correlated to the embedding dimension [59]. To overcome this limitation, Relative Positional Embedding was introduced. Instead of utilizing a fixed embedding per position, Relative Positional Embedding assigns the sequence tokens variable and relative positional representations whose variability derives from the offset between the "key" and "query" compared in the self-attention setting. We utilize a simplified variation of the Relative Positional Embedding introduced in [20], where every representation is merely a scalar added to the corresponding logit utilized for computing the attention weights. In this setting, all representations' parameters are shared across all layers in our model, although each attention head uses a different learned positional representation within a given layer. Although the learned embeddings/representations are relative and variable, the embedding dimension learned is fixed within the same experiment as it corresponds to a range of possible key-query offsets. So far and throughout the aforementioned experiments, we have been using 32 embeddings for all of our models. Furthermore, we have been using an offset of 128 tokens, beyond which all relative positions are assigned to the same embedding.
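A simplified sketch of how such an offset scheme can map relative key-query distances to a fixed number of learned-embedding indices; this is an illustrative approximation (the actual T5 bucketing additionally distinguishes sign and log-spaces larger offsets), with the 32 embeddings and 128-token offset of the baseline plugged in.

```python
import numpy as np

def relative_position_bucket(relative_position, num_buckets=32, max_offset=128):
    """Simplified sketch: map a (query - key) offset to one of `num_buckets`
    embedding indices; every offset at or beyond `max_offset` collapses into
    the final bucket (the T5 implementation is more elaborate)."""
    distance = np.abs(relative_position)
    clipped = np.minimum(distance, max_offset - 1)
    return (clipped * num_buckets) // max_offset

offsets = np.array([0, 1, 5, 64, 127, 128, 500])
print(relative_position_bucket(offsets))   # [0 0 1 16 31 31 31]: large offsets share one bucket
```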
It is noteworthy that a given layer is insensitive to a relative position beyond 128 tokens, but the following layers can still be sensitive to larger offsets by combining local information from previous layers, enabling the model to accommodate sequence lengths larger than the maximum predetermined length. This set of experiments corresponds to different combos of the embedding dimension as well as the embedding offset, to ultimately test whether it is better to have a few large-sized embeddings, many small-sized ones, or something in between. The first two experiments in this set retained the default embedding dimension of 32 but varied the embedding offset.

**Exp. 16:** The first experiment in this set retained the default embedding dimension, that is, 32, but increased the embedding offset to 256.

**Exp. 17:** The second experiment in this set also retained the default embedding dimension of 32 but decreased the embedding offset to 64.

It was found that the smaller embedding offset of 64 exceeded the performance of both 256 and the default value of 128. The following two experiments in this set retained the top-performing embedding offset of 64 but varied the embedding dimension.

**Exp. 18:** The third experiment in this set retained the embedding offset of 64 but increased the embedding dimension to 64.

**Exp. 19:** The fourth experiment in this set retained the embedding offset of 64 but decreased the embedding dimension to 16.

Nevertheless, none of those variations consistently exceeded the performance of the embedding dimension of 32 and embedding offset of 64. As settings in which the offset is double the embedding dimension were shown to perform better, the final two experiments in this set tried out two further such combinations.

**Exp. 20:** The fifth experiment tested an embedding offset of 128 and an embedding dimension of 64.

**Exp. 21:** The final experiment of this set tested an embedding offset of 256 and an embedding dimension of 128.

It was found that the combo with the most consistent and general results was that of _Exp. 20_ (an embedding offset of 128 and an embedding dimension of 64), which was accordingly proceeded with, as we refer to fold prediction when classification tasks are inconsistent and to _CASP12_ when secondary structure dataset results are inconsistent. The results for this set of experiments can be found in Table 8.

**e. Weight Tying:** Weight tying originates from the motive of reducing the number of parameters associated with the training of language models and, accordingly, the training and convergence time, as well as improving result consistency [60].

**Exp. 22:** The mechanism by which we pursue the aforementioned motives is tying/sharing the weights and biases of the embedding and the decoder. However, it was found that this did not consistently surpass the results of the so-far top-performing model. This is traced to the difference between the input and output token types, resulting from how we mask and de-mask the input and output tokens, respectively, which demands higher prediction ability than a setting with fewer parameters provides. The results for this set of experiments can be found in Table 9.

#### 4.6.3 Dataset

In the experimentation of Prot-T5, it was found that _UniRef50_ outperformed larger datasets such as _UniRef100_ and _BFD_ [3]. This was traced to the high quality of _UniRef50_ in terms of its lack of duplication and its sequence diversity. Throughout all the previous experiments, the pre-training was also conducted on UniRef50.
Yet, we wanted to test a dataset of intermediate size between UniRef50 and the bigger datasets, which have proven less efficient.

**Exp. 23:** We conducted a single experiment on _UniRef90_. Nonetheless, the experiment supported the initial direction of proceeding with _UniRef50_ as the representative of efficient, high-quality pre-training data. It is noteworthy that all previous experiments with _UniRef50_ were trained for 2 full epochs, in contrast to the experiment with _UniRef90_, which was trained for only one epoch, which is arguably equivalent. The results for this set of experiments can be found in Table 10.

## 5 Availability

Both the Ankh and \(Ankh\_base\) models are publicly available for research innovation at our Ankh repository "[https://github.com/agemagician/Ankh](https://github.com/agemagician/Ankh)", under an Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. The repository also contains extensive Python and Jupyter notebook tutorials for various examples, including embedding extraction and supervised learning of several downstream tasks using freely available online resources (Google Colab). For commercial usage and licensing, please visit "[https://www.proteinea.com/](https://www.proteinea.com/)".

## 6 Acknowledgments

The authors would like to thank the deep learning and bioinformatics teams at Proteinea for their invaluable help with hardware, software, and many other aspects of this work. The authors also thank Mohammed AlQuraishi, Columbia University, for their feedback. From Google, the authors would like to thank Jonathan Caton, Shira Genauer, Astitva Chopra, and all the Google Cloud, Google Innovator, JAX, and TRC teams for helping to set up the project on Google Cloud and solving Google Cloud issues. The models trained in this work could not have been made easily publicly available without support from the HuggingFace team, including Patrick von Platen, Julien Chaumond, and Clement Delangue. Google supported this project through the Google Research Innovator program and the Google TPU Cloud Research Credits Program. We would like to thank all researchers worldwide who made all the datasets used in this research publicly available. Finally, ElNaggar and Essam would like to thank Allah for giving them the strength, knowledge, and courage to finish this project and share it with the rest of humanity.
2302.08015
Individual Fairness under Uncertainty
Algorithmic fairness, the research field of making machine learning (ML) algorithms fair, is an established area in ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration during the building of ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels, while enforcing similar individuals to be treated similarly from a ranking perspective, free of the Lipschitz condition in the conventional individual fairness definition. We argue that this perspective represents a more realistic model of fairness research for real-world application deployment and show how learning with such a relaxed precondition draws new insights that better explains algorithmic fairness. We conducted experiments on four real-world datasets to evaluate our proposed method compared to other fairness models, demonstrating its superiority in minimizing discrimination while maintaining predictive performance with uncertainty present.
Wenbin Zhang, Zichong Wang, Juyong Kim, Cheng Cheng, Thomas Oommen, Pradeep Ravikumar, Jeremy Weiss
2023-02-16T01:07:58Z
http://arxiv.org/abs/2302.08015v2
# Individual Fairness Guarantee in Learning with Censorship

###### Abstract

Algorithmic fairness, studying how to make machine learning (ML) algorithms fair, is an established area of ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration when building ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, _i.e._, the class label is given as a precondition. Unlike prior fairness work, we study individual fairness in learning with censorship where the assumption of availability of the class label does not hold, while still requiring that similar individuals are treated similarly. We argue that this perspective represents a more realistic model of fairness research for real-world application deployment, and show how learning with such a relaxed precondition draws new insights that better explain algorithmic fairness. We also thoroughly evaluate the performance of the proposed methodology on three real-world datasets, and validate its superior performance in minimizing discrimination while maintaining predictive performance.

## 1 Introduction

There is recent concern that we are in the midst of a discrimination crisis within the field of machine learning (ML) and artificial intelligence (AI) [1, 2, 13]. Rightfully, the AI/ML community has conducted vast research to study the quantification and mitigation of algorithmic bias, which is critical for the use of algorithmic decision-making systems in domains of high societal impact, with examples in criminal justice [1], healthcare [3], predictive policing [12] and employment [11]. Thus far, most work in this well-established field tackles the problem by proposing fairness constraints via regularizers/optimizations at the group level that first identify a _sensitive attribute_, _e.g._, race or gender, which defines a potential source of bias among the collection of high-level groups, and then achieve parity for some fairness statistic of the classifier, such as the prediction accuracy and true positive rate, across these predefined groups [14]. However, these fairness approaches are inapplicable when there is class label uncertainty [15]. Additionally, while group fairness enjoys the merit of easy operationalization, its aggregative characteristic makes it easy to fail [1]. In contrast, the _individual fairness_ approach seeks to alleviate this drawback by evaluating a much finer granularity of fairness assessed at the individual level. In particular, the compelling notion of individual fairness is proposed in the seminal work of [13], which requires similarly situated individuals to receive similar probability distributions over class labels to prevent inequitable treatment. This notion of individual fairness is also much less restrictive than group fairness in that there is no need to explicitly identify sensitive attributes. However, individual fairness, like group fairness, also assumes the availability of the class label. Also, the Lipschitz condition required in existing individual fairness is non-trivial, which has been a major obstacle to wider adoption. Such difficulty was also pointed out in [10], but only resulted in an additional effort of metric learning.
Another major obstacle to the real-world applicability of individual, but also group, fairness is the assumption of the presence of class labels, which does not hold when there is uncertainty in the class label due to censorship [15, 16]. Consider Figure 1 as an example of a clinical prediction task. As exemplified by \(d_{2}\) and \(d_{4}\), the true time to relapse or hospital discharge for patients may be unknown, leading to the absence of these individuals' class labels. Due to the inability to handle the censorship information, existing fairness studies quantify and mitigate bias by focusing on the proportion with assured class labels, thus either dropping observations with uncertain class labels [1, 13, 14] or omitting the censorship information [20, 21, 15]. However, removing them would bias the results towards the individuals with known class labels [15, 16]. In summary, there is a need for an algorithm that addresses individual fairness in ML under censorship, an underexplored area of research that presents the following challenges:

**i) Quantifying and mitigating bias in censored settings.** The algorithm should ignore neither the censored data nor the censorship information, to avoid bias.

**ii) Freedom from the Lipschitz conditions resulting from the principle of individual fairness.** Without this, the algorithm may have limited use cases due to the metric calibration between the input and output spaces.

To tackle the aforementioned issues, this paper makes an initial investigation of _individual fairness with censorship_, providing a fairness guarantee more in line with realistic assumptions across individuals and free from the Lipschitz condition. Our individual fairness measure, named _Fair Normalized Discounted Cumulative Gain_, or _FNDCG_, is motivated by the same principle that similar individuals be treated similarly, but we see this requirement as the correlation of similarity in the feature and risk spaces, which enables defining a fairness measure usable on censored data. Along with FNDCG, we also propose a corresponding algorithm to address discrimination involving censored individuals. Our method, named _fairIndvCox_, augments the standard model of survival analysis, the Cox proportional hazard model, with our new fairness measure to learn the parameters of risk prediction while being aware of individual fairness. To the best of our knowledge, this work is the first attempt to quantify and mitigate bias under the individual fairness principle, but from a ranking perspective, in a censorship setting and, as a result, free of the Lipschitz condition. Our major contributions are summarized as follows:

* We formulate a new research problem of individual fairness guarantee in learning with censorship.
* We devise _FNDCG_, a notion of individual fairness to measure bias on censored data. Defined with the correlation of similarity in the feature space and the one in the risk prediction space, FNDCG does not require the Lipschitz condition or complete class labels.
* We propose a debiasing algorithm named _fairIndvCox_ for bias mitigation in censorship settings, by incorporating our individual fairness measure into the standard model of survival analysis.
* We evaluated our new debiasing algorithm on three real-world datasets with censorship and compared against four survival analysis algorithms and the Lipschitz variant of our algorithm, confirming the utility of the proposed approach in practice. Additional analysis also illustrated the trade-off between individual fairness and predictive performance.
The remainder of this paper is organized as follows. In Section 2, we describe related work in fair machine learning and learning with censorship, followed by the preliminaries of survival analysis and the problem definition in Section 3. In Section 4, we propose our notion of individual fairness under censorship and the corresponding survival model with an individual fairness specification. In Section 5, we empirically validate the effectiveness of our learning algorithm on real-world survival analysis datasets and provide qualitative analysis on the effect of the hyper-parameters on the model. Finally, we conclude and provide future directions in Section 6.

## 2 Related Work

### Censored Data

In many real-world applications, the main outcome under assessment, _i.e._, the class label, could be unknown for a portion of the study group. This phenomenon, termed censorship, hinders the use of many methods of analysis and can arise in various ways. For example, a study may end while an individual has not yet experienced the event of interest, _e.g._, individual \(d_{4}\) in Figure 1. In another case, the studied individual can be lost to follow-up during the study period, withdraw from the study, or experience a competing event making further follow-up impossible, _e.g._, individual \(d_{2}\). In the typical setting of survival analysis, censored examples are only guaranteed not to have experienced events until their last observation, _e.g._, \(t_{2}\) and the end of the study for \(d_{2}\) and \(d_{4}\), respectively, and we do not know their exact class labels. The censorship information is used together with the observed data to fit or evaluate survival models, statistical models that analyze the expected duration of time until each individual's event. Specifically, we can guarantee that the event of a censored example with observed time \(T\) happens after \(T\), so we can compare two events at \(T_{1}\) and \(T_{2}\) with \(T_{1}<T_{2}\), if \(T_{1}\) is not censored, regardless of whether \(T_{2}\) is censored or not. The green edges shown in Figure 1 roughly illustrate the order graph of the example data, representing the comparable pairs among individuals with censored and observed events. For example, we can tell the event of \(d_{1}\) happens before \(d_{2}\), but we cannot tell the event of \(d_{2}\) happens before \(d_{3}\). Given that censored data is common, _e.g._, clinical prediction (SUPPORT) [11], marketing analytics (KKBox) [15], recidivism prediction instrument datasets (COMPAS [1] and ROSSI [14]), survival analysis has gained popularity in applied work.

Figure 1: An illustration of the censoring phenomenon. Individuals \(d_{2}\) and \(d_{4}\) are censored while others, _i.e._, \(d_{1}\) and \(d_{3}\), are non-censored. Individuals are arranged in the increasing time order of their survival times with the lowest, _i.e._, \(T_{1}\), being at topmost. The study ends at the time shown as the red vertical dash line. There is no edge originating from a censored individual due to censorship, which means that pair comparison between two individuals cannot be made when the individual with lower survival time is censored.

For example, in customer analytics, whether a customer will cancel the service, _e.g._, the event of interest/class label, can be unknown due to the various reasons discussed above [21]. Similar situations arise when predicting reoffense [1], analyzing financial outcomes in actuarial analysis [20], and performing predictive maintenance in mechanical operations [20].
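To make the order-graph construction concrete, the following is a small numpy sketch (illustrative, not the authors' code) that enumerates the comparable pairs for a toy dataset in the spirit of Figure 1.

```python
import numpy as np

# Toy censored dataset: T is the observed time, delta = 1 if the event was
# observed and 0 if censored (d2 and d4 are censored, as in Figure 1).
T     = np.array([2.0, 3.0, 5.0, 6.0])   # d1, d2, d3, d4
delta = np.array([1,   0,   1,   0])

# A pair (i, j) with T[i] < T[j] is comparable only if individual i is uncensored.
comparable = [(i, j)
              for i in range(len(T)) if delta[i] == 1
              for j in range(len(T)) if T[j] > T[i]]
print(comparable)   # [(0, 1), (0, 2), (0, 3), (2, 3)]: no pair starts from d2 or d4
```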
### Fairness in AI

#### 2.2.1 Quantifying Bias

Much progress has been made to quantify and mitigate the unfair or discriminatory behavior of AI algorithms. These efforts, at the highest level, can typically be divided into two families: _individual fairness_ and _group fairness_. A vast majority of existing works focuses on group notions, aiming to ensure that members of different groups, _e.g._, gender or race, _aka_ sensitive attributes, achieve approximate parity of some statistic over class labels, such as statistical parity [17], disparate impact [16], equality of opportunity [15] and calibration [14]. While enjoying the merit of easy operationalization, group-based fairness methods can easily fail to guarantee fairness at the individual level, in addition to several other drawbacks [1]. On the other hand, individual fairness alleviates such a drawback by requiring that individuals who are similarly situated with respect to the task at hand receive similar probability distributions over class labels [13]. Formally, this objective can be formulated as the Lipschitz property, and fairness is achieved iff: \[D(f(x_{a}),f(x_{b}))\leq LD^{\prime}(x_{a},x_{b}) \tag{1}\] where \(L\) is the Lipschitz constant, and \(D^{\prime}(\cdot,\cdot)\) and \(D(\cdot,\cdot)\) are the corresponding distance functions of features in the input space, \(x\), and of probability distributions over class labels in the output space, \(f(\cdot)\), respectively. The major obstacles to wider adoption of individual fairness, though, are the difficulty of calibrating the distance functions resulting from the Lipschitz condition and the assumption of the availability of class labels, which is impractical in many applications due to censorship. Our new fair methodology is a member of the individual-based approaches, but resolves these two main limitations in the current literature, providing a fairness guarantee across individuals with censorship while being free from the Lipschitz condition.

#### 2.2.2 Mitigating Bias

The fairness notions mentioned above are used as a constraint or as a regularizer to enforce fairness. These debiasing algorithms, mostly group-based, can be categorized into three families according to the stage of machine learning at which the intervention occurs: pre-processing, in-processing, and post-processing approaches. The first category, pre-processing approaches, works on bias in the data or input stage, assuming that unbiased training data is necessary for a fair ML model. These methods modify the data distribution to ensure fairness of the representations from different groups and are model-agnostic. Examples of this category include data massaging [11], which changes the data distribution, an extension called local massaging [15], and reweighing [1], which assigns different weights to the communities. The second category, in-processing approaches, directly changes ML algorithms to produce unbiased predictions and is generally model-specific. For example, in [17], the fairness gain is incorporated into the splitting criteria of the Hoeffding Tree algorithm, which is later extended in [17] to ensemble-based methods. These methods focus on group fairness and require complete class labels. Yet, there is limited research on individual fairness under data censorship, which this work focuses on. The last category, post-processing approaches, modifies the decision boundaries to fairly represent diverse groups.
#### 2.2.2 Mitigating Bias

The fairness notions mentioned above are used as a constraint or as a regularizer to enforce fairness. These debiasing algorithms, mostly group-based, can be categorized into three groups according to the stage of the machine learning pipeline at which the intervention occurs: pre-processing, in-processing, and post-processing approaches. The first category, pre-processing approaches, addresses bias at the data or input stage, assuming that unbiased training data is necessary for a fair ML model. These methods modify the data distribution to ensure the fairness of the representations of different groups and are model-agnostic. Examples of this category include data massaging [11], which changes the data distribution, an extension called local massaging [15], and reweighing [1], which assigns different weights to the communities. The second category, in-processing approaches, directly changes ML algorithms to produce unbiased predictions and is generally model-specific. For example, in [17], the fairness gain is incorporated into the splitting criteria of the Hoeffding Tree algorithm, which is later extended in [17] to ensemble-based methods. These methods focus on group fairness and require complete class labels. Yet, there is limited research on individual fairness under data censorship, which is the focus of this work. The last category, post-processing approaches, modifies the decision boundaries to fairly represent diverse groups. Examples include building an interpretable model [16], adjusting the decision threshold to reduce unfairness [15], and moving the decision boundaries of the deprived communities to prevent discrimination [18]. However, applying these techniques in censoring settings is not straightforward, as the decision boundaries may themselves be affected by censorship in the label distribution.

### Survival Analysis

The prevalence of censored data motivates the study of survival analysis, which addresses the problem of partial survival information in the study cohort [15, 16, 17]. The Cox proportional hazards (CPH) model [18] is the most commonly used model; it expresses the hazard function as the product of a shared time-dependent baseline hazard and an individual-specific risk function. Building on the CPH model, [15] parameterized the effect of an individual's covariates by a neural network. Another line of research is tree-based methodology [1, 14], where the splitting rule is modified to handle censored data and which is free from the proportional hazards assumption of the CPH model. Interested readers may refer to [20] for a comprehensive survey of recent methods for modeling censored data. As with other AI approaches, care must be taken to ensure the fairness of survival models in order to prevent bias against deprived communities. Starting with [17, 18], there is a line of work studying fairness with censorship, but subject to group-based constraints. In these works, the survival model is modified to ensure fair risk predictions, as in [16]. However, such work requires the Lipschitz condition of conventional individual fairness and does not explicitly consider survival information to address discrimination. Our method aims to address these two limitations.

## 3 Notations and Problem Definition

In this section, we provide preliminary notations and concepts of survival analysis, followed by the definition of the problem of our concern. In survival analysis, censored data can typically be described as follows. Each individual \(d_{i}\) with index \(i\in\{1,\cdots,N\}\) is equipped with a characteristic tuple \((x_{i},T_{i},\delta_{i})\), where the entries of each tuple are i) \(x\): the observed features, ii) \(T\): the survival time, _i.e._, the time of the event, and iii) \(\delta\): the event indicator, which indicates whether the event is observed. In the setting of survival analysis, the event is observed only when \(\delta=1\), and \(T\) is the actual time of the event in this case. When \(\delta=0\), the event time is censored: it is only known to be greater than or equal to \(T\), resulting in uncertainty about the class label.
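A minimal sketch of how these characteristic tuples could be encoded is given below; the arrays reuse the toy values from the earlier order-graph example and are purely illustrative.

```python
import numpy as np

# Characteristic tuples (x_i, T_i, delta_i) for four toy individuals.
X = np.array([[0.2, 1.0],
              [1.5, 0.0],
              [0.7, 1.0],
              [2.1, 1.0]])          # observed features x_i
T = np.array([2.0, 3.5, 5.0, 6.0])  # survival times (event time if observed, censoring time otherwise)
delta = np.array([1, 0, 1, 0])      # event indicators: 1 = event observed, 0 = censored

# For a censored individual (delta == 0) the true event time is only known to be >= T.
observed_event_times = T[delta == 1]
censoring_lower_bounds = T[delta == 0]
```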
The modeling function commonly used is the _hazard function_, which specifies the instantaneous rate of event occurrence at a specified time \(t\), conditioned on surviving up to \(t\): \[h(t|x)=\lim_{\triangle t\to 0}\frac{\text{Pr}(t<T<t+\triangle t|T\geq t,x)}{\triangle t} \tag{2}\] Given a hazard model, one can also compute the survival function \(S(t|x)=\text{Pr}(T>t|x)\), the probability that the event occurs after a specific time \(t\), by \[S(t|x)=\exp\left(-\int_{0}^{t}h(u|x)\,du\right) \tag{3}\] Among the various proposed survival analysis methods, the Cox proportional hazards (CPH) model [12] has become the standard for modeling censored data; it describes a multiplicative relation between the risk, as expressed by the hazard function, and the covariates, _i.e._, \[h(t|x)=h_{0}(t)\exp(\beta^{\top}x) \tag{4}\] where \(h_{0}(t)\) is called the baseline hazard function (_i.e._, the hazard when \(x=0\)), while \(\beta\) is a set of unknown parameters that can be estimated by maximum likelihood estimation. Given a dataset of \(N\) individuals \(\{(x_{i},T_{i},\delta_{i})\}_{i=1}^{N}\) under the i.i.d. assumption, we can compute the likelihood as the product of the likelihoods of the uncensored individuals. This function is called the partial likelihood and can be written as follows: \[L(\beta)=\prod_{i:\delta_{i}=1}\frac{\exp(\beta^{\top}x_{i})}{\sum_{j:T_{j}\geq T_{i}}\exp(\beta^{\top}x_{j})} \tag{5}\] The partial likelihood estimate \(\hat{\beta}=\arg\max_{\beta}L(\beta)\) is obtained by maximizing the partial likelihood function. Note that the partial likelihood does not involve the baseline hazard function. One can also add a regularization term for \(\beta\), such as ridge or lasso regularization. To evaluate survival models, the _concordance index_, or _C-index_, is commonly used [15]. Given a survival model, the concordance index measures the fraction of all comparable pairs of individuals whose predicted survival times are correctly ordered with respect to their observed survival times: \[C=\frac{1}{\sum_{i:\delta_{i}=1}|\{j:T_{j}>T_{i}\}|}\sum_{i:\delta_{i}=1}\;\sum_{j:T_{j}>T_{i}}\mathbb{1}[f(x_{j})>f(x_{i})] \tag{6}\] where \(f(x)\) is the expected survival time for an individual [13]. The C-index generalizes the area under the ROC curve (AUC) to settings with censorship. In a proportional hazards model, the ordering of expected survival times is determined by the ordering of the hazard function. Please see Figure 1 for an example of the order graph, which represents the comparable pairs of individuals. The main problem we address in this work is to devise an algorithm that can quantify the individual fairness notion in survival analysis and use this quantification to mitigate bias. Under the general assumptions of survival analysis, and unlike most existing works on individual fairness, not all individuals are given a label, or survival time, due to data censoring. Another desirable quality of the algorithm is to relax, or be free from, the Lipschitz condition that ties together the similarity metrics of the input and output spaces. We note that although similarity-based constraints have been considered to alleviate bias [13, 14], the joint consideration of censored information is a unique contribution.
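A minimal sketch of the C-index of Equation (6) is shown below; it assumes that the predicted expected survival times \(f(x)\) are available as an array and simply mirrors the counting in the formula (it is an illustration, not the evaluation code used in the experiments).

```python
import numpy as np

def concordance_index(T, delta, f):
    """C-index of Equation (6).

    T     : observed times (event or censoring times)
    delta : event indicators (1 = event observed)
    f     : predicted expected survival times (larger = predicted to survive longer)
    """
    num, den = 0, 0
    n = len(T)
    for i in range(n):
        if delta[i] != 1:          # only uncensored individuals anchor comparable pairs
            continue
        for j in range(n):
            if T[j] > T[i]:        # j survives (or is censored) later than i's event time
                den += 1
                num += int(f[j] > f[i])
    return num / den if den > 0 else float("nan")
```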
## 4 Method

In this section, we introduce a learning algorithm for censored data with a specification for individual fairness. First, in Section 4.1, we define a rank-based similarity measure over risk scores and propose a corresponding individual fairness score, named _FNDCG@k_. In Section 4.2, we propose a survival analysis model, called _fairIndvCox_, which incorporates FNDCG@k into the Cox proportional hazards model.

### Individual Fairness with Censorship

Existing individual fairness notions depend on the Lipschitz condition, which is non-trivial to calibrate due to the difference between the similarity metrics of the input and output spaces. In addition, they do not consider survival information when quantifying unfairness; this information is important and requires special attention, otherwise substantial bias could be introduced. To overcome these issues, we propose to evaluate unfairness from a ranking perspective while jointly considering survival information. To this end, for each individual we first obtain two ranked lists of the other individuals based on the similarity matrices \(\text{Sim}_{D^{\prime}}\) and \(\text{Sim}_{D}\), and require the relative orders of individuals in these two lists to be consistent with each other. Returning to Figure 1, assume the list derived from \(\text{Sim}_{D^{\prime}}\) between individual \(d_{1}\) and the other three individuals is \(\{d_{3}\), \(d_{2}\), \(d_{4}\}\), ordered from closest to farthest; then the predictions are individually fair for \(d_{1}\) if the list encoded from \(\text{Sim}_{D}\) is \(\{d_{3}\), \(d_{2}\), \(d_{4}\}\) as well. Note that the input similarity matrix \(\text{Sim}_{D^{\prime}}\) is often given a priori, as it is problem-specific [13, 12], while we define \(\text{Sim}_{D}\) as follows, \[\text{Sim}_{D,ij} =\exp\left(-|\bar{h}(t|x_{i})-\bar{h}(t|x_{j})|\right)\] \[=\exp\left(-|\exp(\beta^{\top}x_{i})-\exp(\beta^{\top}x_{j})|\right) \tag{7}\] where \(\text{Sim}_{D,ij}\) is the \((i,j)\)-th entry of \(\text{Sim}_{D}\), and \(\bar{h}(t|x)\) is the hazard function with \(h_{0}(t)\) dropped, _i.e._, \(\bar{h}(t|x)=\exp(\beta^{\top}x)\), since the baseline hazard is not individual-specific in the CPH model. In Equation (7), the similarity is formulated as the exponential of the negative absolute difference of the risk scores. We note that this formulation takes several factors into account in order to obtain a similarity that trades off accuracy and fairness. First, negation followed by exponentiation is used for smoothing; this maps the unbounded difference of risk scores to a value between 0 and 1. Second, it transforms a distance into a similarity function, whose value is closer to 1 when the two individuals are more similar. It also makes the function applicable in the calculation of the discounted cumulative gain (DCG), which will be used to compute the fairness quantification. In DCG@k, the quality of the most similar pairs in the output space is accumulated with a discount factor that decays with their ranking. Here, a similarity is more appropriate than a distance metric for the quality function, since the closer a pair is, the higher the function value should be. Since the encoded ranking list should also take the important survival information, and the consistency between predicted and actual outcomes, into account, we adjust \(\text{Sim}_{D}\) according to the concordance difference (\(C_{\triangle}\)), \[\text{Sim}_{D,ij}=(1-C_{\triangle}(x_{i},x_{j}))\exp\left(-|\exp(\beta^{\top}x_{i})-\exp(\beta^{\top}x_{j})|\right) \tag{8}\] where \(C_{\triangle}(x_{i},x_{j})=|C_{x_{i}}-C_{x_{j}}|\) measures the concordance difference between \(x_{i}\) and \(x_{j}\).
The concordance of individual \(x_{g}\) within the ranking list, \(C_{x_{g}}\), is defined as, \[C_{x_{g}} =\frac{1}{\sum_{g^{\prime}\neq g}\mathbb{1}[\delta_{<}=1]}\sum_{g^{\prime}\neq g}\mathbb{1}[h(t|x_{>})<h(t|x_{<}),\delta_{<}=1]\] \[=\frac{1}{\sum_{g^{\prime}\neq g}\mathbb{1}[\delta_{<}=1]}\sum_{g^{\prime}\neq g}\mathbb{1}[\exp(\beta^{\top}x_{>})<\exp(\beta^{\top}x_{<}),\delta_{<}=1] \tag{9}\] where \(x_{>}\) and \(x_{<}\) denote the individuals with the longer, _i.e._, \(T_{>}=\max(T_{g},T_{g^{\prime}})\), and the shorter, _i.e._, \(T_{<}=\min(T_{g},T_{g^{\prime}})\), survival time, respectively, and \(\delta_{<}\) is the event indicator of the shorter survival time. \(C_{x_{g}}\) can be interpreted as the fraction of all other individuals whose predicted survival times are correctly ordered with respect to \(x_{g}\) according to their actual survival times. The concordance difference effectively adjusts the similarity values defined in Equation (7). In the general case, we would like the original similarity in the output space to be downscaled according to the prediction deviation, as reflected by the concordance difference, which also explicitly includes survival information when quantifying unfairness in the censoring setting. Armed with the similarity matrices \(\text{Sim}_{D}\) and \(\text{Sim}_{D^{\prime}}\), we propose the _Fair Normalized Discounted Cumulative Gain (FNDCG@k)_, motivated by learning to rank [14], for the evaluation of individual fairness with censoring, as below, \[\text{FNDCG@k}=\frac{1}{N}\sum_{n=1}^{N}\frac{\min(\text{DCG}_{\text{Sim}_{D}(d_{n})},\text{DCG}_{\text{Sim}_{D^{\prime}}(d_{n})})}{\max(\text{DCG}_{\text{Sim}_{D}(d_{n})},\text{DCG}_{\text{Sim}_{D^{\prime}}(d_{n})})} \tag{10}\] where \(N\) is the number of individuals and \(\text{DCG}_{\text{Sim}(d_{n})}\) is the discounted cumulative gain, formulated as follows, \[\text{DCG}_{\text{Sim}(d_{n})}=\sum_{\text{pos}=1}^{k}\frac{\text{Sim}_{\text{pos}}}{\log(\text{pos}+1)} \tag{11}\] where \(k\) is the length of the ordered list, pos is the position of each individual in the ordered list derived from the corresponding similarity matrix for individual \(d_{n}\), and \(\text{Sim}_{\text{pos}}\) is the similarity between the individual at this position of the list and the individual \(d_{n}\). In line with the existing individual fairness notions, the value of FNDCG@k also lies within the interval [0,1]. In addition, the higher the FNDCG@k score, the more consistent the ranking lists encoded from the input and output spaces and, thus, the fairer the model.
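The following is a minimal sketch of Equations (7)-(11); the natural logarithm is assumed for the DCG discount, ties in survival times are broken arbitrarily, and the code is meant as an illustration of the definitions rather than the authors' implementation.

```python
import numpy as np

def output_similarity(X, beta):
    """Sim_D of Equation (7): pairwise similarity of CPH risk scores exp(beta^T x)."""
    risk = np.exp(X @ beta)
    return np.exp(-np.abs(risk[:, None] - risk[None, :]))

def per_individual_concordance(T, delta, risk):
    """C_{x_g} of Equation (9): fraction of comparable pairs (g, g') whose risk
    ordering agrees with the observed survival times."""
    n = len(T)
    C = np.zeros(n)
    for g in range(n):
        num, den = 0, 0
        for g2 in range(n):
            if g2 == g:
                continue
            lo, hi = (g, g2) if T[g] <= T[g2] else (g2, g)  # shorter / longer survival time
            if delta[lo] == 1:                              # the pair is comparable
                den += 1
                num += int(risk[hi] < risk[lo])             # higher risk should mean earlier event
        C[g] = num / den if den else 0.0
    return C

def adjusted_output_similarity(sim_out, C):
    """Sim_D of Equation (8): downscale Equation (7) by the concordance difference."""
    return (1.0 - np.abs(C[:, None] - C[None, :])) * sim_out

def dcg_at_k(sims_to_anchor, order, k):
    """DCG@k of Equation (11) for one anchor individual (pos is 1-based in the formula)."""
    return sum(sims_to_anchor[j] / np.log(pos + 2) for pos, j in enumerate(order[:k]))

def fndcg_at_k(sim_in, sim_out, k):
    """FNDCG@k of Equation (10): ranking consistency between Sim_D' and Sim_D."""
    n = sim_in.shape[0]
    scores = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        order_in = sorted(others, key=lambda j: -sim_in[i, j])
        order_out = sorted(others, key=lambda j: -sim_out[i, j])
        d_in = dcg_at_k(sim_in[i], order_in, k)
        d_out = dcg_at_k(sim_out[i], order_out, k)
        scores.append(min(d_in, d_out) / max(d_in, d_out))
    return float(np.mean(scores))
```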
### Individual Fairness Algorithm under Censorship

With the tailored individual fairness definition that specifically accounts for censoring, we now introduce a corresponding learning algorithm, _fairIndvCox_, which follows the Cox proportional hazards model, the standard for modeling censored data, to generate tailored forecasts while providing fair risk predictions across individuals. Essentially, the learning algorithm augments the partial likelihood maximization of the CPH model with our individual fairness quantification, FNDCG@k. Starting with the model utility maximization, the utility loss function \(\mathcal{L}_{\text{utility}}\) is formulated as the negative log partial likelihood of the CPH model. Given the partial likelihood in Equation (5), we have \(\mathcal{L}_{\text{utility}}\) as \[\mathcal{L}_{\text{utility}}=-\sum_{i:\delta_{i}=1}(\beta^{\top}x_{i}-\log\sum_{j:T_{j}\geq T_{i}}\exp(\beta^{\top}x_{j})) \tag{12}\] Next, we integrate Equation (10) as the individual fairness regularizer \(\mathcal{L}_{\text{fairness}}=\text{FNDCG@k}\) and define the unified objective function as \[\mathcal{L}_{\text{unified}}=-(\mathcal{L}_{\text{utility}}+\gamma\mathcal{L}_{\text{fairness}}) \tag{13}\] where \(\gamma\) is the tuning parameter controlling the trade-off between utility and fairness. By combining the utility and fairness terms in the objective function, the model building is thus accuracy-driven as well as fairness-oriented, providing an individual fairness guarantee in learning with censorship. There are two hyper-parameters governing fairIndvCox: \(\gamma\), the coefficient controlling the balance between utility and fairness, and \(k\), the length of the ordered list in the computation of \(\text{DCG}_{\text{Sim}(d_{n})}\). Both parameters affect our algorithm through a trade-off between predictive performance and individual fairness, as we show empirically in Sections 5.4 and 5.5.
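Below is a minimal sketch of the utility term of Equation (12) and of one reading of the combined objective of Equation (13); in this reading the objective to be minimized trades the negative log partial likelihood off against the FNDCG@k score (computed, e.g., with the sketch of Section 4.1), and the sign convention is our interpretation rather than a description of the authors' code.

```python
import numpy as np

def neg_log_partial_likelihood(beta, X, T, delta):
    """L_utility of Equation (12): negative log partial likelihood of the CPH model."""
    scores = X @ beta
    loss = 0.0
    for i in np.where(delta == 1)[0]:
        at_risk = scores[T >= T[i]]                     # risk set: individuals with T_j >= T_i
        loss -= scores[i] - np.log(np.exp(at_risk).sum())
    return loss

def unified_objective(beta, X, T, delta, fairness_score, gamma=1.0):
    """A combined objective in the spirit of Equation (13): minimizing it maximizes the
    partial likelihood while rewarding a higher FNDCG@k fairness score."""
    return neg_log_partial_likelihood(beta, X, T, delta) - gamma * fairness_score
```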
## 5 Experiments

In this section, we conduct experiments to evaluate the effectiveness of our fairIndvCox algorithm, conduct a comparison study on our Lipschitz-free bias quantification, and examine the trade-offs controlled by the algorithm's hyper-parameters. First, we describe the real-world censored datasets on which our proposed algorithm is evaluated against various baselines and against the Lipschitz version of our algorithm, and provide the experimental results on predictive performance and individual fairness. We then explain the experiments on the effects of the hyper-parameters and discuss the results.

### Datasets

We validate our model on three real-world censored datasets with socially sensitive concerns: i) the _ROSSI_ dataset, which pertains to persons convicted and then released from Maryland state prisons, who were followed up for one year after release [12]; ii) the landmark algorithmic unfairness _COMPAS_ dataset for predicting recidivism in Broward County [1]; and iii) the _KKBox_ dataset from the WSDM-KKBox's Churn Prediction Challenge 2017 [20]. See Table 1 for the statistics. Note that survival information is explicitly included in these datasets to specifically account for censorship.

### Experiment Results

We compare fairIndvCox against four baselines to evaluate its design: i) the recently proposed fair survival model IDCPH [12], which, to the best of our knowledge, is the only work addressing the fair survival analysis problem across individuals; ii) along with the baseline therein, the typical CPH [12]; iii) the state-of-the-art random survival forests for modeling censored data, RSF [13]; and iv) the deep neural network for survival analysis, DeepSurv [14], as additional baselines. Other competing fairness methods are not considered, as none of them is capable of addressing fairness in the presence of censoring. Neither are group-based fair survival models, as they necessitate the specification of a sensitive attribute to enforce fairness, which is unspecified in individual fairness learning. In addition to the proposed individual fairness measure considering censorship, we also report the widely used concordance index, or C-index. Without loss of generality, we employ the Euclidean distance with feature scaling to obtain \(\text{Sim}_{D^{\prime}}\). Furthermore, \(k\) is set to 10 and \(\gamma\) to 1 in the unified objective function for the quantitative performance comparison. All methods are trained in the same way with 5-fold cross-validation for a fair comparison. The results of our experiments are presented in Table 2. We can clearly see that our new fairIndvCox dominates all other baselines in minimizing discrimination while maintaining competitive predictive performance, which verifies the necessity of its debiasing design across individuals while accounting for censorship. On the other hand, the lack of consideration of survival information, as well as the non-trivial handling of the Lipschitz constant, results in the inferior performance of the other baselines. In addition, the improved overall predictive performance of fairIndvCox also shows the merit of such an anti-discrimination design for prediction accuracy, presumably because the fairness regularization reduces overfitting.

### Comparison Study on the Lipschitz-free Bias Quantification

We further perform a comparison study to verify the advantage of being free from the Lipschitz condition over specifying a Lipschitz constant in individual fairness quantification. We replace \(\mathcal{L}_{\text{fairness}}\) in fairIndvCox with Equation (1), the Lipschitz condition, as suggested in [15], and denote the resulting method as _fairIndvCox-_. The results in Table 3 show that fairIndvCox outperforms fairIndvCox- in minimizing discrimination for all datasets by large margins, and also in terms of predictive performance, except for a small decrease on the ROSSI dataset. This verifies that relaxing the Lipschitz constant specification in individual fairness guarantees can lead to improved performance.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c}{FNDCG@10\%} & \multicolumn{2}{c}{C-index\%} \\ & fairIndvCox- & fairIndvCox & fairIndvCox- & fairIndvCox \\ \hline ROSSI & 45.29 & 53.29 & 64.42 & 64.12 \\ COMPAS & 77.39 & 85.64 & 60.14 & 69.17 \\ KKBox & 54.02 & 68.67 & 82.71 & 83.33 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the comparison study on the Lipschitz-free bias quantification.

### Effect of Different \(\gamma\) Values on Individual Fairness and Accuracy

To investigate the effect of \(\gamma\) on the performance of fairIndvCox, we vary \(\gamma\) within the set \(\{e^{-4},e^{-3},\cdots,e^{4}\}\), where \(e\) is the natural constant, with all other hyper-parameters remaining the same, and compare the performance in terms of utility and individual fairness. According to the results shown in Figure 2, there are three cases of \(\gamma\) values. (1) For small values of \(\gamma\) (e.g., less than \(e^{-2}\) for ROSSI, \(e^{-3}\) for COMPAS, and \(e^{-2}\) for KKBox), the individual fairness constraint has little effect on fairIndvCox's performance in terms of model utility and FNDCG@10% for the three tasks.
\begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Method & FNDCG@10\% & C-index\% \\ \hline \multirow{5}{*}{ROSSI} & IDCPH & 43.41 & 52.28 \\ & CPH & 33.41 & 64.24 \\ & RSF & 36.17 & 65.56 \\ & DeepSurv & 31.43 & **66.67** \\ & fairIndvCox & **53.29** (**22.76\%**) & 64.12 (-3.82\%) \\ \hline \multirow{5}{*}{COMPAS} & IDCPH & 76.27 & 62.16 \\ & CPH & 73.51 & 69.24 \\ & RSF & 72.64 & 72.61 \\ & DeepSurv & 74.18 & **75.12** \\ & fairIndvCox & **85.64** (**12.29\%**) & 69.17 (-7.92\%) \\ \hline \multirow{5}{*}{KKBox} & IDCPH & 56.61 & 72.61 \\ & CPH & 47.32 & 80.02 \\ & RSF & 42.41 & 82.32 \\ & DeepSurv & 43.45 & 83.01 \\ & fairIndvCox & **68.67** (**21.30\%**) & **83.33** (0.39\%) \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results of the different models, with the best results marked in bold. The numbers in parentheses represent the relative performance improvement of fairIndvCox compared to the best baseline.

(2) As the value of \(\gamma\) increases progressively (e.g., from \(e^{-2}\) to \(e^{1}\) for ROSSI, from \(e^{-3}\) to \(e^{1}\) for COMPAS, and from \(e^{-2}\) to \(e^{1}\) for KKBox), fairness increases significantly, but at the cost of some sacrifice in accuracy. This implies that fairIndvCox achieves an appropriate balance between fostering individual fairness and preserving model performance. (3) If \(\gamma\) is relatively large (e.g., larger than \(e^{1}\) for all the datasets), the strength of the individual fairness promotion continues to affect accuracy. Moreover, FNDCG@10% decreases as the value of \(\gamma\) increases. The reason is that a larger \(\gamma\) places an increasing weight on \(\mathcal{L}_{\text{fairness}}\), but this does not mean that a better optimum is obtained; the performance of individual fairness promotion within a fixed number of epochs is therefore close to its limit, and it is difficult to achieve better performance.

### Effect of Different Numbers of Neighbors \(k\) on Individual Fairness and Accuracy

Similar to the previous section, we conducted experiments with a variety of values for \(k\) in \(\{4,7,10,15,20,30,50\}\), keeping all other training factors the same, and compared the predictive performance and fairness of the resulting models. Based on the trends presented in Figure 3, the following observations can be made: (1) As the value of \(k\) increases, fairIndvCox achieves better performance on _FNDCG@10%_, showing better optimization for individual fairness. (2) When \(k\) is a modest value (e.g., smaller than 15 for ROSSI, 20 for COMPAS, and 15 for KKBox), the model utility performance is hardly affected or even increases. This indicates that fairIndvCox strikes the right balance between maintaining model utility and fostering individual fairness with a proper choice of \(k\) in the optimization.
(3) When \(k\) is large (e.g., greater than 15 for ROSSI, 20 for COMPAS, and 15 for KKBox), the model utility performance declines significantly. As the value of \(k\) increases, more points are referenced at a time, which introduces more interfering values. This leads to a decrease in the weight of the correct label and a blurred classification, and thus to a reduction in model performance.

Figure 2: Study of the individual fairness and accuracy trade-off with respect to \(\gamma\): the fairIndvCox models subject to different \(\gamma\) values (between \(e^{-4}\) and \(e^{4}\)) on ROSSI, COMPAS, and KKBox exhibit effects on both individual fairness and model accuracy.

Figure 3: Study of the choice of \(k\): the fairIndvCox models subject to different \(k\) values (between 4 and 50) on ROSSI, COMPAS, and KKBox exhibit effects on both individual fairness and model accuracy.

## 6 Conclusion

This work highlights the gap between prevailing real-world applications with censorship and the class-label availability assumption of existing AI fairness methods. We make an initial investigation into individual fairness guarantees in learning with censorship. In addition, this work goes a step further to define individual fairness from a ranking perspective, thus relaxing the Lipschitz constant specification of conventional individual fairness studies. The proposed notion and algorithm are expected to be versatile in quantifying and mitigating bias in various socially sensitive applications. We provide an empirical evaluation on three real-world datasets to validate our proposed method's effectiveness. The experimental results show that with suitable \(\gamma\) and \(k\) values, our method can substantially improve individual fairness with an acceptable loss of predictive performance, and the model outperforms the current state-of-the-art individual fairness promotion methods. Finally, this work defines a new task and opens possibilities for future work on a comprehensive study of AI fairness.
2301.07676
A Workflow Model for Holistic Data Management and Semantic Interoperability in Quantitative Archival Research
Archival research is a complicated task that involves several diverse activities for the extraction of evidence and knowledge from a set of archival documents. The involved activities are usually unconnected, in terms of data connection and flow, making difficult their recursive revision and execution, as well as the inspection of provenance information at data element level. This paper proposes a workflow model for holistic data management in archival research; from transcribing and documenting a set of archival documents, to curating the transcribed data, integrating it to a rich semantic network (knowledge graph), and then exploring the integrated data quantitatively. The workflow is provenance-aware, highly-recursive and focuses on semantic interoperability, aiming at the production of sustainable data of high value and long-term validity. We provide implementation details for each step of the workflow and present its application in maritime history research. We also discuss relevant quality aspects and lessons learned from its application in a real context.
Pavlos Fafalios, Yannis Marketakis, Anastasia Axaridou, Yannis Tzitzikas, Martin Doerr
2023-01-18T17:53:52Z
http://arxiv.org/abs/2301.07676v1
A Workflow Model for Holistic Data Management and Semantic Interoperability in Quantitative Archival Research ###### Abstract Archival research is a complicated task that involves several diverse activities for the extraction of evidence and knowledge from a set of archival documents. The involved activities are usually unconnected, in terms of data connection and flow, making difficult their recursive revision and execution, as well as the inspection of provenance information at data element level. This paper proposes a workflow model for holistic data management in archival research; from transcribing and documenting a set of archival documents, to curating the transcribed data, integrating it to a rich semantic network (knowledge graph), and then exploring the integrated data quantitatively. The workflow is provenance-aware, highly-recursive and focuses on semantic interoperability, aiming at the production of sustainable data of high value and long-term validity. We provide implementation details for each step of the workflow and present its application in maritime history research. We also discuss relevant quality aspects and lessons learned from its application in a real context. ## 1 Introduction Archival research is a type of research which involves investigating and extracting evidence from archival records usually held in libraries, museums or other organisations. In its most classic sense, archival research involves the study of historical documents, thus it lies at the heart of original historical research (Ventresca and Mohr, 2017). A large body of research in the field concerns the study of archival documents that have a _repetitive_ structure, such as registers, logbooks, payrolls, censuses, etc., and which provide information about one or more _types of entities_, such as persons, locations, objects, organisations, etc. Research in this case usually starts by first collecting a set of archival documents related to a domain of interest, which are then transcribed and curated for enabling quantitative (but also qualitative) analysis of empirical facts, their description and interpretation of possible causes, influences and evolution trends (Petrakis _et al._, 2020). Common data management problems in this context include: What data to transcribe and how? How to curate the transcribed data for enabling valid quantitative analysis and more effective exploration services? How to integrate the data under a common schema/model for supporting the investigation of information needs that require combining data from more than one source? How to support the long-term preservation and reuse of the data? How to maintain all data provenance information, which is important for the verification and the long-term validity of research findings that use the data? Consider, for instance, the real use case of the SeaLiT project1 (ERC Starting Grant in the field of _maritime history_), which studies the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea (1850s-1920s) (Delis, 2020). Historians in this project have collected and studied a large number of archival documents of different types and languages, such as crew lists, payrolls, and sailor registers, gathered from multiple authorities in five countries. Complementary information about the same entity of interest, such as a ship or a sailor, may exist in different archival documents. 
For example, for the same ship, one source (_accounts book_) may provide information about its owners, another source (_naval ship register list_) may provide construction details and characteristics of the ship (length, tonnage, horsepower, etc.), while other sources (_crew lists_) may provide information about the ship's voyages and crew. There might also be another source (_civil register_) that provides additional information about the crew members, such as their marital status and previous professions. Data integration is very important in this context, for supporting historians in finding answers to questions that require combining information from more than one source, such as "finding the nationality of sailors of large ships that arrived at a specific port". Footnote 1: [https://sealitproject.eu/](https://sealitproject.eu/) In addition, the name of the same entity (e.g. of a person) might be different in different sources due to typos, different language, unrecognisable characters, or use of abbreviation (e.g. 'G. Schiaffino', 'Gaetano Schiaffino', 'Gaetano Schiafino'). Moreover, the same term, such as a profession or a ship type, may appear under different names in different sources (e.g. 'brigantine', 'brigantino'). Data curation, in particular entity (instance) matching and term alignment, is crucial in this context for enabling valid quantitative analysis (like grouping a list of retrieved sailors by profession). However, at the same time, such curation must not alter the original transcribed data since this is important for verification and thus the long-term validity of the research findings. To cope with these problems, in this paper we describe a workflow model for holistic data management in archival research (depicted in Fig. 1). The workflow relies on the strong collaboration between researchers (domain experts) and data engineers (modeling experts), and focuses on _semantic interoperability_, the ability of computer systems to exchange data with unambiguous/shared meaning (Ouksel and Sheth, 1999), because such an approach supports the production of sustainable data of high value that can be extended and re-used beyond a particular research activity or project. The workflow was designed based on real users' needs and is provenance-aware, in the sense that it retains the full provenance chain of each piece of data. It achieves this by decoupling data entry from data curation and integration. The researcher can go back to the transcript or the original source and inspect the initial form of a piece of information. It is also highly-recursive, supporting the revision of the transcription, curation and integration steps, e.g. due to new knowledge acquired in the course of research. In comparison to related work, we treat the relevant activities in an holistic manner, paying particular attention on maintaining the provenance information at micro (data element) level, which is important for reproducible research in the age of Open Science (Vicente-Saez and Martinez-Fuentes, 2018). We showcase an implementation of the workflow model in a real use case in the field of maritime history and report empirical results from its application for satisfying real information needs of a large group of historians. We also discuss relevant data quality aspects and lessons learned. The rest of this paper is organised as follows: Section 2 provides the required background and describes related work. Section 3 provides an overview and the main characteristics of the proposed workflow model. 
Section 4 details how each step of the workflow model can be realised. Section 5 provides information about the automation of the workflow. Section 6 describes a real use case. Section 7 discusses quality aspects and relevant lessons learned. Finally, Section 8 concludes the paper and outlines future work. ## 2 Background and Related Work We first explain the basic notions about semantic technologies (Section 2.1) and review how such technologies are used in humanities research, a large part of which concerns archival research (Section 2.2). We then focus on the different data management activities towards semantic interoperability in archival research and present relevant works (Section 2.3). Finally, we position our work (Section 2.4). ### Basic Notions Semantic technologies aim at helping machines understanding data. RDF (Resource Description Framework)2 and OWL (Web Ontology Language)3 are key semantic technologies that enable encoding the semantics of data, thus allowing to formally represent the meaning involved in information (Antoniou and Van Harmelen, 2004). This representation has the form of a _semantic network (or _knowledge graph_) which stores interlinked descriptions of "entities" (objects, persons, events, concepts, etc.) in a graph structure in which vertices represent entities and edges represent semantic relations between the entities. Typical standardized semantic networks are expressed as RDF triples (statements of the form _subject-predicate-object_) stored in a semantic repository (RDF triplestore) (Ali _et al._, 2021). Semantic technologies help achieving semantic interoperability, the ability of computer systems to exchange data with unambiguous/shared meaning, which is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems (Ouksel and Sheth, 1999). ### Semantic Technologies for Humanities Research There is an increasing adoption of semantic technologies in the humanities field, with a main focus on how to produce and make publicly available interoperable _Linked Data_(Heath and Bizer, 2011) that can be easily queried and integrated with other datasets (Hyvonen, 2020; Hyvonen _et al._, 2014; Hawkins, 2021; Beretta, 2021; Fafalios _et al._, 2021). Oldman _et al._ (2015) provide a critical discussion on how semantic technologies and the idea of Linked Data are used in humanities research, and describe strategies for the wider adoption of these technologies for supporting high-quality digital humanities projects and the production of data that better represents human knowledge and better reflects the needs of humanities researchers. Hawkins (2021) examines how Linked Data about archives is beneficial for those engaged in digital humanities research and scholarship, considering some of the barriers that currently prevent digital humanists from being able to utilise digitised and born-digital archives. We believe that the workflow model that we propose, in particular its provenance-awareness at data element level, is a first step towards tackling some of the major issues described in the aforementioned works, such as the ability "to trace the provenance of knowledge back to the source micro-level (with its original context and perspective intact)" (Oldman _et al._, 2015, p.10), or "preventing the decontextualisation and loss of nuance of archives" (Hawkins, 2021, p. 11). 
With respect to historical research, for which archival research is a core part, Merono-Penuela _et al._ (2015) survey the joint work of historians and computer scientists in the use of semantic technologies. The article provides an extensive analysis on works and systems for knowledge modelling, text processing and mining, search and retrieval, and data integration. It also discusses aspects of semantic technologies that could be furtherly exploited in historical research. Such an aspect is the "non-destructive data transformations" (Merono-Penuela _et al._, 2015, p. 22). Decoupling data entry from data curation and transformation, and maintaining a recursive workflow between these processes, are core characteristics of the proposed workflow model that help towards this direction. ### Data Management for Semantic Interoperability in Archival Research Common data management activities for enabling semantic interoperability in archival research include: * digitization / transcription of archival documents (scanning of documents, text recognition, manual transcription) * documentation / metadata recording (what is the origin of a document, what is the document about, who makes the transcription, etc.) * data curation / preparing the data for statistical analysis (correction or normalisation of data values, instance matching, term alignment, etc.) * data integration under a common representation language (ontology-based modeling, creation of mappings, data transformation) * data publication (e.g. as Linked Data) * data analysis and exploration (qualitative and/or quantitative analysis, query building, data visualisation, etc.) There is a plethora of software tools and systems for each of these activities. Below we present relevant works that have a focus on humanities research. **Digitization/Transcription.** One can either use text recognition software for automatically extracting text from historical documents, or manually perform the transcription process, each approach having its pros and cons. For example, the automated approach usually needs large amounts of training data and its effectiveness (quality of results) highly depends on the kind/quality of text to be extracted and the amount of training data. On the other hand, manual transcription provides high quality results but it requires a lot of effort. A mixed method is to combine automated extraction with manual correction and data entry. Regarding software tools, Transkribus (Kahle _et al._, 2017) is a popular platform for the digitisation of historical documents, offering AI-powered text recognition. FastCat (Fafalios _et al._, 2021) is a web application for manual and collaborative transcription based on templates. It organises the data (and metadata) in tabular forms (tables), similar to spreadsheets, offering a fast and user-friendly way to data entry. **Documentation / metadata recording.** There are two main approaches for documentation towards semantic interoperability: a) decoupling the documentation process from the ontology-based integration and the production of the semantic network, b) creating the semantic network from the very beginning, i.e. during the documentation process. Synthesis (Fafalios _et al._, 2021) is a web-based system that applies the first approach for the collaborative and scientific documentation of cultural entities (objects, events, persons, organisations, etc.), offering embedded processes for transforming the data to an ontology-based RDF dataset. 
ResearchSpace (Oldman and Tanase, 2018) and WissKi (Scholz and Goerz, 2012) are platforms that apply the second approach, supporting the direct ontological representation of (meta)data. Spreadsheet software, such as Microsoft Excel, and relational database management systems (RDBMS), like Microsoft Access, are still popular (and probably the dominant) tools for (meta)data entry and analysis, and are extensively used for manual documentation and metadata recording. There are also RDBMS-based systems, such as HEURIST4 and nodegoat5, that are tailored to humanities researchers and which combine a set of functionalities for building and managing research datasets, without however focusing on semantic interoperability. Footnote 4: [http://heuristnetwork.org/](http://heuristnetwork.org/) Footnote 5: [https://nodegoat.net/](https://nodegoat.net/) **Data curation.** This is an optional step which is usually undertaken when a quantitative (statistical) analysis of the transcribed data is needed. In such a case, curation is very important because data quality can affect the reliability of the analysis results. OpenRefine6 is a popular desktop application for data cleaning. It operates on rows of data which have cells under columns (similar to relational tables). Silk7(Volz _et al._, 2009) is an open source framework for finding links between related data items, e.g. for instance matching. It provides a declarative language for specifying linkage rules and support of RDF link generation, through _owl:sameAs_ or other types of links. For fully-automated instance matching (entity resolution), there is a plethora of learning-based methods that require manually or automatically generated training data (Christophides _et al._, 2020). Finally, the FastCat system (Fafalios _et al._, 2021) offers a web-based environment, called FastCat Team, which supports both automated (rule-based) and manual instance matching and vocabulary curation processes. The applied curation does not alter the original (transcribed) data and maintains links from the curated to the original data. Footnote 6: [https://openrefine.org/](https://openrefine.org/) Footnote 7: [http://silkframework.org/](http://silkframework.org/) **Data integration.** The objective here is to semantically represent all data and metadata using a domain (formal) ontology (as the common representation language), in order to enable semantic interoperability and make the data exploitable beyond a particular research problem or project. This activity includes the _data modeling_ and _data transformation_ processes. Data modeling consists of defining or selecting the domain ontology and creating the schema mappings, while data transformation transforms the data based on the schema mappings and creates the semantic network of integrated data. Regarding software systems, Protege is a popular ontology editor which provides a graphic user interface to define ontologies. It can be used for creating a new ontology for a given domain in OWL, or for building an extension of an existing ontology. For the creation and execution of schema mappings, R2RML8 is a W3C standard for mapping relational databases into RDF, while Dimou _et al._ (2014) describe an extension called RML for mapping heterogeneous sources into RDF. Finally, the X3ML toolkit (Marketakis _et al._, 2017) provides a declarative (XML-based) mapping definition language as well as a set of tools for the creation and execution of schema mappings. 
The toolkit also supports the maintenance of the schema mappings and the actual transformation of the data to RDF.

**Data publication.** The integrated data can now be imported into a semantic repository (RDF triplestore), either publicly available or private, which offers an Application Programming Interface (API) for accessing the data and running structured queries using the SPARQL9 protocol and language. Then, user-friendly applications can be built on top of this API for supporting end users in exploring and analysing the integrated data. The data can also be published as Linked Data, following the Linked Open Data (LOD) principles (Heath and Bizer, 2011). The Sampo model10(Hyvonen _et al._, 2014) provides a framework for the collaborative publishing and use of LOD, which has been tested in several domains by building the so-called 'Sampo portals' (Hyvonen and others, 2020). Footnote 9: [https://www.w3.org/TR/sparql11-overview/](https://www.w3.org/TR/sparql11-overview/) Footnote 10: [https://seco.cs.aalto.fi/applications/sampo/](https://seco.cs.aalto.fi/applications/sampo/)

**Data exploration and analysis.** There are two main general methods that can be used for exploring the integrated data: (a) _free text search_: the user provides a set of keywords or a natural language question, as in ad-hoc information retrieval; (b) _interactive interface_: the user is supported by the system to express an information need through a user-friendly interactive interface. In both cases the result is (usually) a ranked list of entities from which the user can start exploring relevant information, e.g. through browsing, faceted search, or different visualisations such as charts, maps, timelines, etc. There is a plethora of different methods for implementing keyword search over RDF data, e.g. using a document-centric information retrieval system (Kadilierakis _et al._, 2020), or by translating a keyword query to a structured (SPARQL) query (Izquierdo _et al._, 2021). For the presentation of the keyword search results, Nikas _et al._ (2020) suggest a multi-perspective approach that offers multiple presentation methods (perspectives), allowing the user to easily switch between these perspectives and thus exploit the added value of each one. Regarding interactive interfaces, A-Qub (Kritsotakis _et al._, 2018) and ResearchSpace (Oldman and Tanase, 2018) offer user-friendly environments which support end users in gradually building complex questions (corresponding to SPARQL queries) that associate different types of entities and information.

### Positioning

To the best of our knowledge, there is no related work that approaches the data management part of archival research in a holistic manner, in the sense that the proposed workflow model enables the representation and efficient management of information, applies semantic data integration facilities in order to provide a rich knowledge graph of archival data, and at the same time preserves the full provenance chain, allowing researchers to traverse from the final semantically integrated collection back to the original and transcribed manuscripts and vice versa.

## 3 Workflow Model: Overview and Main Characteristics

We first provide an overview of the workflow model (Section 3.1) and then highlight its distinctive characteristics (Section 3.2).

### Roles, Input/Output and Processes

Fig. 1 depicts the proposed workflow model for supporting holistic data management in archival research.
**Roles.** There are two main roles engaged in the workflow: a) the _researcher (domain expert / end user)_, who collects and studies the archival material, provides domain knowledge, and defines requirements, and b) the _data engineer (modeling expert)_, who designs and implements the different workflow processes.

**Input/Output.** The input of the workflow is a set of _archival documents_ gathered from different authorities by one or more researchers, together with _information needs_ provided by the researchers that are related to their research aims and for which the gathered archival material can provide important information (evidence). The gathering of information needs is very useful at this stage because it allows data engineers to better design and implement the next workflow processes. The output of the workflow is a rich semantic network (a _knowledge graph_) of integrated information, which is used by the researchers for data analysis and exploration, as well as two distinct, intermediate databases: a database of records (original transcripts), and a database of curated data (curated entity instances and vocabulary terms).

Figure 1: Workflow model for holistic data management and semantic interoperability in archival research.

**Process 1: Creation of Source Schemas.** Following the description-logic-based framework for information integration introduced by Calvanese _et al._ (1998), we first need to create the source schemas, one for each different type of source, which provide the required data entry forms in a software system for the transcription and documentation of the original archival documents. This first step enables data curation and consolidation relative to the source model semantics, as well as modeling and integration under a common ontology which can be modified in the course of research, without this affecting/delaying the transcription process. The close collaboration between the researchers and the data engineers is very important in this process for properly designing the schemas and avoiding mistakes during data entry that can cause difficulties/limitations in the next steps. An example of such a mistake is the use of a single data entry field for the recording of a measurement unit and value. This is very likely to cause issues for the end user when they want to perform comparisons during data exploration. The creation of a new source schema, or a modification/extension of an existing one, will be required if new archival documents of a different type of source are gathered by the researchers and need to be transcribed. This can happen at any stage of the overall pipeline and does not affect the other processes, which can run in parallel for the existing gathered material.

**Process 2: Transcription.** After having created one or more source schemas for the gathered archival documents, the researchers can begin the _transcription_ of the documents using a software system that offers the required data entry forms. Apart from the transcription of the important document contents, this step includes the recording of metadata information for both the documents (archive/library, dating, etc.) and the transcription process (who makes the transcription, etc.). The result of the transcription process is a database of transcripts. This is a task solely performed by the group of researchers, but which can make use of software tools for facilitating/automating transcription, such as text recognition software.
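To make the notions of source schema and transcript record more concrete, the following is a purely illustrative sketch; the field names and the validation rule are assumptions made for the example, not the actual schemas used in the project. Note how the measurement value and unit are kept in separate fields, following the advice given above.

```python
# Illustrative "source schema" for a crew-list type of source and a conforming record.
CREW_LIST_SCHEMA = {
    "metadata": ["archive", "document_id", "transcription_date", "transcriber"],
    "ship": ["ship_name", "ship_type", "tonnage_value", "tonnage_unit"],
    "crew_member": ["first_name", "last_name", "profession", "birth_place"],
}

record = {
    "metadata": {"archive": "Example State Archive", "document_id": "CL-0042",
                 "transcription_date": "2022-03-01", "transcriber": "researcher A"},
    "ship": {"ship_name": "Example Ship", "ship_type": "brigantino",
             "tonnage_value": "120", "tonnage_unit": "tons"},
    "crew_members": [
        {"first_name": "Gaetano", "last_name": "Schiaffino",
         "profession": "capitano", "birth_place": "[not recorded]"},
    ],
}

def validate(record, schema):
    """Check that each part of the record contains the fields defined by its schema section."""
    sections = {"metadata": [record["metadata"]], "ship": [record["ship"]],
                "crew_member": record["crew_members"]}
    for section, rows in sections.items():
        for row in rows:
            missing = [f for f in schema[section] if f not in row]
            if missing:
                raise ValueError(f"{section}: missing fields {missing}")

validate(record, CREW_LIST_SCHEMA)
```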
**Process 3: Curation.** The next step of the workflow is the _curation_ of the transcribed data. At this stage researchers need to harmonise the different data elements that appear in the transcripts and resolve identity ambiguities, so that different elements that co-refer to the same real-world entity/concept receive the same identifier, and false co-references are disassociated. The data elements can be divided into two main categories: (a) _universals_; concept instances that belong to a specific vocabulary or thesaurus of terms, such as professions, object types, etc., and (b) _particulars_; entity instances that belong to specific categories and are accompanied by characteristics/properties, such as _persons_ (first name, last name, birth date, etc.), _locations_ (name, type, etc.), _organisations_ (name, location, etc.). Curation can also include the provision of corrected/preferred values (e.g. correcting the first name of a person instance) or the entity enrichment (e.g. adding coordinates to a location instance), tasks which are usually important for better data exploitation and visualisation by the external services that operate over the curated and integrated data. The curation process is a task performed by the group of researchers and may include both manual and automated steps. For example, instance matching of entities, or alignment of vocabulary terms, can comprise both an automated step (based on rules) and a manual step (for validation of ambiguous cases). The result is a distinct database of curated data, with links to the original data elements, which means that the curation step does not alter the data as transcribed from the original sources. **Process 4: Ontology-based Data Integration.** The next step is the ontology-based integration of the transcribed and curated data, which includes the _modeling_ and _transformation_ sub-processes. For modeling, the good practice suggests to either use an established domain model (if such a model is available for the application domain), or create a new model (a specialised extension) that is compatible to an established upper ontology. This process usually requires extensive discussions between the domain experts, who know the data, and the data engineers, who build the domain ontology and create the mappings. An important part of the modeling process is the creation of the _schema mappings_ that describe how the input data (transcripts and curated data) are mapped to classes and properties of the domain ontology. In general, the creation of the schema mappings can be a time-consuming process when the source schemas are many and large/complex. Nevertheless, it needs to be done only once for each different type of source, while revisions may be required if there are changes in the schemas or the target ontology. The use of a declarative language for defining the mappings, such as X3ML (Marketakis _et al._, 2017), is recommended because local changes in the sources require local changes in the mapping specifications that are easy to locate and perform. The _transformation_ process takes as input i) the databases (outputs of transcription and curation processes), ii) the domain ontology, and iii) the schema mappings, and produces a rich semantic network of integrated data. This step can be fully automated and can repeated for any new data sources that are transcribed and curated, as long as there is no change in the transcription schemas. 
**Process 5: Research, analysis, exploration.** The resulting semantic network of integrated data is exploited by the researchers through one or more services that operate over the semantic network and which offer user-friendly interfaces for data browsing, analysis, and exploration. Here it is important for the end users to be able to go back to the transcripts, or even the scans of the original sources, for inspecting the initial form of a piece of information (before its curation and transformation), or for gathering further contextual information. In addition, in the course of research, a user may identify that corrections are needed in the transcribed or curated data; thus researchers need to be able to revisit the transcription and curation steps, make corrections, and then re-transform (automatically) the data for updating the semantic network. Likewise, new archival documents might be collected at any time, which means that one or more new source schemas and corresponding mappings might need to be created for enabling their transcription, curation and transformation.

### Workflow Distinctive Characteristics

Below we highlight and motivate the distinctive design and methodological characteristics of the proposed workflow model:

* **Strong collaboration between researchers (domain experts) and data engineers (modeling experts).** Such a collaboration is required for a) better designing the source schemas (and the corresponding data entry forms), b) better defining/designing the target (domain) ontology and creating the schema mappings, and c) better creating/configuring the user interfaces of the data exploration service(s).
* **Decoupling data entry from data curation and maintaining links from the curated to the original data.** This is very important not only for maintaining the data provenance, verifying information, and thus validating the research findings that make use of the data, but also because data curation and consolidation may be ambiguous and require further research and repeated revision at any time in the future (by the same or other researchers).
* **Separating source schema creation from ontology modeling.** We aim at removing the bias of the initial research hypothesis from the target (integration) model, one of the most severe philosophical problems of unbiased research and at the core of the discussion about scientific realism (Turner, 2007; Chapman and Wylie, 2018). The target model (ontology) can be developed in parallel with the data entry process and can be re-adapted at any time to new insights from the sources, without invalidating the entered data and without this affecting (or delaying) the transcription and curation processes.
* **Separating the databases (of transcripts and curated data) from the semantic network.** Decoupling data entry and curation from the creation of the semantic network enables maintaining the semantics of the source model by keeping the transcripts as close to the original (archival) document as possible (trying to maintain their original structure), offering at the same time a familiar way of data entry that can highly speed up this time-consuming process. In addition, this allows the straightforward production of different versions of the semantic network, considering different ontologies or different versions of the same ontology (this only requires creating the schema mappings based on the desired target model).

## 4 How to Realise the Workflow

We now provide implementation details for realising the workflow.
### Faithful, Fast and Collaborative Data Transcription Common requirements that a data transcription system should satisfy, include: * Supporting the _faithful_ and _structured_ transcription of information from the archival documents (as exact to the original information as possible), as well as the recording of _metadata_ information. * Supporting _fast_ data entry through an intuitive user interface that researchers are familiar with or can quickly get familiar with. * Supporting the _collaborative_ transcription by more than one researcher, making use of the same structures (source schemas) for data entry. These characteristics can highly affect the usability of the data entry system and thus its acceptance by the end users (researchers). For enabling the next _data curation_ process, we first need to identify what are the main entity categories (like persons, locations, objects, etc.) and the main vocabularies or hierarchies of terms that appear in the transcribed data and need curation. To this end, we need to define the fields in the data entry forms that provide entity or term related information. For example, the data entry fields _first name_ and _last name_ provide information for a person instance, while the field _profession_ provides a vocabulary term. The values of these fields must be copied (ideally, automatically) to a new environment that allows for their curation without altering the original data as it appears in the transcripts. We then only need to provide a link from the curated to the original data and/or position information (e.g. record name, table name, row number), in order to retain the provenance information. ### Provenance-aware Data Curation Data curation activities that need to be supported by a dedicated software system include: * _Correcting_ the name of an entity or the value of one of its properties (by setting a preferred label). * _Instance matching:_ matching two or more entity instances that refer to the same real-world entity, which means that they must receive the same identity. * _Instance unmatching_: unmatching a specific entity instance from a set of automatically matched instances, which means that the instance will receive a different identity. * _Enrichment_: complementing an entity instance with additional information, like adding coordinates to a location. * Providing a _preferred term_ for a vocabulary term (e.g. a term from a fixed thesaurus, or a term in English for a term in another language). * Providing a _broader term_ for a vocabulary term (thereby creating an hierarchy of terms). Instance matching in this context can be multi-stage. A first automated step can assign the same identity to a set of entity instances having some common characteristics, e.g. common first name, last name, and birth date, in the case of person instances (rule-based approach), or make use of machine learning techniques (supervised or semi-supervised approach) (Christophides _et al._, 2020). Then, a second manual step (performed by the researchers) can match additional entity instances that the automated step did not manage to match, or unmatch an entity instance that was incorrectly matched to other instances by the automated step. The instance matching/unmatching activities and the provision of preferred terms for vocabulary terms are of key importance for valid quantitative (statistical) analysis over the integrated data. 
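As a minimal sketch of the automated first step, the rule-based pass below groups person instances that share a first name, last name and birth date under one candidate identity; the field names and the naive normalisation are illustrative assumptions rather than the system's actual implementation.

```python
from collections import defaultdict

def normalise(value):
    """Crude normalisation; a real pipeline would also handle spelling variants."""
    return (value or "").strip().lower()

def candidate_matches(persons):
    """Rule-based first pass: person instances sharing first name, last name and
    birth date receive the same candidate identity (the group key)."""
    groups = defaultdict(list)
    for person in persons:
        key = "|".join(normalise(person.get(field))
                       for field in ("first_name", "last_name", "birth_date"))
        groups[key].append(person)
    return groups

# Researchers then review each group in the curation environment, unmatching false
# positives and matching instances the rule missed (e.g. abbreviated first names).
```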
Consider, for example, that a researcher who studies archival documents related to maritime history (like crew lists) wants to find the birth place of sailors that arrived at a specific port, or group them by their profession. Providing the same identity to all sailor instances that represent the same real-world person, as well as providing the same 'preferred' term for all different professions that correspond to the same profession, ensures that the generated aggregated information is correct. ### Ontology-based Integration The ontology-based integration of the transcribed and curated data consists of the below tasks: 1. Data modeling using a domain ontology. 2. Creation of schema mappings and definition of how to generate the entity identifiers (URIs). 3. Running the transformations for producing the semantic network of integrated data. #### 4.3.1 Data modeling. CIDOC-CRM11(Doerr, 2003) is a high-level, ISO standard ontology (ISO 21127:2014)12 of human activities, things and events happening in space and time, thus it can be used for modeling the transcribed data and supporting semantic interoperability and long-term data preservation. Depending on the application domain, an extension of CIDOC-CRM might be required for specialising particular notions of interest. For instance, in our use case we created the SeaLiT Ontology, an extension of CIDOC-CRM for the modeling and integration of data related to maritime history (more in Section 6). For semantic data management using CIDOC-CRM, Tzitzikas _et al._ (2022) analyse the relevant processes and tasks, and review the literature on applying machine learning techniques for reducing the costs related to compliance and interoperability based on CIDOC-CRM. **Mapping & Generation of Identifiers.** This step defines how the transcribed and curated datasets will be transformed so that they will eventually construct the semantic network. The challenge is to preserve the full provenance chain, from the curated data to the original data of the transcript of the source, so that researchers can easily validate, further improve, or seek for further information. The first part of this step is the definition of the schema mappings, that identify which parts from the input schema (e.g. a particular table column) will be mapped to concrete classes and properties of the domain ontology, ensuring that the semantics of the original data are well-defined, non-ambiguous, and no data is lost. The second part defines how resource URIs and labels will be generated. At this point URIs will be used as the 'glue' connecting relevant pieces of information. Fig. 2 shows an indicative example on how URIs are used for establishing such connections. In this example there are two different transcription records, each one of them describing various persons. In one of them there is a person called 'Agostino B??ndi' (i.e. the question marks reveal that the characters could not be recognised from the original source), and in another one there is a person called 'A Brondi'. For these persons two different URIs are created, since their names do not match and also they are found in different records. However in the curated dataset, historians agreed that these references point to the same person. Therefore a new person instance is created, with a new URI, linked to the previous ones. This new instance is called'master', while the linked instances are considered 'local'. Figure 2: Identity (URI) management and provenance chain. 
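The identity management pattern of Fig. 2 can be sketched as follows; the base URIs and the linking property `hasLocalInstance` are hypothetical placeholders, but the structure (local instances kept exactly as transcribed, a new master instance minted by the curators and linked to them) is the one described above.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

KB = Namespace("https://example.org/kb/")         # placeholder base URI
ONT = Namespace("https://example.org/ontology/")  # placeholder ontology namespace

g = Graph()

# 'Local' person instances, one per transcription record, kept as transcribed.
local_a = KB["person/record1_agostino_b__ndi"]
local_b = KB["person/record2_a_brondi"]
for local, label in [(local_a, "Agostino B??ndi"), (local_b, "A Brondi")]:
    g.add((local, RDF.type, ONT.Person))
    g.add((local, RDFS.label, Literal(label)))

# Curation decision: both references denote the same real-world person, so a new
# 'master' instance is minted and linked to the local ones (hypothetical property).
master = KB["person/master_agostino_brondi"]
g.add((master, RDF.type, ONT.Person))
g.add((master, RDFS.label, Literal("Agostino Brondi")))
g.add((master, ONT.hasLocalInstance, local_a))
g.add((master, ONT.hasLocalInstance, local_b))
```

Queries for analysis then address the master instance, while the links back to the local instances preserve the provenance chain to the original transcripts.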
Each URI consists of three parts: (a) the URI prefix which is common for all the resources, (b) the type or hierarchy of the resource, (c) the actual or hashed content of the resource. An indicative URI is: [https://rs.sealitproject.eu/kb/location/sardinia](https://rs.sealitproject.eu/kb/location/sardinia). We should also mention that there are cases where the aforementioned strategy is not applied. An indicative case is the construction of intermediate nodes in the semantic network, for which a URI is not required (e.g. the 'E67 Birth' event). In such cases a random UUID is assigned for them. **Transformation.** This step takes as input (a) the transcribed and curated datasets and (b) the definitions of the schema mappings and URI generators, and produces the ontological instances (RDF triples) with respect to the domain ontology, that are the core contents of the semantic network. This step does not require any human intervention and can be fully automated. One apparent advantage of this automation is that the semantic network can be fully or partially refreshed as soon as new data have been transcribed and/or researchers have curated more data. A good practice for managing the semantic data in terms of updating and versioning flexibility is the use of _named graphs_[1], one for each source record. When there is a new version of a record, or of its mapping definition file, the record output produced with a new workflow cycle can be easily integrated in the semantic repository by replacing the RDF data in the corresponding named graph. Also, the hierarchies of terms and locations can be effectively managed and updated in distinct named graphs, as well as the result of the _materialisation_ process for semantically inferred statements (the production of new RDF triples as shortcuts that represent long paths, for improving query performance). ### Semantic Network Exploitation The integrated data of the semantic network can now be exploited as a primary source for archival research. This includes finding answers to complex information needs and analytical queries that require combining information from different sources, as well as visualising the results in different forms, such as tables, charts, timelines, or maps, for direct use in research. The actual information needs depend on the application domain and the type of exploration or analysis needed by the end users. The challenge here is to provide researchers with user-friendly and intuitive-to-use interfaces that they can trust for expressing their information needs and finding relevant information. Thus, the key success factors of such data exploration services are usability and trustworthiness. The latter can be achieved by enabling users to directly inspect the provenance of the displayed information, by allowing them to directly visit the transcript containing the information, or even a scan of the original archival document. Some general categories of information needs include: (i) finding information about a particular entity, such as the birth date and place of a person; (ii) retrieving a list of entities based on one or more properties of these entities (e.g. all persons having a specific residence location); (iii) grouping a list of retrieved entities based on some property or characteristic (e.g. grouping all retrieved persons by their profession); (iv) finding comparative information related to some entities (e.g. number of persons employed by the organisation in different time periods).
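For instance, information need (iii) could be expressed as a grouping query of the following kind; the ontology prefix and property names are placeholders, the query is run through rdflib here only to keep the sketch self-contained (in practice it would be posed to the triplestore behind the exploration service), and persons without a recorded residence are deliberately counted under an explicit 'Unknown' value, a point taken up in the next paragraph.

```python
from rdflib import Graph

g = Graph()
# g.parse("integrated_data.ttl")  # the output of the transformation step

query = """
PREFIX ont:  <https://example.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?residence (COUNT(?person) AS ?n) WHERE {
  ?person a ont:Person .
  OPTIONAL { ?person ont:hasResidenceLocation ?loc .
             ?loc rdfs:label ?locLabel . }
  # Persons with no recorded residence fall into an explicit 'Unknown' bucket.
  BIND(COALESCE(?locLabel, "Unknown") AS ?residence)
}
GROUP BY ?residence
ORDER BY DESC(?n)
"""

for row in g.query(query):
    print(row.residence, row.n)
```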
Finally, a strategy on how to handle missing values in the data, which is very common for certain types of archival documents, is very important in order to get valid aggregated information and draw safe conclusions. For example, the residence location for some persons might be empty in the original document. When grouping a set of persons by their residence location, there must be an 'unknown' value for this missing information. ## 5 Workflow Automation The systems used in the transcription and curation processes need to intercommunicate for automating the copy of the data elements (entities, terms) that need curation from the transcription system to the curation system. Then, an important part of the workflow can be fully automated as long as the modeling process has been completed and the mappings for all different source schemas have been created (tasks that need to be done _once_ for each different type of archival documents). In this case, new transcribed and curated data can be automatically transformed and imported in the semantic repository of integrated data, and thus directly be explored by the end users through the data exploration application. Specifically, the workflow scenario is the following: a group of researchers have collected a first set of archival documents and the data entry forms have been created in a dedicated system for each different type of source. The researchers start the transcription process. When transcription has been completed for the collected set of archival documents, the data elements that need curation are automatically copied to the curation environment and researchers start curating them. At the same time, data engineers, with the support (domain knowledge) of the researchers and by studying the available material evidence and the experts' requirements, define the target (domain) ontology and create the schema mappings for each different type of source. When both the transcription and curation processes have been completed for all (or a large set) of the archival documents, and the corresponding schema mappings have been created, researchers can 'publish' the data, which means that the transformation process is executed and the semantic network is created and ingested in a semantic repository. Researchers can then start exploring the integrated data through the user-friendly interface of an application that operates over the semantic repository. At any time, researchers can transcribe and curate new archival documents, or make corrections in the existing (curated) data due to new knowledge acquired in the course of research, and then re-execute the transformation process and update the semantic repository automatically. The changes in the semantic repository are directly (and automatically) reflected in the data exploration application. The entire set of archival documents to be considered by the researchers does not need to be known from the beginning, meaning that new documents might be collected for transcription at any time. In this case, creation of new source schemas (data entry forms) is needed if such new documents belong to a new type of source which is different from the existing ones. Accordingly, changes in an existing data entry form might be needed (e.g. addition of a new column) in order to enable the transcription of a new important type of information that was not originally planned or known for an existing type of source.
In both cases, revision/extension of the domain ontology might be needed, as well as creating new schema mappings or applying changes in the existing ones. Note here that, even if there are changes in the transcription schemas and the integration model, which actually occur during the course of a project, such changes are independent of the other transcription and curation processes performed (in parallel) by the researchers (thus, they do not affect or delay them). Moreover, the full automation of the data transformation step reduces the overhead for the researchers to the absolute minimum. The two steps of the workflow that are the most time consuming are the _transcription_ and _curation_ processes. As already stated, several sub-tasks in these two processes can be automated or semi-automated, e.g. using state-of-the-art text recognition software (Kahle _et al._, 2017), or applying automated instance matching / entity resolution (Christophides _et al._, 2020). Here the challenge is to find the best trade-off between fully automating the tasks and having results of high accuracy for enabling valid data analysis. We suggest semi-automated solutions that consider human-in-the-loop for ensuring high quality (Wu _et al._, 2022; Gurajada _et al._, 2019). ## 6 Use Case in Maritime History Research The workflow has been fully implemented in a real use case for supporting a large number of historians in managing a diverse set of archival sources related to _maritime history_. The context is the project SeaLiT13, in which maritime historians study the transition from sail to steam navigation and its effects on seafaring populations in the Mediterranean and the Black Sea (1850s-1920s). Footnote 13: [https://sealitproject.eu/](https://sealitproject.eu/) Below we provide details on how each process of the workflow was implemented and illustrate an example of how a real information need provided by the historians is satisfied by exploiting the integrated data. **Archival material and information needs.** The archival material studied in SeaLiT covers a variety of sources in five languages (Spanish, Italian, French, Russian, Greek), including crew and displacement lists, registers of different types (sailors, naval ships, students, etc.), logbooks, payrolls, account books, employment records, and censuses. Details about the full archival corpus and its origin are available in the project's web site.14 Footnote 14: [https://sealitproject.eu/archival-corpus](https://sealitproject.eu/archival-corpus) Our first task was to gather a set of information needs from the historians of SeaLiT, related to their research aims and for which the studied archival material can provide important information. This is fundamental for better designing the source schemas (data entry forms), the integration model, as well as the data exploration services. We collected around 100 information needs. Indicative examples are:15 Footnote 15: The full list of gathered information needs is available at [https://users.ics.forth.gr/~fafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf](https://users.ics.forth.gr/~fafalios/SeaLiT_Competency_Questions_InfoNeeds.pdf) * What are the places of construction of ships during a specific period? * What are the most popular European destinations (under a chronological perspective) of the ships from the Black Sea? * How many people that arrived at a specific place (e.g. Barcelona) have place of birth more than X miles away? * How many ship owners per ship during a specific period?
**Creation of source schemas and transcription.** The FastCat system (Fafalios _et al._, 2021b), which is available as open source software16, was used for the creation of the source schemas and the transcription of the archival documents by around 30 users in 5 countries (historians of SeaLiT). In FastCat, users can transcribe documents and provide metadata information by creating'records' belonging to specific 'templates'. A'record' organises the data and metadata of an archival document in a set of tables, while a 'template' represents the structure of a distinct data source, i.e. it defines the data entry tables, their columns as well as the type of each column (for denoting columns that provide vocabulary terms or entity-related information, whose values will be curated after transcription). For the case of SeaLiT, twenty templates were created, one for each different type of archival source. Table 1 provides the templates as well as an overview of the information that can be recorded in each template. Footnote 16: [https://github.com/isl/FastCat](https://github.com/isl/FastCat) The total number of records transcribed by the historians of SeaLiT is currently more than 620. Fig. 3 shows a part of a real record belonging to the template _Crew List (Ruoli di Equipaggio)17_ (there are totally 98 records belonging to this template). This template consists of six tables, enabling historians to provide/transcribe information about: i) the record itself (creation date, last modification date, transcriber); ii) the source (archive/library, location, register number, issuing authority, etc.); iii) the ship (name, type, tonnage, construction location, etc.); iv) the crew list (embarkation port and date, discharge port and date, surname, name, residence location, profession, payment information, etc.); v) the documented navigation (date, duration, first planned destination, total crew number); vi) the route (departure port and date, arrival port and date). In the record of Fig. 3, for instance, the transcriber has provided data for twenty \begin{table} \begin{tabular}{l|l} \hline **Archival source** & **Overview of recorded information** \\ \hline \hline Crew and displacement list (Roll) & Information about ships, crew members, ports. \\ \hline Crew List (Ruoli di Equipaggio) & Information about ships, voyages, crew members, ports. \\ \hline General Spanish Crew List & Information about ships, ship owners, crew members, voyages, ports. \\ \hline Sailors Register (Libro de registro de marineros) & Information about sailors (including profession and military service organisation locations) \\ \hline Register of Maritime Personnel & Information about persons (including residence location, marital status, previous profession, military service organisation locations). \\ \hline Register of Maritime Workers & Information about maritime workers, ships, captains, ports. \\ \hline Seagoing Personnel & Information about persons (including marital status, profession, end of service reasons), ships, destinations. \\ \hline Naval Ship Register List & Information about ships (including tonnage, length, construction location, registration location) and ship owners. \\ \hline List of Ships & Information about ships (including previous names, registry port and year, construction place and year, tonnage, engine characteristics, owners). \\ \hline Civil Register & Information about persons (including profession, origin location, marital status, death location and reason). 
\\ \hline Maritime Register, La Ciotat & Information about persons, embarkation and disembarkation locations, ships, captains, patrons. \\ \hline Students Register & Information about students and courses. \\ \hline Census La Ciotat & Information about occupants (including nationality, marital status, religion, profession, working organisation, household role). \\ \hline Census of the Russian Empire & Information about occupants (including marital status, estate, religion, native language, household role, occupation). \\ \hline Payroll (of Greek Ships) & Information about ships, captains, voyages, persons, employments (including wages). \\ \hline Payroll (of Russian Steam Navigation and Trading Company) & Information about ships, persons, recruitments (including salary per month). \\ \hline Employment records (Shipyards of Messageries Maritimes, La Ciotat) & Information about workers (including marital status, profession, status of service in company). \\ \hline Logbook & Information about ships, captains, ports, route movements, voyage events. \\ \hline Accounts Book & Information about ships, voyages, captains, ports, transactions. \\ \hline Notarial deeds & Information about deeds, notaries, witnesses, contracting parties, ships. \\ \hline \hline \end{tabular} \end{table} Table 1: Considered archival sources and overview of recorded information. six sailors and thirteen route ports that concern the navigation of the ship _Pallade_ (type _Brigantino_) from 11-01-1861 to 26-02-1862. Figure 3: An example of a real FastCat record belonging to the template ‘Crew List (Ruoli di Equipaggio)’. The creation and configuration of the templates in FastCat was not a 'one-shot' process. New templates were created periodically based on new archival material gathered from the historians, or existing templates were changed several times even after the creation of records (e.g. by including additional columns in a table), for incorporating new (and important) types of information provided by particular archival documents. **Curation.** The curation of the transcribed data (vocabulary terms and entity instances) is performed through a dedicated environment within FastCat, called FastCat Team. Specifically, when a historian has completed the transcription of one or more documents (records), the record(s) can be 'published', which means that all data concerning vocabulary terms and entity instances are copied to FastCat Team for enabling their curation. In the case of SeaLiT, the current number of vocabularies is fifty-two (examples include: _ship type, engine type, profession, marital status_), while the types (and current number) of entities that can be curated are _ships_ (about 2,400), _persons_ (about 99,200), _locations_ (about 9,800), _legal entities_ (about 1,100). For each term in a vocabulary, the user can provide a preferred term (in English) and a broader term, or inspect the records in which the term appears. For the curation of the entity instances, the user can correct values, select two or more instances for matching them (indicating that they represent the same real-world entity), unmatch a particular instance from a set of automatically-matched instances, or inspect the records in which the entity instance appears. In the case of locations, the user is able to add an identifier (TGN/Geonames ID), as well as coordinates or a secondary location name (e.g. a historical name). Fig.
4 shows the user interface of FastCat Team, in particular the page that allows the curation of ship instances. For more information about FastCat (and FastCat Team), the reader can refer to Fafalios _et al._ (2021b). Figure 4: Curation of ship instances in FastCat Team. **Ontology-based integration and transformation.** For data integration we created a data model compatible with CIDOC-CRM, called 'SeaLiT Ontology'18. The current version of the ontology (v1.1) contains forty-six classes and seventy-nine properties, allowing the description of information about ships, voyages, employments, payments, seafaring people, teaching courses, and other relevant activities. For creating the schema mappings and transforming the data to RDF we make use of the X3ML framework (Marketakis _et al._, 2017). In particular, one mapping definition file has been created for each template in FastCat, as well as one for each of the four categories of entities in FastCat Team and one for all the vocabularies. The derived semantic network contains more than 18.5M RDF triples and is currently exploited by the data exploration application (ResearchSpace; more below) for supporting historians in finding answers to their information needs. The full RDF datasets are publicly available19. The network contains interconnected information for thousands of sailors, ships, locations, organisations, voyages, and many other relevant activities, as well as connections with publicly available resources (Geonames, Getty TGN). Footnote 19: [https://zenodo.org/record/6460841](https://zenodo.org/record/6460841) **Semantic network exploration.** For enabling historians of SeaLiT and other interested parties to explore the integrated data and find answers to their information needs, we make use of ResearchSpace (Oldman and Tanase, 2018). ResearchSpace is a configurable, open source platform which operates over a semantic network accessible through an RDF triplestore. It offers a variety of functionalities, including a _query building_ interface that supports users in gradually building and running complex queries through a user-friendly interface. The results can then be browsed and analysed quantitatively through different visualisations, such as bar charts. The platform was configured for the case of SeaLiT data, offering three main data exploration functionalities: a) keyword search, b) semantic search (through its assistive query building interface), and c) entities browsing (per type of archival source). Fig. 5 shows a screen dump of the semantic search functionality. The user inspects the "construction location of ships that were constructed between 1830 and 1840". The user first searched for ships constructed between 1830 and 1840 (Fig.5-A), and then selected to group the retrieved ships by their construction location (Fig.5-B) and visualise the results in a bar chart (Fig.5-C). This query corresponds to a real information need as provided by the historians of SeaLiT, and the answer is shown to the user instantly (in less than one second). If the construction location is unknown for a ship, this missing information is displayed in the chart (see 'Unknown' bar, Fig.5-D). The user can also start browsing information about the retrieved ships (e.g.
inspecting the owners of a ship and then other ships owned by the same owner), visit the FastCat transcripts that provide the corresponding information (for validation, or inspection of additional contextual information), or download the results in CSV format for further (external) analysis. Figure 5: Semantic search and results visualisation in ResearchSpace. A deployment of the application is publicly accessible.20 Footnote 20: [http://rs.sealitproject.eu/](http://rs.sealitproject.eu/) ## 7 Quality Aspects and Lessons Learned We discuss data quality aspects as well as relevant lessons learned from the application of the proposed workflow model in maritime history research. ### Quality Aspects Every workflow cycle ends up with semantic data that in some cases may suffer from low-quality characteristics, making the data difficult to exploit for the needs of research. In the literature, data quality is commonly considered as "fitness for use" as well as an indicator of data usability (Pipino _et al._, 2002; Wang and Strong, 1996), and several dimensions and metrics for measuring data quality have been proposed (Pipino _et al._, 2002; Zaveri _et al._, 2016). Although studying quality factors in detail is out of the scope of this paper, below we focus on three main quality dimensions of the semantic data that can significantly affect the quantitative analysis process: completeness, consistency, conciseness. **Data completeness.** A quality dimension that can be easily assessed in the context of a schema/ontology or the particular use case scenario (Zaveri _et al._, 2016). The lack of essential information, like missing dates and locations of events, or names and professions of actors of a registry, may affect the research analysis and the evidence for making a decision about a historical subject. **Data consistency.** This dimension can be viewed from a number of perspectives (Zaveri _et al._, 2016; Hassenstein and Vanella, 2022). Our perspective comprises the _schema-based_ and the _value-based_ (or _representational_) consistency. Schema-based consistency can be evaluated against a particular schema/ontology. It prevents modeling issues, like the incompatible attribution/interlinking of the entities, and averts potential reasoning malfunction. For example, assigning 'tonnage' to a person (instead of a ship) makes no sense, and under particular reasoning premises it may produce the inaccurate inference that people were used for the transportation of goods. Value-based consistency concerns the format and the structure of comparative values (numbers, dates, measurement values) to enable comparability. For magnitudes, dimensions, quantities, time-spans, dates, places' coordinates, etc. to be effectively compared, they have to align their reference points or units of measurement. **Data conciseness.** This quality dimension comprises two perspectives: _schema-level_ conciseness and _instance-level_ conciseness (Zaveri _et al._, 2016; Mendes _et al._, 2012). Schema-level conciseness means that the data does not contain equivalent attributes with different names (responsibility of the data modeling engineer), while instance-level conciseness means that the data does not contain equivalent objects with different identifiers (highly dependent on the quality of the curation process). ### Lessons Learned Next we present issues related to data quality that we faced while implementing the workflow and which should be taken into account.
**Missing information.** Missing values are very common and important for researchers to know about because they can affect the accuracy of quantitative (statistical) analysis. This is related to the _completeness_ quality aspect described above. When a piece of information is not provided in the original source, the corresponding cell in the data entry system is left empty. The data exploration system must consider such empty values while aggregating and showing information. **Data entry errors.** Errors in the transcripts during data entry are common, such as accidentally filling the wrong column in a table, or putting the information in the wrong place due to misunderstanding. This is related to the _schema-based consistency_ quality aspect described above. Such errors are directly reflected in the data exploration interfaces and can spoil user experience. Thus, it is important to allow researchers to visit the original transcripts for validation or making corrections. Moreover, offering mechanisms in the user interface that help users avoid such errors during data entry can limit the problem. **Non-consistent comparative values.** It is very common that comparative values, such as dates, dimensions, quantities, location coordinates, are not consistent across archival sources of different types, because of different reference points or units of measurement, making their use in comparisons, filtering, etc. difficult. This is related to the _value-based consistency_ quality aspect described above. An additional (automated, semi-automated or manual) step is needed for aligning such values, however without changing the values as they appear in the original source. This can happen either during data curation or during data transformation. **Costly data curation.** Low-quality data curation can reduce user satisfaction and produce invalid analysis results. This is related to the _instance-level conciseness_ quality aspect described above. The cost of manual data curation depends on the size of the data that needs curation (number of entity instances, number of vocabulary terms). The process can be very time consuming for researchers in cases such as SeaLiT where the number of entities and vocabularies is high. Thus, there is a need for tools that automate curation as much as possible without significantly affecting quality, e.g. through semi-automatic processes, supervised algorithms, or application-specific machine learning. ## 8 Conclusion We presented a workflow model for holistic data management in archival research: from transcribing and documenting a set of archival documents, to curating the transcribed data, integrating it into a rich semantic network, and then exploring and analysing the integrated data. The merits of the approach are that it speeds up data entry, it is provenance-aware (decoupling data entry from data curation and integration), and it is interactive as well as appropriate for semantic interoperability, aiming at the production of sustainable data of high value and long-term validity. We have showcased the feasibility and effectiveness of the model in maritime history research, and we have reported empirical results from its application (about thirty users, twenty types of archival documents, more than 600 records, more than fifty vocabularies, more than 110,000 entity instances, more than 18.5 million triples of integrated information).
Issues that are worth further research include: (a) semi-automated methods to speed up data curation, (b) methods to investigate the evolution requirements of the semantic network, as proposed by Marketakis _et al._ (2021), and (c) methods and interfaces to support researchers in defining and updating the source schemas by themselves. #### Acknowledgements This work has received funding from i) the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 890861 (Project ReKnow), and ii) the European Research Council (ERC) grant agreement No 714437 (Project SeaLiT).
2305.15785
Personal History with MEF and Some Related Topics
We present our personal histories with Michael Fisher. We describe how each one of us first came to Cornell University. We also discuss our many subsequent interactions and successful collaborations with him on various physics projects.
Helen Au-Yang, Jacques H. H. Perk
2023-05-25T06:57:10Z
http://arxiv.org/abs/2305.15785v1
# Personal History with MEF and Some Related Topics ###### Abstract We present our personal histories with Michael Fisher. We describe how each one of us first came to Cornell University. We also discuss our many subsequent interactions and successful collaborations with him on various physics projects. ## 1 From Shanghai to Cornell University Born under the name Ou Yang Yee Sun in Shanghai and after having suffered through much of the great famine of 1959-1962, Helen got official permission to join her parents in Hong Kong. It was quite a shock, not knowing Cantonese and English, as she only spoke Shanghainese, Mandarin and some Russian that she had learned in high school. Fortunately, the written Chinese is universal and Helen found a poster announcing a competition for two fellowships at Chu Hai College, Kowloon. Even though Helen had not yet finished high school, she won one of them and was admitted under the new name Helen Au-Yang. After graduating in 1965 she was admitted to San Diego State University. In 1968 Helen entered graduate school at the State University of New York in Stony Brook. Here she generalized the work of McCoy and Wu[1] on the Ising model on the half plane with boundary magnetic field to the case with boundary coupling different from the coupling in the bulk. Thus she provided exact results[2] related to the series, Monte Carlo and mean field results of Binder and Hohenberg[3, 4] on 2D and 3D Ising and Heisenberg models with a boundary. Helen continued with the main topic of her PhD thesis:[5] Layered Ising models with period \(n\), deriving exact results for the specific heat[6] and for the pair correlation in a row parallel to the layering.[7] That last calculation led to a block Toeplitz determinant with a \(2\times 2\) matrix generating function \(a(\xi)\). After reviewing the then known theorems for such determinants, especially for those for which \(a(\xi)\) can be properly factorized, Helen gave \(n=2\) results for the \(T<T_{c}\) spontaneous magnetization and for the leading long-distance behavior of the pair correlation for \(T<T_{c}\) and for \(T>T_{c}\). In 1973 Helen graduated with a PhD in Theoretical Physics, with her thesis signed by Professors B. M. McCoy, C. N. Yang, T. T. Wu and others.1 She became a postdoc with Michael Fisher at Cornell University. Footnote 1: The publication of the thesis[5] and the papers[6, 7] got delayed, as her advisor Barry McCoy was involved with the massive project on Painleve III scaling functions in 2D Ising correlations.[8] ## 2 Helen at Cornell University With Helen in his group, Michael Fisher continued the work on his project "Bounded and inhomogeneous Ising models" started with Arthur Ferdinand[9, 10] many years earlier. In 1969 they had published finite-size effects on the specific heat of an \(m\times n\) Ising lattice, asymptotically for large \(n\) with \(\xi=m/n\) fixed.[10] Helen and Michael first studied the specific-heat scaling function for an infinitely long Ising strip of width \(n\) in the limit \(n\to\infty\).[11] After this, they studied the finite-size effects of regularly spaced point defects,[12, 13, 14] also incorporating some earlier calculations of Ferdinand. Michael was very happy to have Helen around, as he liked things to be done exactly, if they could be done exactly. One time a student solved a cubic equation numerically by computer. Michael told Helen,"Helen, you show him how that is done exactly!" 
Michael wanted to keep Helen for a longer time by creating a special research position for her. He asked C. N. Yang to take her back as a postdoc for a year, while he created such a new position at Cornell. At Stony Brook Helen calculated Ising multispin correlations along the diagonal[15] to check the operator reduction formulae of Kadanoff. She also calculated a new scaling form for the four-spin correlation along the diagonal.[16] Back at Cornell University, Helen got involved in a generalization of earlier work,[11] namely the study of an \(n\times\infty\) Ising strip with field \(h_{1}\) on the first infinite layer and field \(h_{n}\) on the last (\(n\)th) layer. One motivation for this work was to investigate the validity of the _ad hoc_ local free energy functional of Fisher and de Gennes.[17] In spite of serious limitations of that theory, their most striking predictions were confirmed and finite-size scaling forms were derived in the limit \(n\to\infty\).[18, 19] A more detailed summary of this research is given in section 4.3 of Fisher's "Simple Ising Models Still Thrive!" lecture.[20] In the same period, much numerical work was done to study inhomogeneous differential approximants for power series of one or more variables.[21] Nowadays, with Maple and Mathematica available, this is a lot easier, but then it was not. At that time using Fortran, one had to prepare a stack of punch cards for each example with up to 80 characters per card and hand that in for processing overnight, carefully keeping the stacks of cards in the right order. This work was very labor intensive, as not only existing high- and low-temperature series coefficients and activity (high-field) expansions of various well-known spin models were examined, but also many test examples derived from mathematical functions with a given singularity and various background noise contributions. One case was studied in detail: Universality tests at Heisenberg bicritical points [22]. The experimentally determined amplitude ratio \(Q_{\rm fit}\approx 1.6\pm 0.35\) found in MnF\({}_{2}\) was much smaller than theory seemed to predict, \(Q_{\rm th}\approx 2.39\). Noise contributions in the theory could bring that value about 10% down, but that is not enough. Fisher et al. then note [22] that shifts in the critical lines compatible with experimental uncertainties could raise \(Q_{\rm fit}\) enough to bring theory and experiment in agreement. In other words, \(Q_{\rm fit}\) should have been reported with significantly larger error bars. During a visit back to Stony Brook, Helen calculated the scaling form for the four-spin correlation function of two parallel nonaligned pairs in the planar Ising model [23]. Some time in 1980, Helen's oldest sister and her family were released from labor camp in Western China to settle in Hong Kong with the rest of the family there. Helen then abruptly quit her position at Cornell in order to help tutor her two nieces into the Hong Kong school system, using the earlier experience that she had had moving from Shanghai to Hong Kong in 1962. In 1982 Helen wanted to return to the USA, and Barry McCoy and Joel Lebowitz organized a special invitation to the December 1982 Rutgers Stat. Mech. Meeting. ## 3 Jacques' visits to Cornell and Rutgers In 1978 I (Jacques) received the Royal Dutch Shell Oil Travel Prize awarded at Leiden University for my thesis research. This award allowed me to visit research groups at several universities and research institutes in the USA.
Thus I came to also visit Professors Fisher and Widom at Cornell University, giving a seminar on (in)stability of critical behavior [24, 25], as that seemed more suitable than my thesis work on time-dependent correlations in XY chains. Students and postdocs had beforehand told me that Fisher had "destroyed" the speakers in the seminar the previous several weeks, but that did not happen this time. I had seen Fisher in action at earlier conferences and I was prepared. I got a few questions about how things were proved that I could answer. Noteworthy was the party at Fisher's home. It started with a box of Chinese metal wire puzzles put in front of me. As I had about the same collection, I knew these well. I quickly took a few apart and put them together again, upon which Fisher removed the box, causing a sigh of relief among the others. Fisher then got his Spanish guitar out. He was quite accomplished and entertaining. This and the special treats Mrs. Fisher had prepared were the high points of the party. At the December 1979 Stat. Mech. meeting at Rutgers, I asked Leo Kadanoff if he knew that there are continuously varying critical exponents if one varies the coupling constants within one line in the bulk of the square-lattice Ising model [26, 27, 28]. With Michael Fisher and others standing by, Kadanoff said that that is impossible, upon which I said to expect a preprint soon. Fisher and Kadanoff followed that up with some further remarks on the phenomenon [29, 30]. At this point, we note that, like many others, both of us were very surprised that Fisher and Kadanoff did not share the 1982 Nobel Prize in Physics with Kenneth Wilson. ## 4 Helen and Jacques married It all started with Barry McCoy asking Jacques to pick up Helen, who had visited Michael Fisher at Cornell, at the Greyhound bus terminal in Manhattan on New Year's eve of 1982 and to bring her to the McCoy home. Jacques and Helen did not know that Barry and Martha McCoy conspired and had a detailed plan designed to get both of us married. All the scheming worked and six days later we were engaged to be married, with the wedding taking place on January 22, beginning a very happy marriage lasting more than 40 years so far. Soon after Helen asked, "Jacques, have you ever done anything with your quadratic difference equations for 2D Ising correlations?" This led to the first of many joint papers following, namely a letter with fairly complete results to calculate critical Ising pair correlations [31] and even a result for the monomer-monomer correlation in the square dimer problem. Many years later, students in the statistical mechanics class of a colleague told us that this letter was cited in Pathria's textbook [32]. Having seen several papers on chiral clock models written at Cornell University during her time there, Helen was led to extend the search for integrable Potts models to chiral ones, discovering the genus 10 curve condition for solving the star-triangle equations for the 3-state non-selfdual chiral Potts model [33]. The full parametrization of the \(N\)-state case was found during a visit to Canberra [34] and the history of these discoveries is described in a Topical Review celebrating Baxter's 75th birthday and implicitly Helen's 70th [35]. ## 5 Later collaborations with Michael Fisher Michael Fisher had been very apprehensive about our marriage at the beginning, but he noted that Helen was very happy, seeing her at several of Lebowitz's Stat. Mech. meetings at Rutgers. 
Then, at the end of 2008, Fisher called me (Jacques) to say that he wanted a calculation done and that Helen was the only one he trusted to do it correctly. I told him that he was free to ask Helen and let her decide what to do. Fisher then emailed a letter in his typical style, which is reproduced in the appendix. Fisher wanted some exact calculations done on the specific heat to qualitatively understand some features seen in the experiments of Gasparini's group in Buffalo.[36, 37] There was some delay, but in the end two papers were published on 2D Ising models with alternating layers with weaker and stronger interactions,[38, 39] the second with the details left out in the first one. Fisher was very happy that I helped with some of the LaTeX coding and cleaning up the figures. Later, Helen thought of a better way to compare with an array of 3D cubes of Helium covered with a 2D film. Correspondingly she studied 2D Ising layers connected by layers of 1D Ising strings, with the strings 1, 2, or 3 units apart. First she did a 60-page calculation of the free energy in her usual fine print handwriting using Hamm's method,[40] followed by a 20-page calculation, reproducing the results using another method.[13] When she showed me the results, we could guess the general answer for Ising strings \(N\) units apart and next prove it.[41, 42] We also obtained exact results for the spontaneous magnetization.[41, 43] Finally, Fisher contacted me, as he had to do something about the 3D Ising deceptions of Zhang Zhidong. He asked me if I had any follow-up on my original comment. I sent him what I had, including my last comment.[44] Fisher also got very upset that his friend of long ago, Norman March, was fooled by the deceptions.

Figure 1: Helen and Michael at December 2013 Stat. Mech. Meeting at Rutgers University.

Together we wrote another comment,[45] in which we also introduced the statistical mechanics community to the new bootstrap results for 3D Ising critical exponents. Fisher was particularly happy when I pointed out that the result for the correction-to-scaling exponent \(\Delta=\omega\nu=0.5231(12)\) is very close to the \(\frac{1}{2}\) proposed by Andrea Liu and him.[46]

In conclusion, it has been a great pleasure to work with Michael Fisher. In some sense Michael was the conscience of the statistical mechanics community, keeping people honest with probing questions during and after their talks and often making good suggestions. He will be dearly missed.
2308.13308
Anisotropic Conformal Dark Gravity on the Lorentz Tangent Bundle Spacetime
In this work we investigate the anisotropic conformal structure of the gravitational field incorporating dark gravity in a generalized Lagrange geometric framework on the Lorentz tangent bundle and we present two applications; the anisotropic conformal Minkowski spacetime and the anisotropic conformal FLRW cosmology. In the first application, the conformal factor induces an anisotropic conformal de-Sitter-like space with extra curvature which causes extra gravity and allows for Sasaki-type Finsler-like structures which could potentially describe certain gravitational phenomena in a more extended form. The cosmological properties of the model are also studied using a FLRW metric structure for the underlying base manifold in the second application, where we derive generalized Friedmann-like equations for the horizontal subspace of the Lorentz tangent bundle spacetime that reduce under certain conditions to those given by A. Triantafyllopoulos and P. C. Stavrinos (2018) [Class. Quantum Grav. 35 085011] as well as those of general relativity.
Christos Savvopoulos, Panayiotis Stavrinos
2023-08-25T11:15:01Z
http://arxiv.org/abs/2308.13308v1
# Anisotropic Conformal Dark Gravity on the Lorentz Tangent Bundle Spacetime ###### Abstract In this work we investigate the anisotropic conformal structure of the gravitational field incorporating dark gravity in a generalized Lagrange geometric framework on the Lorentz tangent bundle and we present two applications; the anisotropic conformal Minkowski spacetime and the anisotropic conformal FLRW cosmology. In the first application, the conformal factor induces an anisotropic conformal de-Sitter-like space with extra curvature which causes extra gravity and allows for Sasaki-type Finsler-like structures which could potentially describe certain gravitational phenomena in a more extended form. The cosmological properties of the model are also studied using a FLRW metric structure for the underlying base manifold in the second application, where we derive generalized Friedmann-like equations for the horizontal subspace of the Lorentz tangent bundle spacetime that reduce under certain conditions to those given by A. Triantafyllopoulos and P. C. Stavrinos (2018) [Class. Quantum Grav. 35 085011] as well as those of general relativity. _Keywords_ dark matter, dark energy, anisotropic gravitational field, conformal gravity, tangent bundle, Finsler-like, generalized Friedman equations, modified gravity ## 1 Introduction Over the last decades the topic of dark matter and dark energy stands at the forefront of scientific research in the field of gravity and cosmology [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. The significant interest in this topic stems from observational data that attribute the vast majority of the mass in the observable universe to sources other than ordinary luminous matter, what researchers called dark matter [19]. Examples of phenomena that would suggest a modified theory of gravity that would account for the discrepancies in the classical theory of gravitation due to the presence of dark matter and dark energy arise from the study of gravitational lensing, cosmic microwave background radiation (CMB) or the rotational curves of spiral galaxies [20, 21]. The study of such phenomena suggests that dark matter would contribute significantly in the evolution and acceleration of the universe which would mean that the study of dark matter is essential for cosmology. In particular, dark matter could possibly be considered as the main reason for galaxy structure formations and dark energy as the drive for the measured cosmic acceleration [22, 23, 24]. This would suggest the need for a modified theory of gravity that would incorporate such gravitational effects and potentially describe the aforementioned phenomena, since extra dark gravity influences all scales of matter. Particularly, the \(\Lambda\)-CDM model is especially efficient in agreeing with observational data [25, 26, 27]. However, it has been argued [28] that this model is lacking in sufficient mathematical and theoretical background. This would therefore indicate that there exists a need to obtain an improved mathematical structure consistent with such description of the universe with dark gravity. There is some evidence that a conformal theory of gravity can dynamically accommodate for this "extra" gravity by introducing additional degrees of freedom to the existing underlying metric structures [29, 30, 31]. In addition, a conformally invariant theory could possibly be linked to a bounce evolution of the universe [32, 33]. 
Furthermore, the purely gravitational dark matter may be produced mainly by the gravitational particle creation process [34], which is thought to normally convert anisotropy energy into radiation energy [35]. It is also worth noting that a conformal framework of gravity seems to be particularly compatible with observational data of galactic rotational velocities and halos among others [36, 37, 38, 39]. One geometrical frame for the anisotropic conformal modification of gravity arises from the extension of the underlying geometry of a manifold \((M,g(x))\); i.e. generalized Lagrange metric structures on the tangent bundle [40, 41, 42, 43, 44, 45, 46, 47]. In this framework the gravitational field is extended in a higher dimensional space with greater volume. A Sasaki-type Finsler-like structure of this kind not only furnishes the geometric frame with extra degrees of freedom, but also endows the structure of the spacetime with local anisotropy and extra dimensions, which could be associated with dark gravitational effects [48], while simultaneously preserving the light cone [49, 50]. These extra degrees of freedom are introduced in 8-dimensions and are linked to the notion of direction-dependent anisotropy caused by velocity or momentum coordinates [40]. This dependence of the physical quantities on the observer 4-velocity provides a natural geometric extension of the Riemannian frame on the tangent bundle, which could be reproduced from the generalized frame by eliminating this direction dependence. Moreover, such a Sasaki-type consideration could potentially be related to a generalized anisotropic conformal de-Sitter Minkowski spacetime structure. We can notice that a Friedmann spacetime is isotropic conformal to a Minkowski flat spacetime. Analogously, it could be interesting to study an anisotropic conformal Minkowski as well as FLRW spacetime using the aforementioned geometry. Finally, due to the strong association of Finsler and Finsler-like geometries with the effective geometry within anisotropic media [40, 51, 52], forming a natural gravitational analogy [40, 53], it could be argued that they could play an important role in quantum gravity considerations [54, 55, 56, 57]. This work is organized as follows: in Section 2, we present the generalized Lagrange Sasaki-type geometric structure of the tangent bundle giving the relations for the metric, the connection, the curvature tensor field as well as the field equations, among others. In Section 3, we give the geodesic equations for this model. In Section 4 we study the case of the anisotropic conformal Minkowski spacetime and derive a couple of special types of conformal factors. Further in section 5 we investigate the anisotropic conformal FLRW-cosmology using the geometric frameworks developed in this work. Finally, in section 6, we summarize our results of and in Appendix A we present some further geometric results. ## 2 Metric Structure In this section, we shall introduce some basic notions from the geometry of generalized Finsler-like metric structures. Let \(M\) be a differentiable manifold of dimension \(\dim(M)=n\) and \(TM\) be its tangent bundle. Let the manifold \(M\) be endowed with a (pseudo-)Riemannian metric \(\gamma(x)\). 
Then it is well known [58, 59] that its tangent bundle can be endowed with a Riemann-Sasaki metric structure as follows: \[dl^{2}=\gamma_{\mu\nu}(x)dx^{\mu}\otimes dx^{\nu}+\gamma_{ab}(x)\delta y^{a} \otimes\delta y^{b} \tag{1}\] where \[\delta y^{a}=dy^{a}+N^{a}_{\ \mu}(x,y)dx^{\mu} \tag{2}\] with \(\mu,\nu,\cdots=0,1,\cdots,n-1\) and \(a,b,\cdots=0,\cdots,n-1\). The components of \(N^{a}_{\ \mu}(x,y)\), which is known as the non-linear connection, is produced by the Whitney sum of the horizontal and vertical subspaces of the tangent bundle [60, 61]. It is then well-established that this metric structure for the tangent bundle can be further generalized to include Finsler, Lagrange and generalized Lagrange metrics \(g(x,y)\), collectively known as Finsler-like structures. In this case we have: \[d\tau^{2}=g_{\mu\nu}(x,y)dx^{\mu}\otimes dx^{\nu}+g_{ab}(x,y)\delta y^{a} \otimes\delta y^{b} \tag{3}\] Let us now consider a non-reducible generalised Lagrange tangent bundle space \(TM\), \[GL^{(2n)}=\big{(}g_{\mu\nu}(x,y),g_{ab}(x,y)\big{)} \tag{4}\] with metric \(\mathcal{G}\) such that \[ds^{2}=e^{f(x,y)}dl^{2}=\sigma(x,y)dl^{2} \tag{5}\] where \(f\), \(\sigma:TM\rightarrow\mathbb{R}\) are functions which are at least \(C^{2}\) known as the (anisotropic) conformal factors. For convenience we shall be using both of these equivalent definitions for the conformal factor throughout this study. Physically, the conformal factor is introduced to incorporate the dark gravitational effect into the geometric framework of the gravitational field. The variable \(y\) in particular, is the internal variable that introduces direction dependence and hence local anisotropy. If the conformal factor does not depend on \(y\), this is interpreted as isotropic dark gravity, and if \(f=0\) then we get a spacetime without dark gravity. This metric space is said to be anisotropic conformal [50, 62] to the Riemann-Sasaki metric space defined by the Riemannian metric \(\gamma(x)\). In terms of the bundle components, the metric tensor can be equivalently written as, \[\mathcal{G}_{MN}(x,y)=\{g_{\mu\nu}(x,y),g_{ab}(x,y)\} \tag{6}\] where \(M,N,\cdots=0,1,\cdots,2n-1\) \[g_{\mu\nu}(x,y)=e^{f(x,y)}\gamma_{\mu\nu}(x) \tag{7}\] \[g_{ab}(x,y)=s^{\mu}_{\;\;a}\delta^{\nu}_{\;\;b}g_{\mu\nu}(x,y) \tag{8}\] and \(\gamma_{\mu\nu}(x)\) is a Riemannian metric that has been extended in the vertical subspace as \(\gamma^{\prime}_{\;\;ab}=s^{\mu}_{\;\;a}\delta^{\nu}_{\;\;b}\gamma^{\prime}_{ \mu\nu}\). The adapted and dual bases of TTM are given by \[X_{M}=\{\delta_{\mu},\bar{\partial}_{a}\}\,\ X^{M}=\{dx^{\mu},\,\delta y^{a}\} \tag{9}\] respectively, where \[\delta_{\mu}=\partial_{\mu}-N^{a}_{\mu}(x,y)\bar{\partial}_{a}\,\ \partial_{\mu}:=\frac{ \partial}{\partial x^{\mu}}\ \,\ \ \bar{\partial}_{a}:=\frac{\partial}{\partial y^{a}} \tag{10}\] The connection is then given by the following: \[D_{\delta_{\nu}}\delta_{\mu}=L^{\lambda}_{\;\;\mu\nu}\delta_{ \lambda}\ \,\ \ D_{\delta_{\nu}}\bar{\partial}_{a}=\tilde{L}^{c}_{\;\;a\nu}\bar{ \partial}_{c}\] \[D_{\partial_{\mu}}\delta_{\mu}=\tilde{C}^{\lambda}_{\;\;\mu\nu} \delta_{\lambda}\ \,\ \ D_{\partial_{\mu}}\bar{\partial}_{a}=C^{c}_{\;\;ab}\bar{ \partial}_{c} \tag{11}\] Hence, the coefficients of the d-connection are \[\boldsymbol{\Gamma}^{L}_{\;\;MN}=\{L^{\lambda}_{\;\;\mu\nu},\tilde{L}^{c}_{\; \;a\nu},\tilde{C}^{\lambda}_{\;\;\mu\nu},C^{c}_{\;\;ab}\} \tag{12}\] The d-connection preserves the horizontal and vertical components of a vector under parallel translation [60]. 
Throughout this study we shall assume a metrical d-connection [60, 63, 61, 64]. From this assumption we get the subsequent relations for the d-connection coefficients: \[L^{\lambda}_{\;\;\mu\nu}=\gamma^{\lambda}_{\;\;\mu\nu}+\frac{ \delta^{\lambda}_{\;\;\nu}\delta_{\mu}\sigma+\delta^{\lambda}_{\;\;\mu}\delta _{\nu}\sigma-\gamma_{\mu\nu}\gamma^{\lambda\rho}\delta_{\rho}\sigma}{2\sigma} \tag{13}\] \[\tilde{L}^{a}_{\;\;b\mu}=\frac{\bar{\partial}_{b}N^{a}_{\;\;\mu}+ \gamma^{ac}\partial_{\mu}\gamma_{bc}+\delta^{a}_{\;\;b}\frac{\delta_{\mu} \sigma}{\sigma}-\gamma_{bd}\gamma^{ac}\bar{\partial}_{c}N^{d}_{\;\;\mu}}{2}\] (14) \[C^{c}_{\;\;ab}=\frac{1}{2\sigma}\left(\delta^{c}_{\;\;b}\bar{ \partial}_{a}\sigma+\delta^{c}_{\;\;a}\bar{\partial}_{b}\sigma-\gamma_{ab} \gamma^{cd}\bar{\partial}_{d}\sigma\right)\] (15) \[\tilde{C}^{\lambda}_{\;\;\mu c}=\frac{1}{2}\delta^{\lambda}_{\;\; \mu}\bar{\partial}_{c}\left(\ln\sigma\right) \tag{16}\] where \(\gamma^{\lambda}_{\;\;\mu\nu}\) are the Christoffel symbols of the Riemannian metric \(\gamma\). Using the d-connection coefficients we previously found in relations (13-16), we can now calculate the curvature of this space. In particular, let \(\mathcal{R}\) be the curvature tensor field of the d-connection \(D\), then the non-zero components of \(\mathcal{R}\) are given by the following relations: \[\mathcal{R}^{\;\;\mu}_{\;\;\nu\;\;\rho\sigma}=\mathcal{R}^{\;\; \mu}_{\;\;\nu\;\;\rho\sigma}\,\ \ \mathcal{R}^{\;\;a}_{\;\;b\;\;\kappa\lambda}=\mathcal{R}^{\;\;a}_{\;\;b\;\;\kappa\lambda} \tag{17}\] \[\mathcal{R}^{\;\;\mu}_{\;\;\nu\;\;\rho\sigma}=P^{\;\;\mu}_{\;\;\; \nu\;\;\rho\sigma}\,\ \ \mathcal{R}^{\;\;a}_{\;\;b\;\;\kappa d}=P^{\;\;a}_{\;\;b\;\;\kappa d}\] (18) \[\mathcal{R}^{\;\;\mu}_{\;\;\nu\;\;ab}=S^{\;\;\mu}_{\;\;\nu\;\;ab},\ \ \mathcal{R}^{\;\;a}_{\;\;cd}=S^{\;\;a}_{\;\;cd} \tag{19}\] where the d-tensor fields are given by: \[R_{v}^{\ \mu}{}_{\rho\sigma} =\delta_{\sigma}L^{\mu}{}_{\nu\rho}-\delta_{\rho}L^{\mu}{}_{\nu \sigma}+L^{\kappa}{}_{\nu\rho}L^{\mu}{}_{\kappa\sigma}-L^{\kappa}{}_{\nu\sigma}L ^{\mu}{}_{\kappa\rho}+\tilde{C}^{\mu}{}_{\nu\kappa}R^{c}{}_{\rho\sigma} \tag{20}\] \[R_{b}^{\ a}{}_{\rho\sigma} =\delta_{\sigma}\tilde{L}^{a}{}_{b\rho}-\delta_{\rho}\tilde{L}^{a}{ }_{b\sigma}+\tilde{L}^{c}{}_{b\rho}\tilde{L}^{a}{}_{c\sigma}-\tilde{L}^{c}{}_{ b\sigma}\tilde{L}^{a}{}_{c\rho}+C^{a}{}_{b\kappa}R^{c}{}_{\rho\sigma}\] (21) \[P_{v}^{\ \mu}{}_{\rho d} =\tilde{\partial}_{d}L^{\mu}{}_{\nu\rho}-\tilde{C}^{\mu}{}_{val \rho}+\tilde{C}^{\mu}{}_{\nu b}P^{b}_{\ \rho d}\] (22) \[P_{b}^{\ a}{}_{\rho d} =\tilde{\partial}_{d}\tilde{L}^{a}{}_{b\rho}-C^{a}{}_{bd|\rho}+C ^{a}{}_{b\kappa}P^{c}_{\ \rho d}\] (23) \[S_{v}^{\ \mu}{}_{ab} =\tilde{\partial}_{b}\tilde{C}^{\mu}{}_{va}-\tilde{\partial}_{a} \tilde{C}^{\mu}{}_{vb}+\tilde{C}^{\lambda}{}_{va}\tilde{C}^{\mu}{}_{\lambda b} -\tilde{C}^{\lambda}{}_{v\rho}\tilde{C}^{\mu}{}_{\lambda a}\] (24) \[S_{b}^{\ a}{}_{cd} =\tilde{\partial}_{d}C^{a}{}_{bc}-\tilde{\partial}_{c}C^{a}{}_{ bd}+C^{a}{}_{b\kappa}C^{a}{}_{cd}-C^{a}{}_{bd}C^{a}{}_{c\epsilon} \tag{25}\] where \(R^{c}{}_{\rho\sigma}=\delta_{\sigma}N^{c}_{\ \rho}-\delta_{\rho}N^{c}_{\ \sigma}= \delta_{[a}N^{c}_{\ \rho]}\). No is said to be integrable if and only if \(R^{c}{}_{\rho\sigma}=0\)[60, 63]. Let the non-linear connection be of Cartan-type, i.e. \(N^{a}{}_{\kappa}=\gamma^{a}_{\ \ b\kappa}y^{b}\). Then it is clear that in general \(N\) is not integrable. 
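As a small symbolic illustration of the coefficients (13)-(16), the sketch below evaluates the horizontal coefficients \(L^{\lambda}_{\ \mu\nu}\) of Eq. (13) for a flat base metric, a vanishing non-linear connection and a simple direction-dependent conformal factor; the dimension, the metric, the choice \(N=0\) and the particular \(\sigma\) are all illustrative assumptions, not data from the text.

```python
import sympy as sp

n = 2                                              # toy dimension, for readability only
x = sp.symbols('x0 x1')
y = sp.symbols('y0 y1')
gamma = sp.diag(1, -1)                             # flat base metric (assumption)
gamma_inv = gamma.inv()
N = sp.zeros(n, n)                                 # vanishing non-linear connection (assumption)
sigma = sp.exp(sp.Rational(1, 10) * x[1] * y[0])   # hypothetical conformal factor sigma(x, y)

def delta(expr, mu):
    # delta_mu = partial_mu - N^a_mu * bar-partial_a, Eq. (10)
    return sp.diff(expr, x[mu]) - sum(N[a, mu] * sp.diff(expr, y[a]) for a in range(n))

def L(lam, mu, nu):
    # Eq. (13) for a constant gamma, whose Christoffel symbols vanish
    corr = (sp.KroneckerDelta(lam, nu) * delta(sigma, mu)
            + sp.KroneckerDelta(lam, mu) * delta(sigma, nu)
            - gamma[mu, nu] * sum(gamma_inv[lam, r] * delta(sigma, r) for r in range(n)))
    return sp.simplify(corr / (2 * sigma))

print(L(0, 0, 1))   # the familiar conformal correction; for this sigma it evaluates to y0/20
```

The same pattern, with the Christoffel symbols of \(\gamma\) added back and a non-zero \(N\) inside \(\delta_{\mu}\), reproduces the full expression (13).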
For the anisotropic conformal metric (5) we have the following curvature tensor field components1: Footnote 1: The study of the \(P-\)curvature lies outside the scope of the present study and shall be henceforth omitted. \[R_{v}^{\ \mu}{}_{\rho\sigma} =K_{v}^{\ \mu}{}_{\rho\sigma}+\frac{1}{2}\mathcal{L}_{v}^{\ \mu}{}_{\rho\sigma}+\frac{1}{4}M_{v}^{\ \mu}{}_{\rho\sigma} \tag{26}\] \[R_{b}^{\ a}{}_{\rho\sigma} =\frac{1}{2}\tilde{L}^{a}_{b\ \rho\sigma}+\frac{1}{4}\tilde{M}^{a}_{b\ \rho\sigma}\] (27) \[S_{v}^{\ \mu}{}_{ab} =0\] (28) \[S_{b}^{\ a}{}_{cd} =\frac{1}{2}\big{(}\delta^{a}{}_{[d}\tilde{\partial}_{d]}\tilde{ \partial}_{b}f+\gamma_{[a]}\gamma^{a\epsilon}\tilde{\partial}_{c]}\tilde{ \partial}_{\epsilon}f\big{)}+\frac{1}{4}\big{(}\delta^{a}{}_{[d}\tilde{ \partial}_{b}f\tilde{\partial}_{c]}f+\delta^{a}{}_{[c}\gamma_{d]b}\gamma^{c \ell}\tilde{\partial}_{\epsilon}f\big{)}\gamma^{a\epsilon}\tilde{\partial}_ {\epsilon}f\big{)} \tag{29}\] where \[K_{v}^{\ \mu}{}_{\rho\sigma} =\partial_{[a}\gamma^{\mu}{}_{\rho]v}+\gamma^{\kappa}{}_{[\rho} \gamma^{\mu}{}_{\sigma]v} \tag{30}\] \[\mathcal{L}_{v}^{\ \mu}{}_{\rho\sigma} =\delta^{\mu}{}_{[b}\delta_{\sigma]}\delta_{v}f+\delta^{\mu}{}_{v \sigma}\delta_{[a}\delta_{\rho]}f+\partial_{[a}\gamma_{\sigma]v}\gamma^{\mu \lambda}\delta_{\lambda}f+\gamma_{v[a}\partial_{\rho]}\gamma^{\mu\lambda} \delta_{\lambda}f+\gamma_{v[a}\gamma^{\mu\lambda}\delta_{\rho]}\delta_{\lambda}f\] \[+\gamma^{\kappa}{}_{[\rho}\delta^{\mu}{}_{\sigma]\delta}\delta_{v }f+\gamma^{\kappa}{}_{v[\sigma}\gamma^{\rho}{}_{\rho]v}\gamma^{\mu\lambda} \delta_{\lambda}f+\gamma^{\mu}{}_{\kappa[\rho}\gamma_{\sigma]v}\gamma^{\nu \lambda\lambda}\delta_{\lambda}f+\delta^{\mu}{}_{v\sigma}\delta_{[a}N^{c}_{ \ \rho]}\tilde{\partial}_{c}f\] (31) \[M_{v}^{\ \mu}{}_{\rho\sigma} =\delta^{\mu}{}_{[\sigma}\delta_{\rho]f}\delta_{v}f+\gamma_{[a} \gamma^{\mu\lambda}\delta_{\rho]f}\delta_{\lambda}f+\delta^{\mu}{}_{[\rho} \gamma_{\sigma]v}\gamma^{\nu\lambda}\delta_{\lambda}g_{\lambda}f\] (32) \[\tilde{\mathcal{L}}^{a}_{b\ \rho\sigma} =\delta_{[a}\tilde{\partial}_{b}N^{a}_{\rho]}+\partial_{[\gamma^{a} \sigma}\delta_{\rho]f}\gamma_{bc}+\delta^{a}{}_{b\delta}[\delta_{[a}\delta_{ \rho]f}+\gamma^{a\epsilon}\partial_{[\rho}\gamma_{bd}\tilde{\partial}_{c}N^{a }_{\ \sigma]}+\gamma_{bd}\partial_{[\rho}\gamma^{ac}\tilde{\partial}_{c}N^{d}_{\ \sigma]}\] \[+\gamma_{bd}\gamma^{a\epsilon}\delta_{[\rho}\tilde{\partial}_{c}N^{d }_{\sigma]}+\delta_{[\sigma}N^{a}_{\ \rho]}\tilde{\partial}_{b}f+\delta_{[\alpha}N^{c}_{\ \rho]}\delta^{a}_{\ \rho}\tilde{ \partial}_{c}f-\delta_{[\alpha}N^{c}_{\ \rho]}\gamma_{\gamma c}\gamma^{ad}\tilde{\partial}_{d}f\] (33) \[\tilde{M}^{a}_{\ \rho\sigma} =\tilde{\partial}_{b}N^{c}_{[\ \rho}\tilde{\partial}_{c}N^{a}_{\ \sigma]}+\gamma^{c\bar{\partial}}\tilde{\partial}_{c}N^{a}_{\ \rho]}\gamma_{bd}+\gamma^{ad}\tilde{\partial}_{b}N^{c}_{[\ \ \rho}\partial_{c}\gamma_{cd}+\gamma_{bd}\gamma^{c\epsilon}\tilde{\partial}_{e}N^{d} _{[\ \sigma}\tilde{\partial}_{c}N^{d}_{\ \rho]}\] \[+\gamma^{a\epsilon}\gamma^{cd}\partial_{[\sigma}\gamma_{c}\sigma_{ \rho]}\gamma_{bd}+\gamma_{bd}\gamma^{a\epsilon}\gamma^{c\epsilon}\tilde{\partial}_ {c}N^{d}_{[\ \sigma}\partial_{\rho]}\gamma_{cf}+\gamma_{bd}\gamma^{a\epsilon}\tilde{\partial}_ {c}N^{d}_{[\ \ \sigma}\partial_{\epsilon}N^{d}_{\ \rho]}\] \[+\gamma^{a\epsilon}\tilde{\partial}_{c}N^{d}_{\ [\ \rho}\partial_{c}\gamma_{bd}+\gamma_{bc} \gamma^{ad}\tilde{\partial}_{d}N^{c}_{[\ \ \rho}\delta_{\rho]f}+\gamma_{bd}\gamma^{a\epsilon}\tilde{\partial}_{c}N^{d}_{ \ [\ 
\rho}\tilde{\partial}_{c}N^{c}_{\ \sigma]} \tag{34}\] \(K\), in particular, is the Riemann curvature tensor corresponding to the (pseudo-)Riemannian metric \(\gamma\) of the underlying manifold structure. The horizontal \(R\)-curvature contains extra terms, in addition to the underlying Riemannian \(K\)-curvature, which allow for any discrepancies to the Riemannian \(K\)-curvature that result from the effect of the "extra" dark gravity and could otherwise be interepreted as perturbations to the Riemannian framework to be incorporated in the geometry of the spacetime in the tangent bundle. A physical interpretation of the vertical \(S\)-curvature (29), on the other hand, could be tied to an anisotropic behavior of dark gravity since the \(S\)-curvature indicates an anisotropically curved spacetime. This is made evident by the above-mentioned form of the vertical \(S\)-curvature which depends on the existence of a direction-dependent conformal factor, which in turn presupposes an anisotropic dark matter as mentioned in the beginning. A non-trivial \(S\)-curvature is absent from a Riemannian framework and would thus introduce extra degrees of freedom not present in a Riemannian theory of gravity. Consequently, this geometric structure could allow for a broader study of gravitational phenomena linked with dark gravity and anisotropy (e.g. the evolution of universe). We shall now find the Ricci tensors as follows: \[R_{\nu\rho}=R_{\nu~{}\rho\mu}^{~{}~{}\mu}=K_{\nu\rho}+\frac{1}{2}\mathcal{L}_{ \nu\rho}+\frac{1}{4}M_{\nu\rho} \tag{35}\] where \[\mathcal{L}_{\nu\rho}=\delta_{[\nu}\delta_{\rho]}f+\gamma_{\nu[\mu}\gamma^{\mu \lambda}\delta_{\rho]}\delta_{\lambda}f+\partial_{[\rho}\gamma_{\mu]\nu}\gamma^ {\mu\lambda}\delta_{\lambda}f+\gamma^{\kappa}_{~{}\nu[\mu}\gamma_{\rho]\kappa} \gamma^{\mu\lambda}\delta_{\lambda}f+\gamma^{\mu}_{~{}\kappa[\rho}\gamma_{\mu] \nu}\gamma^{\kappa\lambda}\delta_{\lambda}f+\delta_{[\nu}N^{c}_{~{}\rho]} \tilde{\partial}_{c}f \tag{36}\] and \[M_{\nu\rho}=\gamma_{\nu[\rho}\gamma^{\mu\lambda}\delta_{\mu]}f\delta_{\lambda}f \tag{37}\] and the Ricci tensor corresponding to the Riemannian metric \(\gamma\) is \[K_{\nu\rho}=K_{\nu~{}\rho\mu}^{~{}~{}\mu}=\partial_{[\mu}\gamma^{\mu}_{~{}~{} \rho]\nu}+\gamma^{\kappa}_{~{}\nu[\rho}\gamma^{\mu}_{~{}~{}\mu]\kappa} \tag{38}\] From the \(S\)-curvature, we get: \[S_{bc}=S_{b~{}ca}^{~{}a}=\frac{1}{4}\left(2\gamma_{b[a}\gamma^{\mu d}\tilde{ \partial}_{c]}\tilde{\partial}_{d}f+\gamma_{b[c}\gamma^{\mu d}\tilde{\partial} _{d]}f\tilde{\partial}_{d}f\right) \tag{39}\] We therefore have the following scalar curvature: \[\mathcal{R}=R+S=e^{-f}\left(K+\frac{1}{2}\mathcal{L}+\frac{1}{4}M+\frac{3}{4} \tilde{S}\right) \tag{40}\] where \[R=g^{\nu\rho}R_{\nu\rho}=e^{-f}\left(K+\frac{1}{2}\mathcal{L}+\frac{1}{4}M\right) \tag{41}\] with \[\mathcal{L} = -(n-1)\gamma^{\mu\lambda}\delta_{\mu}\delta_{\lambda}f+\gamma^{ \nu\rho}\partial_{[\rho}\gamma_{\mu]\nu}\gamma^{\mu\lambda}\delta_{\lambda}f-( n-1)\partial_{\mu}\gamma^{\mu\lambda}\delta_{\lambda}f+\gamma^{\nu\rho} \gamma^{\kappa}_{~{}\nu[\mu}\gamma_{\rho]\kappa}\gamma^{\mu\lambda}\delta_{ \lambda}f \tag{42}\] \[-(n-1)\gamma^{\mu}_{~{}~{}\kappa\mu}\gamma^{\kappa\lambda}\delta_ {\lambda}f+\gamma^{\nu\rho}\delta_{[\nu}N^{c}_{~{}\rho]}\tilde{\partial}_{c}f\] \[M = \frac{e^{-f}}{4}(n-1)\gamma^{\mu\lambda}\delta_{\mu}f\delta_{ \lambda}f \tag{43}\] and the Ricci scalar corresponding to the Riemannian metric \(\gamma\) is \[K=\gamma^{\nu\rho}K_{\nu\rho} \tag{44}\] For the scalar \(S\)-curvature we have 
\[S=g^{bc}S_{bc}=\frac{3}{4}e^{-f}\tilde{S} \tag{45}\] with \[\tilde{S}=\frac{1}{3}\left(-2(n-1)\gamma^{\mu b}\tilde{\partial}_{a}\tilde{ \partial}_{b}f+(n-1)\gamma^{\mu b}\tilde{\partial}_{a}f\tilde{\partial}_{b}f\right) \tag{46}\] The scalar \(S\)-curvature could be interpreted as the degree of anisotropy of a conformal anisotropically curved spacetime which includes anisotropic gravitational effects as shown in the previous relation. It is worth pointing out, however, that while the \(S\)-curvature is dominated by the direction dependence of the conformal factor, a flat vertical space does not necessarily lead to (or result from) a direction independent conformal factor and such a case should be treated carefully as shall be demonstrated in a later section of this study. The field equations are then given by the calculus of variation on the action given in [63, 60, 47]2: Footnote 2: The gravitational constant has been taken as 1. \[\mathcal{R}_{MN}-\frac{1}{2}\mathcal{R}\mathcal{G}_{MN}=\mathcal{T}_{MN} \tag{47}\] where \(\mathcal{T}_{MN}\) is the energy-momentum tensor field on the tangent bundle in the adapted basis; namely \(\mathcal{T}_{\mu\nu}\) =\(\mathcal{T}_{\mu\nu}\) and \(\mathcal{T}_{ab}\) =\(W_{ab}\) with \(\mathcal{T}_{a\nu}\) =\(\mathcal{T}_{\mu b}\) = 0. By taking the trace of relation (47) we get the following: \[\mathcal{R}=-\frac{1}{n-1}\mathcal{T} \tag{48}\] where \(\mathcal{T}=\mathcal{G}^{MN}\mathcal{T}_{MN}\) or, equivalently, \(\mathcal{T}=T+W\), where \(T=g^{\mu\nu}\mathcal{T}_{\mu\nu}\) is the trace of the horizontal component of the energy-momentum tensor and \(W=g^{ab}W_{ab}\) is the trace of the vertical component of the energy-momentum tensor, respectively. We thus arrive at the following equivalent form of the field equations: \[\mathcal{R}_{MN}=\mathcal{T}_{MN}\ -\frac{1}{2n-2}\mathcal{T}\mathcal{G}_{MN} \tag{49}\] By virtue of relations (8), (47) and (49) the following proposition holds: the horizontal and vertical Ricci curvatures are equal to each other if and only if the horizontal and vertical components of the energy-momentum tensors are also equal to each other3; namely: Footnote 3: Unless otherwise stated explicitly, we shall henceforth assume that the Ricci curvatures are not equal to each other. \[W_{ab}=\delta^{\mu}_{\ a}S^{\nu}_{\ b}T_{\mu\nu}\iff S_{ab}=\delta^{\mu}_{\ a}S^{\nu}_{\ b}R_{\mu\nu} \tag{50}\] As can be seen from relation (50), an intrinsically geometric connection of the horizontal and vertical subspaces results in a physical connection of the energy-momentum tensors and vice-versa. This is, however, also a consequence of the profound relation between the metric of the horizontal and vertical subspaces that has been assumed in relation (8), without which relation (50) would not be true. ## 3 Geodesics We define the absolute energy \(\mathcal{E}\) as follows [60]: \[\mathcal{E}:=g_{ab}y^{a}y^{b}=\sigma(x,y)\mathcal{y}_{\ ab}(x)y^{a}y^{b} \tag{51}\] Using the absolute energy we define the following tensor: \[\bar{g}_{cd}:=\frac{1}{2}\bar{\partial}_{c}\bar{\partial}_{d}\mathcal{E}= \frac{\mathcal{E}}{2\sigma}\sigma_{cd}+\mathcal{y}_{\ ad}y^{a}\sigma_{c}+ \mathcal{y}_{\ ac}y^{a}\sigma_{d}+\mathcal{y}_{\ cd}\sigma \tag{52}\] where \(\sigma_{a}=\bar{\partial}_{a}\sigma\). 
We shall now calculate the inverse tensor \(\bar{g}\) as follows: \[\bar{g}^{ab}=g^{ac}g^{db}\bar{g}_{cd}=\frac{\mathcal{E}}{2\sigma^{3}}\sigma^{ ab}+y^{a}\frac{\sigma^{b}}{\sigma^{2}}+y^{b}\frac{\sigma^{a}}{\sigma^{2}}+ \frac{\gamma^{ab}}{\sigma} \tag{53}\] We further define: \[G^{a}:=\frac{1}{4}\bar{g}^{ab}(y^{k}\bar{\partial}_{b}\partial_{k}\mathcal{E} -\partial_{b}\mathcal{E}) \tag{54}\] where \(y^{k}=\delta^{i}_{\ a}y^{a}\). In our case: \[G^{a}(x,y):=\frac{1}{4\sigma}\left(\frac{\mathcal{E}}{2}\sigma^ {ab}+y^{a}\sigma^{b}+y^{b}\sigma^{a}+\gamma^{ab}\right)\times\] \[\left(\sigma_{b}\partial_{\mu}\gamma_{cd}y^{c}y^{d}y^{\mu}+2 \sigma\partial_{\mu}\gamma_{bc}y^{c}y^{\mu}+2\sigma_{\mu}\gamma_{bc}y^{c}y^{ \mu}+\frac{\mathcal{E}}{\sigma}\sigma_{\mu b}y^{\mu}-\delta^{\mu}_{\ b}\sigma \partial_{\mu}\gamma_{cd}y^{c}y^{d}+\delta^{\mu}_{\ b}\frac{\mathcal{E}}{ \sigma}\sigma_{\mu}\right) \tag{55}\] where \(\sigma_{\mu}=\partial_{\mu}\sigma\) and \(\sigma^{a}=g^{ab}\bar{\partial}_{b}\sigma\). We can now write the geodesic equation as follows: \[\frac{dy^{a}}{dt}+2G^{a}(x,y)=0,\ \ \ y^{a}=\frac{dx^{a}}{dt} \tag{56}\] We observe that: \[G^{a}(x,y)=\frac{1}{2}\gamma^{a}_{\ \mu\nu}y^{\mu}y^{\nu}+\frac{1}{2}r^{a}+ \frac{1}{2}l^{a} \tag{57}\] where \({\gamma^{a}}_{\mu\nu}\) are the Christoffel symbols of second type of the Riemannian metric \(\gamma\), \[r^{a}=\frac{1}{2\sigma}\left(2\sigma_{\mu}y^{a}y^{\mu}-\frac{ \mathcal{E}}{\sigma}\gamma^{\mu\beta}\sigma_{\beta}\right) \tag{58}\] is the conformal part of the geodesics corresponding to the Riemannian case, and \[l^{a}(x,y):=\frac{1}{2}\left(\frac{\mathcal{E}}{2\sigma}\sigma^{ ab}+y^{a}\frac{\sigma^{b}}{\sigma}+y^{b}\frac{\sigma^{a}}{\sigma}\right)\times\] \[\left(\sigma\gamma_{bc\mu}y^{c}y^{\mu}+\sigma_{b}\partial_{\mu} \gamma_{cd}y^{c}y^{d}y^{\mu}+2\sigma_{\mu}\gamma_{bc}y^{c}y^{\mu}+\frac{ \mathcal{E}}{\sigma}\sigma_{\mu b}y^{\mu}+\delta^{\mu}_{\ b}\frac{\mathcal{E}} {\sigma}\sigma_{\mu}\right)+\frac{\sigma^{a}}{\sigma}\partial_{\mu}{\gamma^{ c}}_{cd}y^{c}y^{d}y^{\mu}+\frac{\mathcal{E}}{\sigma}\sigma_{\mu}{}^{d}y^{\mu} \tag{59}\] is the generalized Lagrange conformal part of the geodesics, where \(\gamma_{a\mu\nu}\) are the Christoffel symbols of first type of the Riemannian metric \(\gamma\). Therefore we can write the geodesic equation as follows: \[\frac{dy^{a}}{dt}+{\gamma^{a}}_{\mu\nu}y^{\mu}y^{\nu}+r^{a}+l^{a}=0,\ \ \ y^{a}=\frac{dx^{a}}{dt} \tag{60}\] It is then clear, that the direction dependence introduced by \(\sigma\) contributes through \(l^{a}\) in the geodesics. If the conformal map \(\sigma\) is independent of the direction variable \(y\), i.e. \(\sigma=\sigma(x)\), then \(l^{a}=0\) and the geodesics reduce to their Riemannian counterpart. The internal \(y\) coordinates of the gravitational field express the anisotropic dark structure through the function \(\sigma(x,y)\). The additional terms \(r^{a}(x,y)\) and \(l^{a}(x,y)\) in rel. (60) provide an anisotropic conformal type of metric geodesics that incorporate dark gravitational effects which is imprinted in the structure of this spacetime. Dark gravity plays an essential role in these perturbed conformal geodesics. ## 4 Anisotropic Conformal Minkowski Spacetime In this section we shall examine a first application of this geometric framework by using a Minkowski metric structure for the underlying manifold. 
This could be especially interesting for the cosmology of a post-inflation universe which evolves towards flatness, since this metrical model could be considered connected to an anisotropic generalization of a de-Sitter metric spacetime in which the scale factor causes a conformal structure for the spatial metric, e.g. in a Friedmann metric space. In particular, let \[g_{\mu\nu}(x,y)=e^{f(x,y)}\eta_{\mu\nu} \tag{61}\] where \[\eta_{\mu\nu}=\text{diag}(1,-1,-1,-1) \tag{62}\] for some \(f:TM\rightarrow\mathbb{R}\) which is at least \(C^{2}\). Let the non-linear connection be of Cartan-type, i.e. \(N^{a}_{\ \kappa}=\gamma^{a}_{\ b\kappa}y^{b}\). Then the non-linear connection is obviously zero and the adapted basis \(\{\delta_{\mu},\bar{\partial}_{a}\}\) coincides with the ordinary basis \(\{\bar{\partial}_{\mu},\bar{\partial}_{a}\}\). Since the curvature tensor of the underlying structure, \(K=0\), it is relatively easy to find the curvature tensor \(\mathcal{R}\). Indeed: \[R_{\nu}{}^{\mu}_{\rho a}=\frac{1}{2}\Big{(}\delta^{\mu}_{\ [\rho}\partial_{\sigma]}\partial_{\nu}f+\eta_{\nu[\sigma}\eta^{\mu\lambda} \partial_{\rho]}\partial_{\lambda}f\Big{)}+\frac{1}{4}\Big{(}\delta^{\mu}_{\ [\sigma} \partial_{\rho]}f\partial_{\nu}f+\eta_{\nu[\rho}\eta^{\mu\lambda}\partial_{ \sigma]}f\partial_{\lambda}f+\delta^{\mu}_{\ [\rho}\eta_{\sigma]\nu}\eta^{\kappa\lambda} \partial_{\kappa}f\partial_{\lambda}f\Big{)} \tag{63}\] \[R_{b}{}^{a}_{\rho\rho}=0\ \,\ S^{\ \mu}_{\ \nu}{}_{ab}=0\] (64) \[S_{b}{}^{a}_{cd}=\frac{1}{2}\bigg{(}\delta^{a}_{\ [c}\bar{\partial}_{d]}\bar{ \partial}_{b}f+\eta_{b[d}\eta^{a\epsilon}\bar{\partial}_{c]}\bar{\partial}_{ \epsilon}f\bigg{)}+\frac{1}{4}\bigg{(}\delta^{a}_{\ [c}\bar{\partial}_{b]}f\bar{\partial}_{c]}f+\delta^{a}_{\ [c}\eta_{ d]\psi}\eta^{\epsilon\prime}\bar{\partial}_{\epsilon}f\bar{\partial}_{ \delta}f+\eta_{[c}\eta^{a\epsilon}\bar{\partial}_{d]}f\bar{\partial}_{ \epsilon}f\bigg{)} \tag{65}\] The Ricci tensors and scalars are then: \[R_{\,\,\,\nu\rho} =\frac{1}{4}\left(2\eta_{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, \[6\bigg{(}f_{00}\ -f_{11}\ -f_{33}\bigg{)}+3\bigg{(}-(f_{0}\,)^{2}+(f_{1 }\,)^{2}+(f_{3}\,)^{2}\bigg{)} =2(\rho+\phi)-6e^{f}(p-\psi) \tag{77}\] \[6\bigg{(}f_{55}\ 
+f_{66}\ +f_{77}\bigg{)}-3\bigg{(}(f_{5}\,)^{2}+(f_{6 }\,)^{2}+(f_{7}\,)^{2}\bigg{)} =10\phi-2\rho+6e^{f}(p+\psi)\] (78) \[6\bigg{(}f_{44}\ -f_{55}\ -f_{66}\bigg{)}+3\bigg{(}-(f_{4}\,)^{2}+(f_{ 5}\,)^{2}+(f_{6}\,)^{2}\bigg{)} =2(\rho+\phi)+6e^{f}(p-\psi)\] (79) \[6\bigg{(}f_{44}\ -f_{66}\ -f_{77}\bigg{)}+3\bigg{(}-(f_{4}\,)^{2}+(f_{ 6}\,)^{2}+(f_{7}\,)^{2}\bigg{)} =2(\rho+\phi)+6e^{f}(p-\psi)\] (80) \[6\bigg{(}f_{44}\ -f_{55}\ -f_{77}\bigg{)}+3\bigg{(}-(f_{4}\,)^{2}+(f_{ 5}\,)^{2}+(f_{7}\,)^{2}\bigg{)} =2(\rho+\phi)+6e^{f}(p-\psi) \tag{81}\] We shall now try to find a possible form for the conformal factor \(f(x,y)\) using the field equations (72-81). First, let \(F:TM\longrightarrow\mathbb{R}_{>0}\) be some auxiliary function such that \(F_{\mu\nu}=F_{ab}=0\ \forall\mu\neq\nu\) and \(a\neq b\). Such a function exists. For example: \[F(x,y)=\lambda_{0}(x^{0})+\lambda_{1}(x^{1})+\cdots+\lambda_{7}(x^{7}=y^{3}) \tag{82}\] with \(\lambda_{i}:I\subseteq\mathbb{R}\rightarrow\mathbb{R}\), \(\forall i=0,1,\cdots,7\) is such a function. Then \(f(x,y)=-2\ln(F(x,y))\) satisfies equations (72) and (73). Next, by manipulating relations (75-77) and (79-81) we get the following relations: \[2(f_{11}\ -f_{22}\ )=(f_{1}\,)^{2}-(f_{2}\,)^{2} \tag{83}\] \[2(f_{11}\ -f_{33}\ )=(f_{1}\,)^{2}-(f_{3}\,)^{2}\] (84) \[2(f_{22}\ -f_{33}\ )=(f_{2}\,)^{2}-(f_{3}\,)^{2}\] (85) \[2(f_{55}\ -f_{66}\ )=(f_{5}\,)^{2}-(f_{6}\,)^{2}\] (86) \[2(f_{55}\ -f_{77}\ )=(f_{5}\,)^{2}-(f_{7}\,)^{2}\] (87) \[2(f_{66}\ -f_{77}\ )=(f_{6}\,)^{2}-(f_{7}\,)^{2} \tag{88}\] Substituting \(f=-2\ln{(F)}\) in relations (83-88) we get the following pieces of information concerning \(F(x,y)\): \[F_{11}=F_{22}=F_{33} \tag{89}\] \[F_{55}=F_{66}=F_{77} \tag{90}\] Subsequently, using equations (74), (78), (89) and (90) we arrive at the following equation: \[\frac{F_{11}-F_{55}}{F}=-\frac{1}{3}\big{(}\rho-\phi\big{)} \tag{91}\] Though it does not represent a general solution, equation (91) demonstrates the dependence of the conformal factor on the thermodynamic variables of the energy-momentum tensor. If, in addition, the auxiliary function \(F(x,y)\) is of the form given in (82), then \(F_{11}-F_{55}\) is a constant due to equations (89) and (90). Then: \[g_{\mu\nu}=\chi(\rho-\phi)^{2}\eta_{\mu\nu} \tag{92}\] where \(\chi\) is a positive constant. In this special case (92) it can clearly be seen that the conformal factor is connected with the distribution of energy and matter in the anisotropic conformal Minkowski spacetime. ## 5 Anisotropic Conformal FLRW-Cosmology In this section we shall study an application of this geometric framework in cosmology. In particular, we shall use a FLRW metric structure for the underlying manifold \(M\) and derive Friedmann-like equations of the horizontal subspace on the tangent bundle. Let \[\gamma_{\mu\nu}=\begin{pmatrix}-1&0&0&0\\ 0&\frac{a^{2}}{1-kr^{2}}&0&0\\ 0&0&(ra)^{2}&0\\ 0&0&0&(ra\sin\theta)^{2}\end{pmatrix} \tag{93}\] In this case we shall consider an integrable non-linear connection. 
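As a quick symbolic cross-check of the reduction used in Section 4, note that with the ansatz \(f=-2\ln F\) each of the conditions (83)-(88) collapses to an equality of second derivatives of \(F\), as stated in Eqs. (89)-(90). The short sketch below verifies this for two representative coordinates; treating \(F\) as an unspecified function of just these two variables is an illustrative simplification.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Function('F', positive=True)(x1, x2)    # stand-in for the auxiliary function of Eq. (82)
f = -2 * sp.log(F)

lhs = 2 * (sp.diff(f, x1, 2) - sp.diff(f, x2, 2))    # left-hand side of Eq. (83)
rhs = sp.diff(f, x1)**2 - sp.diff(f, x2)**2          # right-hand side of Eq. (83)
residual = sp.simplify(lhs - rhs)

# residual is proportional to (F_22 - F_11)/F, so Eq. (83) holds exactly when F_11 = F_22,
# which is the content of Eq. (89); the remaining conditions (84)-(88) reduce in the same way.
print(residual)
```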
The diagonal components of the Ricci tensor will then be: \[R_{00}=-3\frac{\ddot{a}}{a}+\frac{1}{2}\mathcal{L}_{00}+\frac{1}{4 }M_{00} \tag{94}\] \[R_{11}=\frac{a\ddot{a}+2\dot{a}^{2}+2\kappa}{1-\kappa r^{2}}+ \frac{1}{2}\mathcal{L}_{11}+\frac{1}{4}M_{11}\] (95) \[R_{22}=r^{2}(a\ddot{a}+2\dot{a}^{2}+2\kappa)+\frac{1}{2} \mathcal{L}_{22}+\frac{1}{4}M_{22}\] (96) \[R_{33}=r^{2}\sin^{2}\theta(a\ddot{a}+2\dot{a}^{2}+2\kappa)+\frac {1}{2}\mathcal{L}_{33}+\frac{1}{4}M_{33} \tag{97}\] where: \[\mathcal{L}_{00}=\frac{2\kappa a^{2}r}{(1-\kappa r^{2})^{2}} \delta_{00}f+\frac{1-\kappa r^{2}}{a^{2}}\delta_{11}f+\frac{1}{r^{2}a^{2}} \delta_{22}f+\frac{1}{r^{2}a^{2}\sin^{2}\theta}\delta_{33}f-\frac{3\dot{a}}{a }\delta_{0}f+\frac{2-\kappa r^{2}}{ra^{2}}\delta_{1}f+\frac{\cot\theta}{r^{2}a ^{2}}\delta_{2}f \tag{98}\] \[M_{00}=2(\delta_{0}f)^{2}-(1-\kappa r^{2})\left(\frac{\delta_{1 }f}{a}\right)^{2}-\left(\frac{\delta_{2}f}{ra}\right)^{2}-\left(\frac{\delta_{ 3}f}{ra\sin\theta}\right)^{2}\] (99) \[\mathcal{L}_{11}=\frac{a^{2}\delta_{00}f}{1-\kappa r^{2}}-\frac{ \delta_{22}f}{(1-\kappa r^{2})r^{2}}-\frac{\delta_{33}f}{r^{2}\sin^{2}\theta(1 -\kappa r^{2})}+\frac{4a\dot{a}}{1-\kappa r^{2}}\delta_{0}f-\frac{2}{r}\delta _{1}f-\frac{\cot\theta}{r^{2}(1-\kappa r^{2})}\delta_{2}f\] (100) \[M_{11}=-\frac{a^{2}(\delta_{0}f)^{2}}{1-\kappa r^{2}}+2(\delta_ {1}f)^{2}+\frac{(\delta_{2}f)^{2}}{(1-\kappa r^{2})r^{2}}+\frac{(\delta_{3}f) ^{2}}{r^{2}\sin^{2}\theta(1-\kappa r^{2})}\] (101) \[\mathcal{L}_{22}=r^{2}a^{2}\delta_{00}f-r^{2}(1-\kappa r^{2}) \delta_{11}f-\frac{\delta_{33}f}{\sin^{2}\theta}+4r^{2}a\dot{a}\delta_{0}f-r( 3-4\kappa r^{2})\delta_{1}f-\cot\theta\delta_{3}f\] (102) \[M_{22}=-r^{2}a^{2}(\delta_{0}f)^{2}+r^{2}(1-\kappa r^{2})(\delta _{1}f)^{2}+2(\delta_{2}f)^{2}+\frac{(\delta_{3}f)^{2}}{\sin^{2}\theta}\] (103) \[\mathcal{L}_{33}=r^{2}a^{2}\sin^{2}\theta\delta_{00}f-r^{2}\sin^{ 2}\theta(1-\kappa r^{2})\delta_{11}f-\sin^{2}\theta\delta_{22}f+4r^{2}a\dot{ a}\sin^{2}\theta\delta_{0}f-r(3-4\kappa r^{2})\sin^{2}\theta\delta_{1}f-2\sin \theta\cos\theta\delta_{2}f\] (104) \[M_{33}=-r^{2}a^{2}\sin^{2}\theta(\delta_{0}f)^{2}+r^{2}\sin^{2} \theta(1-\kappa r^{2})(\delta_{1}f)^{2}+\sin^{2}\theta(\delta_{2}f)^{2}+2( \delta_{3}f)^{2} \tag{105}\] It can be seen in equations (94-97) that the horizontal Ricci curvature fields of the tangent bundle are the form given in (35), i.e. they consist of the Ricci curvature \(K\) of the Riemannian base manifold which appears perturbed by the two terms \(\mathcal{L}\) and \(M\) added to it. In particular, the Ricci curvature \(K\) of the Riemannian base manifold is naturally independent of the conformal factor and hence free of the influence of dark gravity, while \(\mathcal{L}\) is a pertubation of first order that is linear in terms of the conformal factor and \(M\) is a second order pertubation which is non-linear in terms of the conformal factor. In general the horizontal Ricci tensor field \(R\) is non-diagonal and non-symmetric, i.e. for \(\mu\neq\nu\), \(R_{\mu\nu}\neq 0\) and \(R_{\mu\nu}\neq R_{\nu\mu}\). Due to this increased complexity in the geometric structure, we are going to limit this first order approach to the study of the diagonal terms (94-97). Although limited to these terms, important results could still be deduced from their study as a first order generalization of the classical theory of gravity on the tangent bundle. 
The vertical Ricci curvature shall be: \[S_{00} =\frac{1}{2}\bigg{(}\frac{1-\kappa r^{2}}{a^{2}}\bar{\partial}_{11} f+\frac{\bar{\partial}_{22}f}{r^{2}a^{2}}+\frac{\bar{\partial}_{33}f}{r^{2}a^{2} \sin^{2}\theta}\bigg{)}-\frac{1}{4}\bigg{(}\frac{1-\kappa r^{2}}{a^{2}}(\bar{ \partial}_{1}f)^{2}+\frac{(\bar{\partial}_{2}f)^{2}}{r^{2}a^{2}}+\frac{(\bar{ \partial}_{3}f)^{3}}{r^{2}a^{2}\sin^{2}\theta}\bigg{)} \tag{106}\] \[S_{11} =\frac{1}{2}\bigg{(}\frac{a^{2}\bar{\partial}_{00}f}{1-\kappa r^{ 2}}-\frac{\bar{\partial}_{22}f}{r^{2}(1-\kappa r^{2})}-\frac{\bar{\partial}_{3 3}f}{r^{2}\sin^{2}\theta(1-\kappa r^{2})}\bigg{)}-\frac{1}{4}\bigg{(}\frac{(a \bar{\partial}_{0}f)^{2}}{1-\kappa r^{2}}-\frac{(\bar{\partial}_{2}f)^{2}}{r ^{2}(1-\kappa r^{2})}-\frac{(\bar{\partial}_{3}f)^{2}}{r^{2}\sin^{2}\theta(1- \kappa r^{2})}\bigg{)}\] (107) \[S_{22} =\frac{1}{2}\bigg{(}r^{2}a^{2}\bar{\partial}_{00}f-r^{2}(1- \kappa r^{2})\bar{\partial}_{11}f-\frac{\bar{\partial}_{33}f}{\sin^{2}\theta} \bigg{)}-\frac{1}{4}\bigg{(}(ra\bar{\partial}_{0}f)^{2}-(1-\kappa r^{2})(r \bar{\partial}_{1}f)^{2}-\frac{(\bar{\partial}_{3}f)^{2}}{\sin^{2}\theta} \bigg{)}\] (108) \[S_{33} =\frac{1}{2}\bigg{(}(ra\sin\theta)^{2}\bar{\partial}_{00}f-(r\sin \theta)^{2}(1-\kappa r^{2})\bar{\partial}_{11}f-\sin^{2}\theta\bar{\partial}_{ 22}f\bigg{)}\] \[-\frac{1}{4}\bigg{(}(ra\sin\theta\bar{\partial}_{0}f)^{2}-(1- \kappa r^{2})(r\sin\theta\bar{\partial}_{1}f)^{2}-(\sin\theta\bar{\partial}_ {2}f)^{2}\bigg{)}\] (109) \[S_{bc} =\frac{1}{4}\bigg{(}2\bar{\partial}_{cb}f-\bar{\partial}_{c}f \bar{\partial}_{b}f\bigg{)}\,\ \forall b\neq c \tag{110}\] As is the case with the horizontal Ricci curvature, the vertical Ricci is not diagonal but it is symmetric as is evident by relation (109) if the conformal factor \(f\) is at least \(C^{2}\). The scalar curvature will then be: \[\mathcal{R}=R+S=e^{-f}\left(K+\frac{1}{2}\mathcal{L}+\frac{1}{4}M+\frac{3}{4} \bar{S}\right) \tag{111}\] where \[R =e^{-f}\left(K+\frac{1}{2}\mathcal{L}+\frac{1}{4}M\right) \tag{112}\] \[S =\frac{3}{4}e^{-\bar{S}}\] (113) \[K =6\bigg{[}\frac{\ddot{a}}{a}+\bigg{(}\frac{\dot{a}}{a}\bigg{)}^{2 }+\frac{\kappa}{a^{2}}\bigg{]}\] (114) \[\mathcal{L} =3\bigg{[}\delta_{00}f-\frac{1-\kappa r^{2}}{a^{2}}\delta_{11}f -\frac{\delta_{22}f}{(ra)^{2}}-\frac{\delta_{33}f}{(ra\sin\theta)^{2}}+3\frac{ \dot{a}}{a}\delta_{0}f-\frac{2-3\kappa r^{2}}{ra^{2}}\delta_{1}f-\frac{\cot \theta}{(ra)^{2}}\delta_{2}f\bigg{]}\] (115) \[M =-3\bigg{[}(\delta_{0}f)^{2}-\frac{1-\kappa r^{2}}{a^{2}}(\delta _{1}f)^{2}-\frac{(\delta_{2}f)^{2}}{(ra)^{2}}-\frac{(\delta_{3}f)^{2}}{(ra\sin \theta)^{2}}\bigg{]}\] (116) \[\bar{S} =-2\bigg{[}\bar{\partial}_{00}f-\frac{1-\kappa r^{2}}{a^{2}}\bar {\partial}_{11}f-\frac{\bar{\partial}_{22}f}{(ra)^{2}}-\frac{\bar{\partial}_{33 }f}{(ra\sin\theta)^{2}}\bigg{]}-(\bar{\partial}_{0}f)^{2}+\frac{1-\kappa r^{2}} {a^{2}}(\bar{\partial}_{1}f)^{2}+\frac{(\bar{\partial}_{2}f)^{2}}{(ra)^{2}}+ \frac{(\bar{\partial}_{3}f)^{2}}{(ra\sin\theta)^{2}} \tag{117}\] ### Extended anisotropic conformal Friedmann-like equations Let us now consider the field equations (47). 
Suppose, as in section 4, that the energy-momentum tensor field \(\mathcal{T}\) is of the following form: \[T_{\mu\nu} =\begin{pmatrix}\rho(x)&0\\ 0&g_{ij}p(x)\end{pmatrix} \tag{118}\] \[W_{ab} =\begin{pmatrix}\phi(x,y)&0\\ 0&g_{ij}\psi(x,y)\end{pmatrix} \tag{119}\] where \(\rho(x)\), \(p(x):M\rightarrow\mathbb{R}\) are the ordinary density and pressure functions of a thermodynamic fluid and \(\phi(x,y)\), \(\psi(x,y):TM\rightarrow\mathbb{R}\) could potentially be viewed as generalized thermodynamic variables on the tangent bundle. Then, the horizontal Friedmann-like equations shall be of the following form: \[\left(\frac{\dot{a}}{a}\right)^{2}+\frac{\kappa}{a^{2}}+\frac{1}{8}\tilde{S}+ \frac{1}{12}X+\frac{1}{24}\Phi=\frac{\rho}{3} \tag{120}\] \[\frac{\ddot{a}}{a}+\frac{1}{8}\tilde{S}-\frac{1}{12}X_{i}-\frac{1}{24}\Phi_{i}= -\frac{\rho+3e^{f}p}{6} \tag{121}\] where \(i=1,2,3\) and \[X=\frac{3+4\kappa a^{2}r-6\kappa r^{2}+3\kappa^{2}r^{4}}{(1-\kappa r^{2})^{2} }\delta_{00}f-\frac{1-\kappa r^{2}}{a^{2}}\delta_{11}f-\frac{\delta_{22}f}{(ra )^{2}}-\frac{\delta_{33}f}{(ra\sin\theta)^{2}}+3\frac{\dot{a}}{a}\delta_{0}f- \frac{2-7\kappa r^{2}}{ra^{2}}\delta_{1}f-\frac{\cot\theta}{(ra)^{2}}\delta_{2}f \tag{122}\] \[\Phi=(\delta_{0}f)^{2}+\frac{1-\kappa r^{2}}{a^{2}}(\delta_{1}f)^{2}+\frac{( \delta_{2}f)^{2}}{(ra)^{2}}+\frac{(\delta_{3}f)^{2}}{(ra\sin\theta)^{2}} \tag{123}\] \[X_{1}=\frac{2\kappa a^{2}r}{(1-\kappa r^{2})^{2}}\delta_{00}f+4\frac{1-\kappa r ^{2}}{a^{2}}\delta_{11}f+\frac{\delta_{22}f}{(ra)^{2}}+\frac{\delta_{33}f}{(ra \sin\theta)^{2}}+\frac{2-4\kappa r^{2}}{ra^{2}}\delta_{1}f+\frac{\cot\theta}{ (ra)^{2}}\delta_{2}f \tag{124}\] \[\Phi_{1}=2(\delta_{0}f)^{2}+2\frac{1-\kappa r^{2}}{a^{2}}(\delta_{1}f)^{2}- \frac{(\delta_{2}f)^{2}}{(ra)^{2}}-\frac{(\delta_{3}f)^{2}}{(ra\sin\theta)^{2}} \tag{125}\] \[X_{2}=\frac{2\kappa a^{2}r}{(1-\kappa r^{2})^{2}}\delta_{00}f+\frac{1-\kappa r ^{2}}{a^{2}}\delta_{11}f+4\frac{\delta_{22}f}{(ra)^{2}}+\frac{\delta_{33}f}{(ra \sin\theta)^{2}}-\frac{1-2\kappa r^{2}}{ra^{2}}\delta_{1}f+\frac{\cot\theta}{ (ra)^{2}}\delta_{2}f \tag{126}\] \[\Phi_{2}=2(\delta_{0}f)^{2}-\frac{1-\kappa r^{2}}{a^{2}}(\delta_{1}f)^{2}+2 \frac{(\delta_{2}f)^{2}}{(ra)^{2}}-\frac{(\delta_{3}f)^{2}}{(ra\sin\theta)^{2}} \tag{127}\] \[X_{3}=\frac{2\kappa a^{2}r}{(1-\kappa r^{2})^{2}}\delta_{00}f+\frac{1-\kappa r ^{2}}{a^{2}}\delta_{11}f+\frac{\delta_{22}f}{(ra)^{2}}+4\frac{\delta_{33}f}{(ra \sin\theta)^{2}}-\frac{1-2\kappa r^{2}}{ra^{2}}\delta_{1}f-2\frac{\cot\theta}{ (ra)^{2}}\delta_{2}f \tag{128}\] \[\Phi_{3}=2(\delta_{0}f)^{2}-\frac{1-\kappa r^{2}}{a^{2}}(\delta_{1}f)^{2}- \frac{(\delta_{2}f)^{2}}{(ra)^{2}}+2\frac{(\delta_{3}f)^{2}}{(ra\sin\theta)^{2}} \tag{129}\] By virtue of the three equations (121) we get the following pair of generalized anisotropic conformal Friedmann-like equations for the horizontal subspace on the tangent bundle: \[\left(\frac{\dot{a}}{a}\right)^{2}+\frac{\kappa}{a^{2}}+\frac{1}{8}\tilde{S}+ \frac{1}{12}X+\frac{1}{24}\Phi=\frac{\rho}{3} \tag{130}\] \[\frac{\ddot{a}}{a}+\frac{1}{8}\tilde{S}-\frac{1}{6}\Psi-\frac{1}{12}\mathfrak{D }=-\frac{\rho+3e^{f}p}{6} \tag{131}\] where \(X\) and \(\Phi\) are given in relations (122,123) and \[\Psi=\frac{\kappa a^{2}r}{(1-\kappa r^{2})^{2}}\delta_{00}f+\frac{1-\kappa r ^{2}}{a^{2}}\delta_{11}f+\frac{\delta_{22}f}{(ra)^{2}}+\frac{\delta_{33}f}{(ra \sin\theta)^{2}} \tag{132}\] \[\mathfrak{D}=(\delta_{0}f)^{2} \tag{133}\] In view of relations (130,131), it is worth noting that the generalized anisotropic conformal Friedmann-like 
equations for the horizontal subspace on the tangent bundle include extra terms denoted by \(X\), \(\Psi\), \(\Phi\), \(\mathfrak{D}\) which introduce a higher order structure derived by the gravitational influence of dark matter and dark energy, which enrich the cosmological study of the evolution of the universe with further information. It is also clear that if these terms are equal to zero then equations (130,131) reduce to the ordinary Friedmann equations of general relativity. In particular, it can be seen that the scalar Ricci curvature \(S\) of the vertical subspace and especially \(\tilde{S}\), could be related to a dynamical anisotropic cosmological "constant" as is shown in [47], which emerges from the additional degrees of freedom of the anisotropic conformal geometric structure instead of being added ad hoc as in the classical case. Therefore, equations (130,131) reduce to the Friedmann equations of general relativity with dynamical cosmological parameter equal to \(\tilde{S}=-\frac{8}{3}\Lambda\), where in this case \(\Lambda\) denotes the varying cosmological constant [71]. By means of relation (113) the cosmological parameter is related to the scalar vertical curvature \(S\) in precisely the same way as in [47]. With respect to the classical case, the presence of a varying cosmological constant in the form of \(\tilde{S}\) indicates a different dynamical evolution of the universe which could be compared to the \(\Lambda-\)CDM cosmological model [71] in a further study, possibly viewed through the lens of a mimetic dark gravity model. In general, if \(f=0\), i.e. in the absence of dark gravity, then (130,131) reduce exactly to the classical Friedmann equations (without cosmological constant), as well as to the geometric frame as described in [47] for a flat vertical subspace. A special case of conformal factors that are of the form \(f(x,y^{0})\) is of noteworthy interest since, this family of conformal transformations leave the vertical subspace isotropic in the sense presented in [72], i.e. the vertical \(S\)-curvature tensor is diagonal and \(S_{ij}=\frac{1}{4}\gamma_{ij}(2\tilde{\partial}_{00}f-(\tilde{\partial}_{0}f )^{2})\). In particular \(S_{00}=0\). The vertical field equation (49) for \((a=b=0)\) then yields: \[\rho-5\phi=3e^{f}(p+\psi)\xrightarrow{\text{if}\,\,p\neq-\psi}f(x,y^{0})=\ln \left(\frac{\rho-5\phi}{3(p+\psi)}\right) \tag{134}\] As is the case in (92) it can be seen that the conformal factor of relation (134) is connected with the distribution of energy and matter in the anisotropic conformal spacetime. This is another indication that the conformal factor may be related to the thermodynamic properties of the spacetime. Graph 1 summarizes the relations between the physical and geometrical concepts that arise from this section. In particular, the internal properties (possibly of thermodynamical nature) of DM and DE are mathematically expressed through the conformal factor \(f\) which in turn induces a vertical \(S\)-curvature. This curvature produces then a varying dynamical cosmological "constant" \(\Lambda\) which is related to DE. A relation between DE and DM could possibly be studied as discussed in the next section 6. 
In light of the generalized anisotropic conformal Friedmann-like equations for the horizontal subspace on the tangent bundle (130-131), we can obtain the following pair of equations for the Hubble parameter \(H(t):=\dot{a}/a\): \[3H^{2}+\frac{3\kappa}{a^{2}}=\rho+\rho_{DE} \tag{135}\] \[2\dot{H}+3H^{2}+\frac{\kappa}{a^{2}}=-(e^{f}p+p_{DE}) \tag{136}\] where \[\rho_{DE}:=-\frac{3}{8}\tilde{S}-\frac{1}{4}X-\frac{1}{8}\Phi \tag{137}\] \[p_{DE}:=\frac{3}{8}\tilde{S}+\frac{1}{12}X+\frac{1}{24}\Phi-\frac{\Psi}{3}-\frac{1}{6}\mathfrak{D} \tag{138}\] could be interpreted as the density and pressure of DE, respectively. Let us now consider the following special case of a simplified linear conformal factor, i.e. let \(f\) be of the form: \[f(x,y)=\beta t+\mu y^{0}+\nu y^{1}. \tag{139}\] Additionally, let us focus on a spatially flat universe and assume dust matter; namely let \(\kappa=0\) and \(p=0\). For simplicity, let the coefficients of the non-linear connection vanish identically. In this case, we obtain \(X=-3\beta H\), \(\Phi=\beta^{2}\), \(\tilde{S}=-\mu^{2}+\frac{\nu^{2}}{a^{2}}\), \(\Psi=0\) and \(\mathfrak{D}=\beta^{2}\), where \(H(t)=\dot{a}/a\) is the Hubble function. Then from the Friedmann-like equations (135-136) we get: \[3H^{2}=\rho+\rho_{DE} \tag{140}\] \[2\dot{H}+3H^{2}=-p_{DE} \tag{141}\] where \[\rho_{DE}=\frac{1}{8}(3\mu^{2}-\beta^{2})-\frac{3}{8}\frac{\nu^{2}}{a^{2}}+\frac{3}{4}\beta H \tag{142}\] \[p_{DE}=-\frac{1}{8}(3\mu^{2}+\beta^{2})+\frac{3}{8}\frac{\nu^{2}}{a^{2}}-\frac{1}{4}\beta H. \tag{143}\] Hence, as described above, the richer structure of Finsler geometry produces an effective dark energy sector of geometric origin. The first term in (142) is constant and accounts for the usual cosmological constant, the second term is an effective spatial curvature term that will have a negligible role in the late-time universe, and the last term is a novel friction term. Additionally, since \(p=0\) the evolution of \(\rho\) reads simply \(\rho=\rho(0)a^{-3}\), while we can define the effective dark-energy equation-of-state parameter as \(w_{DE}\equiv p_{DE}/\rho_{DE}\). Finally, we introduce the density parameters \(\Omega_{m}\equiv\frac{\rho}{3H^{2}}\) and \(\Omega_{DE}\equiv\frac{\rho_{DE}}{3H^{2}}\), while we use the redshift \(z\) as the independent variable, defined through \(1+z=\frac{a_{0}}{a}\) (and setting the present scale factor as \(a_{0}=1\)). We evolve equations (140) and (141) numerically and in Fig. 1 we depict the evolution of the effective dark energy density parameter \(\Omega_{DE}\) and of the matter density parameter \(\Omega_{m}\), as well as the evolution of the effective dark-energy equation of state. Finally, we perform a confrontation of the obtained \(H(z)\) behavior with Supernovae type Ia (SN Ia) data. In particular, one measures the apparent magnitude \(m(z)\), which is related to the luminosity distance as \(m(z)-M=5\log\left[\frac{d_{L}(z)_{\rm th}}{Mpc}\right]+25\), with \(M\) and \(L\) the absolute magnitude and luminosity. Moreover, the predicted luminosity distance \(d_{L}(z)_{\rm th}\) is given as \(d_{L}\left(z\right)_{\rm th}\equiv(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}\). In Fig. 2 we present the theoretically predicted apparent minus absolute magnitude as well as the prediction of \(\Lambda\)CDM cosmology, on top of the \(580\) SN Ia observational data points from [73]. As we observe, the agreement is excellent. 
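A minimal numerical sketch of this evolution is given below. It solves the constraint (140) algebraically for \(H(z)\) (the constraint is quadratic in \(H\) because of the friction term \(\frac{3}{4}\beta H\) in (142)) and from it builds \(\Omega_{m}\), \(\Omega_{DE}\), \(w_{DE}\) and the luminosity distance in units of \(c/H_{0}\). The values of \(\beta\), \(\nu\) and \(\Omega_{m0}\) are taken from the figure captions, but here \(\mu\) is fixed by closing the Friedmann constraint at \(z=0\); the normalization conventions of the actual figures may differ, so this is an illustrative sketch rather than a reproduction of Figs. 1 and 2.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Illustrative parameters in units where H0 = 1 (assumed, not the paper's normalization).
beta, nu = 0.1, 0.1          # friction and effective-curvature parameters of Eq. (139)
Omega_m0 = 0.3               # matter density parameter today
H0 = 1.0
rho_m0 = 3.0 * H0**2 * Omega_m0

# Fix mu from the closure condition 3 H0^2 = rho_m0 + rho_DE(z=0), using Eq. (142).
mu2 = (8.0 / 3.0) * (3.0 * H0**2 - rho_m0 + beta**2 / 8.0
                     + 3.0 * nu**2 / 8.0 - 0.75 * beta * H0)
c0 = (3.0 * mu2 - beta**2) / 8.0          # constant ("cosmological constant") piece of rho_DE

def hubble(z):
    """Solve the constraint 3H^2 - (3 beta/4) H - C(z) = 0, Eq. (140), for H > 0."""
    C = rho_m0 * (1.0 + z)**3 + c0 - 0.375 * nu**2 * (1.0 + z)**2
    return (0.75 * beta + np.sqrt((0.75 * beta)**2 + 12.0 * C)) / 6.0

z = np.linspace(0.0, 2.0, 400)
H = hubble(z)
rho_m = rho_m0 * (1.0 + z)**3
rho_de = 3.0 * H**2 - rho_m                                            # from Eq. (140)
p_de = -(3.0 * mu2 + beta**2) / 8.0 + 0.375 * nu**2 * (1.0 + z)**2 - 0.25 * beta * H  # Eq. (143)

Omega_m = rho_m / (3.0 * H**2)
Omega_de = rho_de / (3.0 * H**2)
w_de = p_de / rho_de

# Luminosity distance in units of c/H0: d_L(z) = (1+z) * int_0^z dz'/H(z')
comoving = cumulative_trapezoid(1.0 / H, z, initial=0.0)
d_L = (1.0 + z) * comoving

print(f"w_DE(z=0) ~ {w_de[0]:.3f},  Omega_DE(z=0) ~ {Omega_de[0]:.3f}")
```

With these inputs the effective equation of state stays close to \(w_{DE}=-1\) at low redshift, the friction term providing the small departure from a pure cosmological constant.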
Figure 1: _Upper graph: The evolution of the effective dark energy density parameter \(\Omega_{DE}\) and of the matter density parameter \(\Omega_{m}\), as a function of the redshift \(z\), for \(\mu=1\), \(\beta=0.1\) and \(\nu=0.1\). Lower graph: The evolution of the corresponding dark-energy equation-of-state parameter \(w_{DE}\). We have imposed \(\Omega_{m0}\approx 0.3\) at present time._

## 6 Concluding Remarks and some Future Prospects

Motivated by the apparent need for a mathematical and in particular geometrical framework for a theory of gravity that includes the significant contributions of dark matter and dark energy, whose study currently constitutes the greatest problem of modern cosmology [28], we developed a theoretical model based on an anisotropic conformal spacetime on the tangent bundle that allows for extra degrees of freedom due to the higher-dimensionality of the underlying geometry, which intrinsically incorporates anisotropic direction-dependent dark gravity in the metric structure. This higher order internal geometric structure is interpreted as the contributions of dark matter and dark energy. In particular, in this framework we examined two cases of significant interest; namely the conformal anisotropic Minkowski spacetime and the conformal anisotropic FLRW-cosmology. A first application of this geometric framework is given by using a Minkowski metric structure for the underlying manifold. The interest of this case lies in its potential cosmological application for the study of a post-inflation universe which evolves towards flatness, because this metrical model might be connected to an anisotropic generalization of a de-Sitter metric spacetime in which a conformal structure for the spacelike metric (e.g. in a Friedmann metric space) is caused by the scale factor. The study of the anisotropic conformal Minkowski space, especially rel. (66, 67), reveals that even though the underlying base manifold is flat, dark matter, represented by the conformal factor, curves the spacetime. A further study of special types of conformal factors that constitute solutions to the field equations for this spacetime reveals a first indication, in eq. (92), that the conformal factor, which we nevertheless consider given a priori (e.g. determined by observational or experimental data), is potentially connected to the thermodynamic variables of energy and matter. A future dynamical analysis of this model, similar to that studied in [65, 71], could provide critical points which are vital regions of the evolution of the universe. Relating these results with current observational data could lead to a more complete understanding of this anisotropic geometric framework of gravity and cosmology, as well as of the contribution of dark matter and dark energy to the evolution of the universe. In particular, a deeper study of the conformal factor could be performed by imposing certain extra physical conditions on this model which are consistent with observational data. For instance, one could assume that the horizontal space tends towards flatness for large time, i.e. that the R-curvature tensor given in eq. (63) tends towards zero as the time parameter tends to infinity. Furthermore, assuming that the anisotropy of the universe reduces for large time, i.e. that the universe tends towards isotropy, one could argue that the vertical S-curvature given in eq. (65) should diminish in the limit as time tends to infinity. 
Figure 2: _The theoretically predicted apparent minus absolute magnitude for \(\mu=1\), \(\beta=0.1\) and \(\nu=0.1\) (red-dashed) and for \(\mu=1\), \(\beta=0.2\) and \(\nu=0.3\) (green-dotted). The observational points correspond to the \(580\) SN Ia data points from [73], and for completeness and comparison we depict the prediction of \(\Lambda\)CDM cosmology with the black-solid curve._

Due to the afore-mentioned connection of the conformal factor to the thermodynamic structure of energy and matter, the thermodynamic implications of such a future study could potentially be linked to the notion of a cosmological entropy [74, 75]. A second application studied in this work is an anisotropic conformal FLRW space which is, naturally, of significant cosmological interest. According to the generalized anisotropic conformal Friedmann-like equations for the horizontal subspace on the tangent bundle that we derived, rel. (130,131), we find that the classical Riemannian structure appears perturbed by the inclusion of extra terms which arise naturally from the geometry of the tangent bundle, are linked to the higher order structure of this framework and are interpreted as the additional gravitational influence of dark matter and dark energy in the cosmological evolution of the universe. We find, in particular, that the classical Friedmann equations of general relativity, as well as the generalized Friedmann equations in [47] with dynamical cosmological parameter, can be recovered if we interpret the vertical scalar curvature \(S\) as the varying cosmological constant in much the same way as in [47]. This cosmological parameter is produced internally through the geometry of the tangent bundle instead of being added ad hoc as in the classical case and could potentially be quantitatively studied for a given conformal factor. One such first approach at a more concrete example of a simple conformal factor is provided and connected with observational constraints on both dark energy and dark matter. We find that this special case is very consistent with observational results as well as with \(\Lambda\)-CDM, which could suggest that this model is promising and that further work on combining recent observational data and constraints, as in [76], with our theoretical model could potentially yield ever more accurate conformal factors to fit the observational results. In addition, a future study of the bounce conditions applied to this model could prove fruitful, as they could endow the conformal factor, and hence the contribution of dark matter and dark energy, with essential cosmological information related to the dynamic anisotropic evolution of the universe. For this purpose, a careful investigation of the equation of state of the anisotropic generalized thermodynamic variables of the cosmological fluid, and of the energy conditions that may ensue from the generalized anisotropic conformal Friedmann-like equations (130, 131), might prove invaluable for a deeper understanding of the connection between dark gravity, cosmology and anisotropy. In conclusion, this geometric framework of conformal gravity on the tangent bundle that incorporates the gravitational influence of dark matter and dark energy could allow for both a qualitative and a quantitative analysis of the cosmological aspects of the evolution of the universe, for instance in a future study that includes an application of this model using observational data. 
In particular, a quantitative study of the deflection angle using this model may provide a correction, due to dark gravity, to the already known results given for an anisotropic Finsler-Randers model in [65]. Moreover, potential links between dark energy and dark matter on galactic scales could be studied, since the behavior of the dark matter cosmological fluid on a large scale could reveal a relation with dark energy [77, 78, 79]. Connected to such a future endeavour could possibly be the Chaplygin gas model [80], whose study as a dark cosmological fluid, instead of the perfect fluid model, in conjunction with the present geometric model could potentially yield interesting results. Finally, an especially interesting future prospect of this mathematical framework would be the application of the present work in the development of a galactic model combining the already existing dark matter halo theory [81] and related observational data in order to study structure formation due to anisotropy.

## Acknowledgments

First, we wish to thank the anonymous referee(s) for their indispensable comments and suggestions that helped improve the present article. We would like to thank Dr. S. Konitopoulos for his insightful comments and fruitful discussions. We would also like to especially thank Dr. E.N. Saridakis for his invaluable help and astute comments. Finally, we thank Dr. F. K. Anagnostopoulos for our interesting discussions.

## Appendix A Appendix section

Let \[\gamma_{\mu\nu}=\begin{pmatrix}-1&0&0&0\\ 0&\frac{a^{2}}{1-kr^{2}}&0&0\\ 0&0&(ra)^{2}&0\\ 0&0&0&(ra\sin\theta)^{2}\end{pmatrix} \tag{144}\] Let the non-linear connection be of Cartan-type; namely we take \(N^{a}_{\ \kappa}=\gamma^{a}_{\ b\kappa}y^{b}\). Then the components of the non-linear connection are as follows: \[N^{0}_{\ 0}=0 \tag{145}\] \[N^{0}_{\ 1}=\frac{a\dot{a}}{1-\kappa r^{2}}y^{1} \tag{146}\] \[N^{0}_{\ 2}=a\dot{a}r^{2}y^{2} \tag{147}\] \[N^{0}_{\ 3}=a\dot{a}r^{2}\sin^{2}\theta y^{3} \tag{148}\] \[N^{1}_{\ 0}=\frac{\dot{a}}{a}y^{1} \tag{149}\] \[N^{1}_{\ 1}=\frac{\dot{a}}{a}y^{0}+\frac{\kappa r}{1-\kappa r^{2}}y^{1} \tag{150}\] \[N^{1}_{\ 2}=-r(1-\kappa r^{2})y^{2} \tag{151}\] \[N^{1}_{\ 3}=-r(1-\kappa r^{2})\sin^{2}\theta y^{3} \tag{152}\] \[N^{2}_{\ 0}=\frac{\dot{a}}{a}y^{2} \tag{153}\] \[N^{2}_{\ 1}=\frac{1}{r}y^{2} \tag{154}\] \[N^{2}_{\ 2}=\frac{\dot{a}}{a}y^{0}+\frac{1}{r}y^{1} \tag{155}\] \[N^{2}_{\ 3}=-\sin\theta\cos\theta y^{3} \tag{156}\] \[N^{3}_{\ 0}=\frac{\dot{a}}{a}y^{3} \tag{157}\] \[N^{3}_{\ 1}=\frac{1}{r}y^{3} \tag{158}\] \[N^{3}_{\ 2}=\cot\theta y^{3} \tag{159}\] \[N^{3}_{\ 3}=\frac{\dot{a}}{a}y^{0}+\frac{1}{r}y^{1}+\cot\theta y^{2} \tag{160}\] Indeed we can clearly see that such a non-linear connection is not integrable.
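The components (145)-(160) follow from the FLRW Christoffel symbols through \(N^{a}_{\ \kappa}=\gamma^{a}_{\ b\kappa}y^{b}\) and can be cross-checked symbolically. The sketch below recomputes them with sympy; it is offered only as an independent check of the listed components, not as part of the derivation.

```python
import sympy as sp

t, r, th, ph, kappa = sp.symbols('t r theta phi kappa')
a = sp.Function('a')(t)
X = [t, r, th, ph]
y = sp.symbols('y0:4')                    # fiber (velocity) coordinates y^0..y^3

gamma = sp.diag(-1, a**2/(1 - kappa*r**2), (r*a)**2, (r*a*sp.sin(th))**2)
ginv = gamma.inv()

def christoffel(lam, mu, nu):
    # Gamma^lam_{mu nu} = 1/2 g^{lam s} (d_nu g_{s mu} + d_mu g_{s nu} - d_s g_{mu nu})
    return sp.simplify(sum(ginv[lam, s]*(sp.diff(gamma[s, mu], X[nu])
                                         + sp.diff(gamma[s, nu], X[mu])
                                         - sp.diff(gamma[mu, nu], X[s]))/2 for s in range(4)))

# Cartan-type non-linear connection: N^a_kappa = Gamma^a_{b kappa} y^b
N = sp.Matrix(4, 4, lambda aidx, kidx: sp.simplify(
        sum(christoffel(aidx, bidx, kidx)*y[bidx] for bidx in range(4))))

print(N[0, 1])   # a*da/dt * y1 / (1 - kappa*r**2), cf. Eq. (146)
print(N[1, 3])   # -r*(1 - kappa*r**2)*sin(theta)**2 * y3, cf. Eq. (152)
```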
2303.09022
On the Importance of Three-Body Decays of Vector-Like Quarks
It is a common feature of vector-like extensions of the electroweak sector to have near degenerate states, such as electroweak doublets. In simplified models, it is usually assumed that these have decay widths saturated by two-body channels. As a consequence, experimental searches can be done focusing on only one of the states of the doublet. Taking as an example case the light exotic electroweak doublet present in the Minimal Composite Higgs Model, we show that including three-body decays in the pair production process makes this separation unfeasible, since both states of the doublet will be present and contribute significantly to the signal. In addition, by recasting present searches in multileptonic channels, with a simplified cut-and-count analysis, a relevant increase in discovery reach or exclusion potential is obtained; this indeed motivates a more detailed analysis. This study shows how an inclusive search strategy, taking into account both the near degeneracy and the presence of three-body decays, will have greater discovery power and be more natural from a model building perspective.
Carlos Bautista, Leonardo de Lima, Ricardo D. Matheus, Aurore Savoy-Navarro
2023-03-16T01:27:15Z
http://arxiv.org/abs/2303.09022v1
# On the Importance of Three-Body Decays of Vector-Like Quarks ###### Abstract It is a common feature of vector-like extensions of the electroweak sector to have near degenerate states, such as electroweak doublets. In simplified models, it is usually assumed that these have decay widths saturated by two-body channels. As a consequence, experimental searches can be done focusing on only one of the states of the doublet. Taking as an example case the light exotic electroweak doublet present in the Minimal Composite Higgs Model, we show that including three-body decays in the pair production process makes this separation unfeasible, since both states of the doublet will be present and contribute significantly to the signal. In addition, by recasting present searches in multileptonic channels, with a simplified cut-and-count analysis, a relevant increase in discovery reach or exclusion potential is obtained; this indeed motivates a more detailed analysis. This study shows how an inclusive search strategy, taking into account both the near degeneracy and the presence of three-body decays, will have greater discovery power and be more natural from a model building perspective. ## 1 Introduction Vectorlike quarks (VLQs) are a common feature of many models of physics beyond the Standard Model (SM), aiming to naturally obtain a hierarchy between the electroweak scale and new physics at the TeV scale. Models such as composite Higgs models [1; 2; 3; 4], warped extra-dimensional models [5; 6] and Little-Higgs models [7; 8; 9; 10; 11] implement a composite strongly coupled sector as the high energy completion of the SM, with the Higgs doublet being constructed from pNGBs from a dynamical symmetry breaking happening at some UV scale (beyond a few TeV). In this kind of dynamical models, fermion masses are generated by higher dimensional operators that mix the SM fermionic sector with the strong sector, in a scheme called partial compositeness [12]. The resulting mass spectrum is composed of the lighter SM chiral fermions and heavier vectorlike partners. The first two families of quarks and leptons are expected to have a small mixing with the strong sector, both from the point of view of theory and experiment [13], as their masses lie far below the EW scale, so viable models have the partners of these fermions well into the UV, if at all present. The same is not true for the third generation, as naturalness favors light top partners [14] and current constraints allow for the existence of these states around 1.5 TeV (the exact constraint depending on the model). Here we will focus on VLQs arising in the Minimal Composite Higgs Model (MCHM) [15], that obtains the EW doublet as the pNGB of the \(SO(5)/SO(4)\) breaking pattern, and preserves custodial symmetry. The model has been comprehensively reviewed in [13] and we will not cover it in detail. For our purposes it suffices to know that the strong sector fermions fit into complete representations of \(SO(4)\) that we can group together to form representations of \(SO(5)\). From the point of view of phenomenology, that means that top partners usually do not come alone, with some considerable tuning needed to push most of the new vectorlike states away from the lightest one. Even in the simplest embedding, that consists of a \(SO(4)\) fourplet and a \(SO(4)\) singlet, there are five vectorlike states: two top partners, a bottom partner, and two exotic states with hypercharge \(7/6\) and electric charges of \(2/3\) and \(5/3\). 
As we review below, for most points in the parameter space at least two of these states (an electroweak doublet) will be degenerate or near-degenerate in mass, and in many cases more than two will be close together. An important result of [16] is that for a big part of the parameter space the top partner has sizeable 3-body decays. The direct experimental searches for vectorlike top partners and exotic VLQs, on the other side, have focused mostly on model independent searches based around two main assumptions [17; 18; 19; 20; 21; 22; 23; 24; 25]: 1. There is only one VLQ contributing to the signal chosen. Other BSM resonances are much heavier, absent or decay into different final states. Separate searches are carried out for the two most popular VLQs: the top-partner \(T\) and the exotically charged \(X_{5/3}\). 2. The decay width is saturated by a few 2-body decay modes of the VLQs. Specifically, the \(X_{5/3}\) is assumed to decay only through \(X_{5/3}\to tW^{+}\) and the T has three decay modes: \(T\to bW^{+}\), \(T\to tZ\) and \(T\to th\), with searches making different assumptions on the branching ratios of these three channels, but always considering that they add up to one. These two assumptions have an important interplay, as limiting the decays to 2-body channels is what allows the \(T\) and the \(X_{5/3}\) to be searched for separately, even if they are close in mass. We will show that as soon as one considers 3-body decays both resonances will contribute to the same final states. The aim of this work is thus to evaluate the impact of considering the typical situation of complete models for VLQs, which in general violate assumptions (a) and (b) above. In section 2 we recast existing searches by relaxing assumption (b) and allowing for 3-body decay channels, obtaining an estimate on how much the exclusion limits for the top partners \(T\) and \(X_{5/3}\) are expected to independently change. In section 3 we relax also assumption (a) which together with 3-body decays means that many VLQ resonances can contribute to the same signal. In this case the searches for \(T\), \(X_{5/3}\) and other VLQs are not independent anymore and, using a typical point in the parameter space of the MCHM\({}_{5}\), we propose an inclusive search strategy for new physics signals associated with the MCHM\({}_{5}\). We summarize our results in section 4. ## 2 Effects of a three body decay channel in VLQ searches The usual searches of vectorlike quarks assume that they have only 2-body decay channels [19; 20; 21; 22; 23; 24; 25]. Here, we make a rough estimation of the effect of the inclusion of an additional three body channel to the decays of the lightest top partner \(T^{(1)}\) (so named to differentiate it from other top partners present in the MCHM) and the exotically charged \(X_{5/3}\) in regards to the mass exclusion for those states. 
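A schematic way to see how such an inclusion can shift the expected signal is to fold assumed per-channel selection efficiencies with different branching-ratio hypotheses for pair-produced \(T^{(1)}\) events and compare the resulting yields; the ratio of yields is what would rescale a published cross-section limit. In the sketch below the efficiencies, cross section and integrated luminosity are purely illustrative placeholders, not values from [26; 27].

```python
import numpy as np

# Decay modes of T^(1) considered in the sketch; "tWW" stands in for the 3-body channel.
modes = ["bW", "tZ", "th", "tWW"]

# Hypothetical selection efficiencies for T Tbar -> (mode_i)(mode_j); illustrative numbers only.
eff = np.array([[0.010, 0.012, 0.011, 0.015],
                [0.012, 0.014, 0.013, 0.017],
                [0.011, 0.013, 0.012, 0.016],
                [0.015, 0.017, 0.016, 0.020]])

def signal_yield(sigma_pair_fb, lumi_ifb, br):
    """Expected signal count: sigma * L * sum_ij BR_i BR_j eff_ij."""
    br = np.asarray(br, dtype=float)
    return sigma_pair_fb * lumi_ifb * np.einsum('i,ij,j->', br, eff, br)

# 2-body-only benchmark (BRs summing to one over bW, tZ, th) versus a scenario
# in which 30% of the width goes into the three-body channel.
n_2body = signal_yield(10.0, 138.0, [0.50, 0.25, 0.25, 0.00])
n_3body = signal_yield(10.0, 138.0, [0.35, 0.175, 0.175, 0.30])

print(n_3body / n_2body)   # ratio that would rescale a published cross-section limit
```

Depending on whether the three-body final state is selected more or less efficiently than the two-body ones, the recast exclusion strengthens or weakens; the procedure described next replaces such fixed efficiencies with fully simulated events.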
Our strategy will be to follow as closely as possible the experimental analyses used to search for the pair production of both resonances, specifically those in references [26] and [27], and apply the following simple steps: * simulate the pair production and decay, for a set of combinations of branching ratios, including those used in the experiments and adding new ones with 3-body decays; * apply cuts and a detector simulation that are as close as possible to those used by the experiments, in order to obtain the total number of signal events for each choice of branching ratios; * obtain ratios between the number of events in different scenarios and use those ratios to recast the existing limits on the masses of \(T^{(1)}\) and \(X_{5/3}\). The strategies applied to the \(T\) and \(X_{5/3}\) experimental searches are similar, and are briefly summarized here. The detailed description can be found in references [26; 27]. First, a set of regularly spaced values for the mass of the VLQ is chosen. For each mass value, a sample of pair-produced VLQs is generated at Leading Order (LO) with MADGRAPH5 aMC@NLO. For each VLQ 2-body decay considered, the generator is interfaced with PYTHIA 8 for parton showering and fragmentation, including the decay of the final states into the foreseen signatures. The simulated search (signal) samples are then processed through the full GEANT-based detector simulation. This is repeated for every 2-body channel under consideration. Furthermore, for each search channel, defined by its final-state signature, the related SM backgrounds are likewise simulated with the corresponding generators. The samples are then normalized using a next-to-next-to-leading-order (NNLO) calculation of the cross section, and the different decay channels are weighted to reflect the choices of branching ratios. For each search channel a cut-based pre-selection followed by a statistics-based method and/or a neural network (NN) analysis is applied to both the simulated signal and background data. This processing chain performs the event selection, categorization and reconstruction, enhancing the signal-to-background ratio. Once the analysis with simulated data is validated, the same analysis strategy is applied to the real data. A statistical comparison between the real and simulated data is then performed, which allows one to determine the upper limit on the cross section for pair production of the VLQ at 95% confidence level, for each value of the VLQ mass. Since the pairs are produced through standard QCD interactions, the only new physics parameter controlling the production cross section is the VLQ mass, so the upper limit on the cross section is directly converted to a lower limit on the resonance mass. In the MCHM\({}_{5}\) the VLQs are obtained from the mixing of the elementary fields \(q_{L}=(t_{L},b_{L})\) and \(t_{R}\) (having the same transformations under the SM gauge group as the SM quarks) with the composite resonances embedded in a fiveplet of \(SO(5)\) that decomposes under \(SO(4)\) as a fourplet, \(\Psi_{4}\), and a singlet, \(\Psi_{1}\): \[\Psi_{4} \sim (X_{5/3},X_{2/3},T,B)\,\] \[\Psi_{1} \sim \tilde{T}\, \tag{1}\] where the \(T\), \(B\) and \(\tilde{T}\) transform as \(t_{L}\), \(b_{L}\) and \(t_{R}\) respectively and the \(X_{Q}\) are exotic states with hypercharge \(Y=7/6\) and electric charge \(Q\). 
Up to electroweak symmetry breaking effects, the masses of the resonances are: \[M_{X_{Q}}=|M_{4}|,\ M_{T,B}=\sqrt{M_{4}^{2}+y_{L}^{2}f^{2}};\ M_{\tilde{T}}= \sqrt{M_{1}^{2}+y_{R}^{2}f^{2}}, \tag{2}\] where \(M_{1,4}\) are the vectorlike masses of \(\Psi_{1,4}\) and \(y_{L,R}\,f\) controls the strength of the mixing of the resonances with \(t_{L,R}\). See [16] for further details. From these expressions it is clear that the doublets are near degenerate, as we stated before. The diagonalization of the charge 2/3 mass matrix, involving the states \(t\), \(X_{2/3}\), \(T\) and \(\tilde{T}\), will produce the experimentally observed chiral top quark and three vectorlike top partners, which we denote by \(T^{(1)}\), \(T^{(2)}\) and \(T^{(3)}\) in order of increasing mass. From the approximate expressions of Eq. (2), one sees that \(T^{(1)}\) is typically composed of mostly \(X_{2/3}\subset\Psi_{4}\) if \(|M_{4}|<\sqrt{M_{1}^{2}+y_{R}^{2}f^{2}}\) or \(\tilde{T}\subset\Psi_{1}\) otherwise. The three body decays of \(T^{(1)}\) are also highly dependent on its composition: \[T_{L}^{(1)}=U_{L,1}t_{L}+U_{L,2}T_{L}+U_{L,3}X_{2/3L}+U_{L,4}\tilde{T}_{L} \tag{3}\] \[T_{R}^{(1)}=U_{R,1}t_{R}+U_{R,2}T_{R}+U_{R,3}X_{2/3R}+U_{R,4}\tilde{T}_{R} \tag{4}\] with \(L\) and \(R\) indicating the chiralities of each state,and \(U_{L,R}\) the corresponding unitary rotations to the mass basis. We define: \[\sin^{2}\theta=\frac{\eta_{L}^{F}+\eta_{R}^{F}}{2}\ \ \text{and}\ \ \ \cos^{2}\theta=\frac{\eta_{L}^{S}+\eta_{R}^{S}}{2}, \tag{5}\] where \(\eta_{L(R)}^{F}\) and \(\eta_{L(R)}^{S}\) are respectively the fourplet and singlet contributions for each chirality: \[\eta_{L(R)}^{F}=U_{L(R),2}^{2}+U_{L(R),3}^{2} \tag{6}\] \[\eta_{L(R)}^{S}=U_{L(R),1}^{2}+U_{L(R),4}^{2} \tag{7}\] The angle \(\theta\) in (5) characterizes the nature of \(T^{(1)}\), with \(\theta=\pi/2\) being a pure fourplet and \(\theta=0\) a pure singlet. We will divide our parameter space in two regions by saying that \(T^{(1)}\) is _fourplet-like_ if \(\theta\geq\pi/4\) and is _singlet-like_ if \(\theta<\pi/4\). In order to study the behaviour of three body decays in these two regions we scan over \(0.8\ \text{TeV}\leq f\leq 2\ \text{TeV},1\ \text{TeV}\leq|M_{1,4}|\leq 3\ \text{TeV},0.5\leq y_{L}\leq 3\), with \(y_{R}\) fixed by the top mass, and select from these the points that pass experimental constraints, as detailed in [16]1. The results are shown in figure 1. Footnote 1: Using the free phases of the fields, we may take all parameters except for one to be positive, which we take to be \(M_{1}\)[16]. Figure 1 shows that fourplet-like \(T^{(1)}\) will on average have a bigger branching ratio on 3-body decays with a prevailing decay in \(T^{(1)}\to W^{+}W^{-}t\), while the singlet-like case is the opposite, with no clear prevalence of any channel and generally smaller 3-body decay branching ratios. This motivates us to focus on the fourplet-like case in what follows. The fourplet-like scenario is also more interesting for the \(X_{5/3}\), as it will be one of the lightest states and in fact near degenerate with \(T^{(1)}\), with splitting caused by electroweak effects and smaller than \(m_{W}\). In this case, the only allowed 2-body decay is \(X_{5/3}\to W^{+}t\). The possible 3-body decays are listed in figure 2 which shows the distribution of \(X_{5/3}\) branching ratios for the same model points used before. One can see that the 3-body decays can be sizeable and dominated by two channels: \(X_{5/3}\to W^{+}th\) and \(X_{5/3}\to W^{+}tZ\). 
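As a quick numerical illustration of Eq. (2) and of the fourplet/singlet classification of Eqs. (5)-(7), the short sketch below computes the approximate spectrum and the angle \(\theta\) for a given parameter point. It is only a sketch: the parameter values and the mixing-matrix rows are made-up examples, not points from the scan of [16].

```python
import numpy as np

def vlq_spectrum(M1, M4, yL, yR, f):
    """Approximate resonance masses of Eq. (2), before EWSB corrections (all inputs in TeV)."""
    M_X = abs(M4)                              # X_{5/3} and X_{2/3}
    M_TB = np.sqrt(M4**2 + (yL * f)**2)        # T and B
    M_Ttilde = np.sqrt(M1**2 + (yR * f)**2)    # T-tilde
    return M_X, M_TB, M_Ttilde

def theta_from_mixing(U_L, U_R):
    """Angle theta of Eq. (5) from the T^(1) rows of the left/right rotation matrices.
    Basis ordering (t, T, X_{2/3}, T-tilde), as in Eqs. (3)-(4)."""
    eta_F = 0.5 * (U_L[1]**2 + U_L[2]**2 + U_R[1]**2 + U_R[2]**2)  # fourplet fraction, Eq. (6)
    eta_S = 0.5 * (U_L[0]**2 + U_L[3]**2 + U_R[0]**2 + U_R[3]**2)  # singlet fraction, Eq. (7)
    return np.arctan2(np.sqrt(eta_F), np.sqrt(eta_S))

# Made-up example point (TeV): M1, M4, yL, yR, f
print(vlq_spectrum(2.0, 1.3, 1.5, 1.0, 1.0))

# Made-up, approximately unitary rows for T^(1); theta >= pi/4 means "fourplet-like"
U_L = np.array([0.10, 0.15, 0.97, 0.12])
U_R = np.array([0.20, 0.05, 0.95, 0.23])
print("fourplet-like" if theta_from_mixing(U_L, U_R) >= np.pi / 4 else "singlet-like")
```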
### Effect on the \(T^{(1)}\) search In this section we will focus on the \(W^{+}W^{-}t\) decays of the fourplet-like \(T^{(1)}\); the Feynman diagrams that mainly contribute2 to this decay are shown in figure 3. The three channels in figure 3 have contributions of the same magnitude and interfere positively to increase the total three-body decay width. Regarding the two-body decays, the fourplet-like \(T^{(1)}\) has \(\mathrm{Br}\!\left[T^{(1)}\to bW^{+}\right]\sim 0\), and \(\mathrm{Br}\left[T^{(1)}\to tZ\right]\sim\mathrm{Br}\left[T^{(1)}\to th\right]\). Figure 2: Distribution of the branching ratios of the 5/3 charged resonance (\(X_{5/3}\)) decays for fourplet-like points. Figure 1: Distributions of the branching ratio of the \(T^{(1)}\to W^{+}W^{-}t\) channel (in blue) and the sum of the remaining three body decay channels (in orange). The remaining three body channels are \(\bar{t}tt\), \(\bar{b}hW^{+}\), \(\bar{b}ZW^{+}\), \(b\bar{b}t\), \(hht\), \(hZt\) and \(ZZt\). In order to estimate the effect of the three-body decay on the existing \(T^{(1)}\) search we simulate \(p\bar{p}\to T^{(1)}\bar{T}^{(1)}\) for the same set of masses used in [26], choosing \(m_{T^{(1)}}\) in the range \([0.9~{}\text{TeV},1.8~{}\text{TeV}]\) in steps of 100 GeV. The pair production is followed by inclusive decays into all relevant two- and three-body channels, namely: \(th\), \(tZ\), \(Wb\) and \(WWt\). The events are then showered and hadronized in Pythia and finally passed to Delphes for a fast detector analysis3. All simulations are done at LO, but the final cross section in each channel is rescaled to reflect a particular combination of branching ratios: Footnote 3: we used the default CMS card with a few modifications in the reconstruction algorithms to follow more closely what was done in [26]. The jet reconstruction was made using the anti-KT (AKT) algorithm with a radius of 0.4, only jets with \(p_{T}>30\) GeV and \(|\eta|<2.4\) were selected, and leptons are required to satisfy a momentum-dependent isolation requirement. For details see [28]. \[\sigma_{p\bar{p}\to T^{(1)}\bar{T}^{(1)}\to D_{1}\bar{D}_{2}}=\sigma_{p\bar{p }\to T^{(1)}\bar{T}^{(1)}}\times F\left[\text{BR}(D_{1}),\text{BR}(D_{2})\right] \tag{8}\] where \(D_{1}\) and \(D_{2}\) label the possible decay channels and: \[F\left[\text{BR}(D_{1}),\text{BR}(D_{2})\right]=\begin{cases}[\text{BR}(D)]^{2 }&\text{, if }D_{1}=D_{2}=D\\ 2\times\text{BR}(D_{1})\times\text{BR}(D_{2})&\text{, if }D_{1}\neq D_{2}\end{cases} \tag{9}\] One can then analyze different scenarios; we will focus on the three possibilities listed in table 1, where the first two rows are the ones used in [26] and the third is the typical fourplet-like behaviour in the \(\text{MCHM}_{5}\) for \(T^{(1)}\) masses around 1.5 TeV (\(\text{BR}(W^{+}W^{-}t)\) can be larger for higher masses [16]). In [26] three signal channels are considered: single-lepton, same-sign dilepton (2SSL) and multilepton, and the combined constraint is shown in figure 4. Here we focus on the 2SSL channel, where a more straightforward cut-and-count analysis was performed by CMS. 
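The reweighting of Eqs. (8)-(9) is what lets a single inclusive sample be reused for each scenario of table 1. The following minimal sketch (not part of the original analysis; the pair-production cross section is a placeholder number) shows the bookkeeping and checks that the weights sum to one.

```python
from itertools import combinations_with_replacement

def pair_weights(br):
    """F[BR(D1), BR(D2)] of Eq. (9) for every unordered pair of decay channels."""
    return {
        (d1, d2): br[d1] ** 2 if d1 == d2 else 2.0 * br[d1] * br[d2]
        for d1, d2 in combinations_with_replacement(sorted(br), 2)
    }

# Branching-ratio scenarios of table 1
scenarios = {
    "simplified singlet": {"Wb": 0.50, "th": 0.25, "tZ": 0.25, "WWt": 0.00},
    "simplified doublet": {"Wb": 0.00, "th": 0.50, "tZ": 0.50, "WWt": 0.00},
    "fourplet-like":      {"Wb": 0.00, "th": 0.45, "tZ": 0.45, "WWt": 0.10},
}

sigma_pair = 6.6  # fb, placeholder value for the pair-production cross section at one mass point
for name, br in scenarios.items():
    w = pair_weights(br)
    assert abs(sum(w.values()) - 1.0) < 1e-12          # the weights of Eq. (9) sum to one
    sigma_eff = {pair: sigma_pair * f for pair, f in w.items()}
    print(name, round(sigma_eff[("WWt", "th")], 3), "fb in the (WWt, th) channel")
```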
\begin{table} \begin{tabular}{c|c c c c} Scenario & \(\text{BR}(W^{+}b)\) & \(\text{BR}(th)\) & \(\text{BR}(tZ)\) & \(\text{BR}(W^{+}W^{-}t)\) \\ \hline “Simplified singlet” & 0.5 & 0.25 & 0.25 & 0 \\ “Simplified doublet” & 0 & 0.5 & 0.5 & 0 \\ “Fourplet-like” & 0 & 0.45 & 0.45 & 0.1 \\ \end{tabular} \end{table} Table 1: Scenarios considered in the \(T^{(1)}\) analysis and their corresponding branching ratio configurations. Figure 3: Feynman diagrams of the main contributions to \(T^{(1)}\to W^{+}W^{-}t\) in fourplet-like scenarios. It is important to understand that, since all 2-body and 3-body decays of a pair of \(T^{(1)}\) can contribute to those channels, the main effect of changing the branching ratios comes from the fact that some decay channels may be more "resistant" to the cuts in the analysis, especially the selection on the number and charge of leptons. The number of surviving events will be given by: \[N^{\rm CUT}=\sum_{D_{1},D_{2}}\sigma_{p\bar{p}\to T^{(1)}\widetilde{T}^{(1)} \to D_{1}\bar{D}_{2}}\times\xi_{D_{1},D_{2}}\times\mathcal{L}, \tag{10}\] where \(\mathcal{L}\) is the luminosity and \(\xi_{D_{1},D_{2}}\) is the product of detector and cut efficiencies for each channel. In the 2SSL channel, exactly two isolated leptons with the same sign of electric charge are demanded, with the following cuts4: Footnote 4: All cuts follow ref [26] and are justified there. \(H_{T}^{\rm lep}\) is cut at different values for the different datasets used in their analysis; here we use the value for the 2017-2018 data, which contains most of the analyzed luminosity. * leading (subleading) lepton: \(p_{T}>40(30)\) GeV; * all leptons: \(|\eta|<2.4\); * invariant mass of the 2SSL pair \(m_{ll}>20\) GeV and outside the Z window: [76.1 GeV, 106.1 GeV]; * number of jets \(N_{j}\geq 4\) (AKT with \(R=0.4\), \(p_{T}>30\) GeV and \(|\eta|<2.4\)); * \(H_{T}^{\rm lep}>400\) GeV. On the left of figure 5 we show the number of events passing the cuts for each \(T\) mass in the different scenarios; on the right we show the ratio between those numbers. One expects that, to first order, an increase in the number of events will lead to a proportional decrease in the experimental upper limit, and we make that assumption here. Figure 4: Expected and observed limits of the signal cross section upper limit at 95% CL for the simplified singlet (left) and simplified doublet (right) scenarios obtained by combining the analyses done in [26] from single lepton, same-sign dilepton and multilepton channels. The band around the theoretical prediction shows the theoretical uncertainty. Figure extracted from [26]. We can check this assumption using the simplified doublet to simplified singlet ratio (blue curve on the right of figure 5). In the mass region analysed this ratio is around 2.1, which we can compare with the ratio between the observed upper limits for these two scenarios in [26]. Figure 4 only shows the limits for the combination of all channels, but considering the 2SSL channel alone that ratio is around 1.8 [29], which is similar to the one we obtain. We can now focus on the comparison between the simplified doublet and fourplet-like scenarios. The red curve in figure 5 shows a ratio of around 1.5, in the direction of increasing the number of events. The main effect here is that the presence of a 3-body decay into \(WWt\) increases the probability of finding same-sign leptons, even with a small branching ratio into that channel (we have \(\text{BR}[W^{+}W^{-}t]=0.1\)). 
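For reference, the sketch below shows how the bookkeeping of Eq. (10) and the first-order limit recast can be organized in code. It is illustrative only: the event dictionary, efficiencies and numerical values are placeholders rather than the output of our Delphes simulation.

```python
def passes_2ssl(ev):
    """2SSL selection listed above; momenta in GeV. `ev` holds reconstructed-object info."""
    if len(ev["lep_pt"]) != 2 or ev["lep_charge"][0] != ev["lep_charge"][1]:
        return False
    in_z_window = 76.1 < ev["m_ll"] < 106.1
    return (ev["lep_pt"][0] > 40 and ev["lep_pt"][1] > 30
            and all(abs(eta) < 2.4 for eta in ev["lep_eta"])
            and ev["m_ll"] > 20 and not in_z_window
            and ev["n_jets"] >= 4
            and ev["ht_lep"] > 400)

def n_cut(sigma_eff, eff, lumi):
    """Eq. (10): expected surviving events, summed over decay-channel pairs (sigma in fb, lumi in fb^-1)."""
    return sum(sigma_eff[pair] * eff[pair] * lumi for pair in sigma_eff)

# First-order recast: an increase in surviving events maps to a proportional decrease of the limit.
n_doublet, n_fourplet = 40.0, 60.0        # placeholder yields in the two scenarios
limit_doublet = 2.0                       # fb, placeholder observed upper limit
limit_fourplet = limit_doublet / (n_fourplet / n_doublet)
print(limit_fourplet)
```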
We expect the same effect to be present in all multi-lepton channels. We can now divide the simplified doublet upper limit by the ratio for each mass to estimate the upper limit of the fourplet-like scenario5, with results shown on figure 6. In this very rough approximation the present exclusion would increase to 1.6 TeV from the 1.5 TeV obtained for the doublet in [26]. Despite the roughness of this analysis, we firmly believe it motivates a new analysis by CMS that relaxes the assumption of 2-body decays only, as taking the 3-body decays into consideration will probably increase the mass exclusion using the same data available today (and such decays are present in most realistic MCHMs). Footnote 5: Here we make another approximation, as the limits in figure 4 are for the three combined channels, and the ratios were obtained for the 2SSL channel alone. ### Effect on the \(X_{5/3}\) search In the case of the \(X_{5/3}\) we follow very closely the strategy of the previous section, now using [27] as the experimental search to be recast. In [27] a single decay channel is considered, \(X_{5/3}\to W^{+}t\), and the search is done separately for both pure left-handed and right-handed \(X_{5/3}\). The cuts applied to the two chiralities are the same though, and the limits obtained are similar (1.33 TeV for \(X_{5/3}^{R}\) and 1.30 TeV for \(X_{5/3}^{L}\)), so we can expect to get a good estimate for the inclusion of 3-body decays in the vectorlike case treated here by using the same cuts. Two channels are analysed, the 2SSL and the single-lepton case, and Figure 5: (left) Number of events passing the 2SSL cuts in each scenario from table 1. (right) Ratio of the number of events between the fourplet-like and the simplified doublet scenarios (in red) and between the simplified doublet and the simplified singlet scenario (in blue). results are presented for each channel and also for the combination. The results obtained for the 2SSL channel, which is the addressed channel here, can be seen in figure 7. Since we are focusing on the fourplet-like scenario, the main 3-body decays are \(X_{5/3}\to W^{+}th\) and \(X_{5/3}\to W^{+}tZ\), the relevant diagrams are shown in figures 8 and 9. In the majority of the parameter space points scanned the branching ratios in these two channels are similar, so we work with the 3-body scenario shown in table 2. The simulation and cut flow follows closely what was done in section 2.1, with the following changes: the subleading lepton is required to have \(p_{T}>35\) GeV, number of jets Figure 6: Expected and observed upper limits taken from Fig. 4 for the simplified singlet (in blue) and simplified doublet (in red) scenarios. The purple line represents the estimated limits in the fourplet- like scenario; 1L+2SSL+MultiLep refers to the results from the 3 corresponding channels analysed in [26]. Figure 7: Expected and observed upper limits of the signal cross section at 95% CL for an LH (left) and RH (right) \(X_{5/3}\) from the same-sign dilepton search performed by [27]. The band around the theoretical prediction shows the theoretical uncertainty. Figure extracted from [27]. \(N_{j}\geq 2\), the total number of constituents6\(N_{\text{const}}\geq 5\) and \(H_{T}^{\text{lep}}>1200\text{ GeV}\). The ratio between the number of events passing the cuts in each scenario can be seen in figure 10. The effect of the 3-body decays goes in the same direction but is clearly smaller than in the \(T^{(1)}\) case, with an increase in the number of events between 5% and 10%. 
This is a consequence of the fact that the 3-body decays now introduce extra \(Z\) or \(h\), instead of extra \(W\)'s, and those do not contribute as strongly to the same-sign dilepton channel. The effect is correspondingly smaller in the recasting to the mass exclusion limit, so no significant change to the limit is obtained. Footnote 6: The number of constituents is equal to the number of jets plus the number of leptons beyond the two considered for the lepton pair. \begin{table} \begin{tabular}{c|c c c} Scenario & BR(\(W^{+}t\)) & BR(\(W^{+}tZ\)) & BR(\(W^{+}th\)) \\ \hline _2-body_ & 1 & 0 & 0 \\ _3-body_ & 0.8 & 0.1 & 0.1 \\ \end{tabular} \end{table} Table 2: Scenarios considered in the \(X_{5/3}\) analysis and their corresponding branching ratio configurations. Figure 8: Feynman diagrams of the three body decay \(X_{5/3}\to W^{+}tZ\). Figure 9: Feynman diagrams of the three body decay \(X_{5/3}\to W^{+}th\). ## 3 Inclusive search for vectorlike resonances in the presence of 3-body decays Now, motivated by the fact that including 3-body decays can increase the experimental sensitivity to VLQs, we also relax the very common assumption that there is only one light VLQ. The simplest scenario in a complete model that provides two light VLQs comes from the \(\mathrm{MCHM}_{5}\) with a fourplet-like \(T^{(1)}\). In [16] we have found a few benchmark points which are good representatives of this situation, and we will use one of them, named \(C_{9}\) in [16], to propose a search strategy. The main phenomenological characteristics of the model can be seen in table 3. This point was chosen because it reproduces well the fourplet-like scenario of the previous section (specifically the 3-body decay branching ratios) and the masses of the low-lying resonances are close to experimental limits7. Footnote 7: The updated search in [26] was published in the final stages of this work, after this analysis was finished, increasing the \(2\sigma\) constraint to \(m_{T^{(1)}}\approx 1.5\) TeV from previous constraints lying around 1.3 TeV to 1.4 TeV (depending on the decay assumptions) [19; 30; 31; 32; 33; 34]. The results of this section can be easily extrapolated for small increases in mass. The key characteristic of a model with a fourplet-like \(T^{(1)}\) is the degeneracy in mass with the \(X_{5/3}\). Taken together with the presence of 3-body decays, this makes it non-trivial to separate searches for these two resonances. To clarify this point, consider the two diagrams in figure 11. Due to their degenerate masses the \(X_{5/3}\) and \(T^{(1)}\) cannot both be on-shell in the upper decay chain in those diagrams. This means that, for instance, diagram 11a can generate two different events: (i) the production of an on-shell \(X_{5/3}\), followed by its 3-body decay; or (ii) the \(X_{5/3}\)-mediated production of on-shell \(T^{(1)}+W\), followed by a 2-body decay of \(T^{(1)}\). The key point here is that the final states for these two situations are the same (an analogous situation occurs in diagram 11b, with the roles of \(T^{(1)}\) and \(X_{5/3}\) reversed). This also happens for other combinations of 2-body and 3-body decays, but the decays in figure 11 are the dominant 3-body decays in the fourplet-like scenario. 
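The kinematic origin of this statement is simple: the \(X_{5/3}\)-\(T^{(1)}\) splitting comes from electroweak effects and is smaller than \(m_{W}\), so one of the two heavy states in the chain must be off shell. A two-line check with the benchmark masses of table 3 (values rounded as quoted there):

```python
m_X, m_T1, m_W = 1300.0, 1300.0, 80.4   # GeV: X_{5/3} and T^(1) masses of point C_9, and the W mass
print(m_X >= m_T1 + m_W)                # False: the chain cannot have both heavy states on shell
```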
Figure 12 shows how these two contributions can mix in kinematic variables, in 12b one can clearly see two features: (i) the peak generated at 1.3 TeV generated by decays of on-shell \(T^{(1)}\); and (ii) the bump in the region \(\mathrm{M}[t,Z]\lesssim 1.3\) generated by 3-body decays Figure 10: \(X_{5/3}\) resonances search: ratio of the number of events passing the kinematical cuts in the 2-body and the 3-body scenarios. off on-shell \(X_{5/3}\) (which force \(T^{(1)}\) off-shell). This illustrates the difficulty in separating the two signals: in the example a lot of \(tZ\) events will be coming from off-shell \(T^{(1)}\) even when the resonance itself is narrow, which is a counter-intuitive result. Even if the peak is properly identified, conclusions about the production cross-section will be affected by the fact that it is sitting on top of another new physics signal. The same applies to the invariant mass \(\mathrm{M}[W^{+},t,Z]\), in 12a, where we see a peak from on-shell \(X_{5/3}\) sitting on top of a off-shell \(X_{5/3}\) bump. It is important to realize that this is a general feature of the degeneracy in mass and \begin{table} \begin{tabular}{c c c c c c} \hline \hline & \(T^{(1)}\) & \(T^{(2)}\) & \(T^{(3)}\) & \(B\) & \(X_{5/3}\) \\ \hline Mass (TeV) & 1.3 & 1.8 & 2.0 & 2.0 & 1.3 \\ \hline Width (GeV) & 7.8 & 13.4 & 6.8 & 5.5 & 6.7 \\ \hline Pair production \(\sigma\) (fb) & 6.6 & 0.50 & 0.17 & 0.21 & 6.7 \\ \hline BR(\(th\)) & 0.46 & 0.16 & 0.03 & - & - \\ BR(\(tZ\)) & 0.39 & 0.07 & 0.14 & - & - \\ BR(\(W^{+}b\)) & 0.02 & 0.20 & 0.14 & - & - \\ BR(\(W^{-}t\)) & - & - & - & 0.05 & - \\ BR(\(W^{+}t\)) & - & - & - & - & 0.86 \\ BR(\(W^{+}W^{-}t\)) & 0.10 & 0.12 & 0.01 & - & - \\ BR(\(W^{+}tZ\)) & - & - & - & - & 0.03 \\ BR(\(W^{+}ht\)) & - & - & - & - & 0.03 \\ BR(\(X_{5/3}W^{-}\)) & - & 0.13 & 0.01 & - & - \\ BR(\(T^{(1)}h\)) & - & 0.07 & 0.01 & - & - \\ BR(\(T^{(1)}Z\)) & - & 0.06 & 0.01 & - & - \\ BR(\(T^{(2)}h\)) & - & - & 0.18 & - & - \\ BR(\(T^{(2)}Z\)) & - & - & 0.42 & - & - \\ BR(\(W^{-}T^{(2)}\)) & - & - & - & 0.77 & - \\ Other BRs & 0.03 & 0.19 & 0.05 & 0.18 & 0.08 \\ \hline \hline \end{tabular} \end{table} Table 3: Masses, decay widths and branching ratios of the resonances in the benchmark point \(C_{9}\). Figure 11: Feynman diagrams of the processes involving both two- and tree-body decays of resonances. the presence of 3-body decays, and will be present in any model as long as the involved coupling constants are sizeable. In the MCHM\({}_{5}\) this is guaranteed by the fact that when the \(T^{(1)}\) and the \(X_{5/3}\) are coming from the same multiplet (i.e. we have a fourplet-like \(T^{(1)}\)), the couplings will be significant and the states degenerate, which implies the 3-body decays will be relevant. ### Signal Instead of going through the extra problem of trying to disentangle these two states, we here propose the alternate strategy of using this in our favor. By looking for new physics in an inclusive way, considering contributions of pair production of both the \(T^{(1)}\) and the \(X_{5/3}\) to the same channel, we will have increased sensitivity to the new physics. The dominant 3-body decays of the \(X_{5/3}\) are into \(W^{+}th\) and \(W^{+}tZ\). We can safely neglect the case where both \(X_{5/3}\) in the pair decay into three bodies, as the branching ratio is too small, so the dominant decay in the other leg will be into \(W^{+}t\). 
The same can be obtained from the pair production of a fourplet-like \(T^{(1)}\), as the dominant 3-body decay is \(W^{+}W^{-}t\) and the 2-body decays are dominantly \(th\) and \(tZ\) (again we can neglect the case of two 3-body decays). There is also a contribution from heavier top partners. Although the cross section of the \(T^{(2)}\) pair production is around 10% of the \(T^{(1)}\) pair production, the 2-body channels considered have lower branching ratios. In the case of the \(W^{+}W^{-}t\) channel, although the \(T^{(2)}\) branching ratio is a little higher, it does not get off-shell contributions as big as those of the \(T^{(1)}\). Therefore the actual contribution of the \(T^{(2)}\) resonance is around 2% of the events generated, and that of the \(T^{(3)}\) is even smaller. Hence, the results found here should also apply well to models where other top partners are not present. We thus start from \(t\bar{t}W^{+}W^{-}\) plus a \(h\) or \(Z\) and, to maximize the number of events, we choose the \(b\bar{b}\) decay for the Higgs or \(Z\) boson. Considering the decays of the tops, our signal becomes \(W^{+}W^{-}W^{+}W^{-}b\bar{b}b\bar{b}\) (\(4W4b\)). The presence of the four W bosons allows us to explore multi-leptonic channels, and we will focus on two channels: one containing two leptons (meaning electrons or muons) of the same charge (2SSL) and the other containing 3 leptons irrespective of charge (3L). We also consider the 2SSL+3L channel, which includes events selected for either of the previous channels. Figure 12: Invariant mass distributions of the process in Figure 10(a) for the benchmark point \(C_{9}\) (fourplet-like point). Histograms generated with MadAnalysis 5 [35]. Since it would be unrealistic to demand the full reconstruction of four \(W\) and four \(b\), we will also have new physics contributions coming from 2-body decays only (which can imply fewer \(b\) quarks after decays), and we include those too. The processes contributing to the signal are listed in table 4. A noteworthy feature is that the cross sections in the second column of table 4 cannot be consistently estimated by the product of the corresponding pair production cross sections and the branching ratios into 2- and 3-body channels listed in table 3, despite the fact that the VLQs are narrow. Such an estimate works in the case of 2-body decays only. In the case of 3-body decays the cross sections are around four times bigger than expected, and this is a direct consequence of the off-shell contributions discussed previously (the amount of "off-peak" events in figure 12 makes that clear). This is one of the advantages of this inclusive search. ### Backgrounds On the subject of backgrounds, the first important observation is that the \(4W4b\) signal is also generated by four-top production in the SM. The \(t\bar{t}t\bar{t}\) signal has been intensively searched for [36, 37, 38] and we can profit from the accumulated background knowledge, since we will have the same backgrounds present in those searches. Those backgrounds are shown in table 5, where one can see that the main irreducible backgrounds to the 2SSL and 3L channels come from the production of a \(t\bar{t}\) pair in association with a boson and \(tZbjj\). These are followed in importance by the production of a top pair plus two bosons, but we will neglect the case where those bosons are a \(Z\) or a \(h\), as in both cases the decay into leptons is small when compared with the \(W\), so only \(t\bar{t}W^{+}W^{-}\) is included. 
Of course \(t\bar{t}t\bar{t}\) itself is a background in our case. ### Proposed Search Strategy All the cross-sections in tables 4 and 5 were obtained by simulation with Madgraph5 (v2.9) [39] at LO. The event samples are then passed through showering and hadronization, performed by Pythia8 [40] (with jet matching done in the MLM matching scheme), and through Delphes3 (v3.5.0) [41] for a fast detector analysis. In Delphes the default card for the HL-LHC was used, and jet clustering is done by FastJet [42, 43] using the anti-\(k_{T}\) algorithm with \(R=0.4\). The \(b\)-jet reconstruction is done with an efficiency of \(0.75(1-\frac{p_{T}}{5000\text{ GeV}})\). Leptons are isolated if the \(p_{T}\) sum of all the particles inside a cone of fixed radius \(\mathcal{R}=0.3\) around the lepton, divided by the \(p_{T}\) of the lepton, is less than 0.1. We also require that all reconstructed particles (leptons and jets) have \(p_{T}>30\text{ GeV}\) and \(|\eta|<3\). The number of events obtained at \(\mathcal{L}=4\text{ ab}^{-1}\) for signal and backgrounds after this minimal set of requirements can be seen in the "No cuts" column of tables 6, 7 and 8 (respectively for the 2SSL, 3L, and 2SSL+3L signals). The columns in tables 6, 7 and 8 show the effect of progressive cuts on the number of events for each of the signal and background channels, as well as the total signal (S) over total background (B) and \(S/\sqrt{B}\). Our signal contains 4 b-jets in the final state but most backgrounds do not (the only exception being the four-top background), so the obvious first cut is to demand a minimum number \(N_{b}\) of b-tagged jets; we found that optimal results were obtained for \(N_{b}\geq 3\), which is applied in the three search channels. As the signal is generated from the decay of a pair of heavy particles we expect a lot of transverse momentum to be produced, but in the chosen channels this momentum will be distributed among jets and hard leptons. We thus define \(H_{T}^{lep}\) as the scalar \(p_{T}\) sum of all reconstructed jets and leptons in the event. The minimum \(H_{T}^{lep}\) value turns out to be the most relevant cut in terms of increasing the signal-to-background ratio, and has been optimized to different values for the three channels, as can be seen in tables 6, 7 and 8. Finally we impose a cut on the missing energy (\(\not\!\!E_{T}\)), as we also expect neutrinos to be produced from the leptonic decays of the \(W\) bosons, and because that will help ensure we are excluding contributions from rarer reducible backgrounds that can generate our signal through misidentification (\(WZ\), \(ZZ\), \(W^{\pm}W^{\mp}\)). 
The optimal values for the \(\not\!\!E_{T}\) cut are indicated on the tables (we \begin{table} \begin{tabular}{|c|c|c|c|} \hline Process & \(\sigma\) [fb] & decay mode & \(\sigma\times\) BR [ab] \\ \hline \(X_{5/3}X_{5/3}\to t\overline{t}W^{+}W^{-}\) & 4.87 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}\) & 208 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}\) & 133 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 106 \\ \hline \(X_{5/3}X_{5/3}\to W^{+}W^{-}t\overline{t}h\) & 1.12 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}\) & 27.6 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}\) & 17.7 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 1.41 \\ \hline \(TT\to W^{+}W^{-}t\overline{t}h\) & 1.01 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}\) & 24.9 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}\) & 15.9 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 1.27 \\ \hline \(TT\to t\overline{t}hh\) & 1.37 & (\(hh\to b\bar{b}W^{+}W^{-}\)) & \\ & & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}\) & 14.6 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}\) & 9.32 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 0.75 \\ \hline \(X_{5/3}X_{5/3}\to W^{+}W^{-}t\overline{t}Z\) & 1.1 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}Z_{b\bar{b}}\) & 7.15 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}Z_{b\bar{b}}\) & 4.58 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}Z_{b\bar{b}}\) & 3.66 \\ \hline \(TT\to W^{+}W^{-}t\overline{t}Z\) & 0.86 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}Z_{b\bar{b}}\) & 5.59 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}Z_{b\bar{b}}\) & 3.57 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}Z_{b\bar{b}}\) & 0.29 \\ \hline \(TT\to t\overline{t}Zh\) & & \(W_{l^{\pm}}W_{l^{\mp}}Z_{l}\) & 4.29 \\ \cline{3-4} & & \((h\to W^{-}W^{+})\) & \\ & & \(W_{l^{\pm}}W_{l^{\pm}}W_{\rm had}W_{\rm had}Z_{b\bar{b}}\) & 3.33 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\rm had}Z_{b\bar{b}}\) & 2.13 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}Z_{b\bar{b}}\) & 0.17 \\ \hline \(T\bar{T}\to t\overline{t}ZZ\) & 1.03 & \(W_{l^{\pm}}W_{l^{\mp}}Z_{l}Z_{b\bar{b}}\) & 0.98 \\ \hline \end{tabular} \end{table} Table 4: Signal processes for the point \(C_{9}\) in the 2SSL search at LO and \(\sqrt{s}=14\) TeV. Here T stands for \(T^{(1)}\), \(T^{(2)}\) or \(T^{(3)}\). The second column indicates cross section before decays, the third indicates the decay mode of the vector bosons (with \(V_{l}\), \(V_{\rm had}\) and \(V_{b\bar{b}}\) meaning decays into leptons, hadrons and \(b\bar{b}\) respectively) and the forth is the cross section after the indicated decay (with \(t\to bW\) and \(h\to b\bar{b}\) where not otherwise indicated). The cross sections were computed at LO and \(\sqrt{s}=14\) TeV. give two possible values for the 2SSL+3L channel). ## 4 Conclusions and Outlook Vector-like top partners are ubiquitous in models attempting to address the naturalness puzzle of the Standard Model. Using as a concrete example of such an extension, the MCHM\({}_{5}\), we have shown that these particles can have sizeable three-body decays, and that taking these channels into account can improve significantly the exclusion limits obtained by previous analyses. 
Specifically, for the pair production of the lightest top partner from the fourplet state, with one of the legs decaying to \(W^{+}W^{-}t\), we estimate that the present exclusion limit from CMS in the same-sign dilepton channel [26], which assumes the width is saturated by two-body decays, would increase from 1.5 TeV up to 1.6 TeV, as shown in figure 6. This strongly motivates a more inclusive search for these states to be performed by the experiments. Although we focused here on the MCHM\({}_{5}\), it must be emphasized that we expect these features to be generic in any model containing a vector-like doublet. Most studies so far have considered a SM-like doublet (top and bottom partner), with the supposedly conservative assumption of a two-body saturated width. However, we see that, on the contrary, a doublet naturally leads to near degenerate states and sizeable three-body decays. Thus, simplified models built on this assumption are in fact not capturing model independent \begin{table} \begin{tabular}{|c|c|c|c|} \hline Backgrounds & \(\sigma\) [fb] & decay mode & \(\sigma\times\text{BR [fb]}\) \\ \hline \(t\overline{t}W^{\pm}\) + jets & 574.5 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\text{had}}\) & 18.14 \\ & & \(W_{l^{\pm}}W_{l^{\pm}}W_{l^{\mp}}\) & 5.81 \\ \hline \(t\overline{t}Z\) + jets & 743.1 & \(W_{l^{\pm}}W_{\text{had}}Z_{l}\) & 12.73 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}Z_{l}\) & 2.04 \\ \hline \(t\overline{t}h\) & 479.9 & \((h\to W^{-}W^{+})\) & \\ & & \(W_{l^{\pm}}W_{\text{had}}W_{l^{\pm}}W_{\text{had}}\) & 4.42 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\text{had}}\) & 2.82 \\ & & \(W_{l^{\pm}}W_{\text{had}}Z_{l}Z_{\text{had}}\) & 0.54 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 0.22 \\ \hline \(tZbjj\) & 317 & \(W_{l^{\pm}}Z_{l}\) & 4.6 \\ \hline \(t\overline{t}t\overline{t}\) & 11.8 & \(W_{l^{\pm}}W_{l^{\pm}}W_{\text{had}}W_{\text{had}}\) & 0.51 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\text{had}}\) & 0.32 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 0.03 \\ \hline \(t\overline{t}W^{+}W^{-}\) & 9.88 & \(W_{l^{\pm}}W_{\text{had}}W_{l^{\pm}}W_{\text{had}}\) & 0.42 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{\text{had}}\) & 0.27 \\ & & \(W_{l^{\pm}}W_{l^{\mp}}W_{l^{\pm}}W_{l^{\mp}}\) & 0.03 \\ \hline \end{tabular} \end{table} Table 5: Most important background processes contributing to the 2SSL, 3L or 2SSL+3L channels, following the same conventions of table 4. Here “+jets” refers to 0, 1 or 2 jets generated at the hadronization stage of the simulation and \(j\) stands for a hard light jet, generated at parton level simulation. The cross sections were computed at LO and \(\sqrt{s}=14\) TeV. physics, but instead imposing constraints on their possible UV completions, needed to suppress the three-body channel. Furthermore, the narrow spectrum leads to large contributions to the production cross section in the three-body decay channels, coming from one of the states being slightly off-shell. The effect can make the cross-section as large as four times the naive estimate from cross-section times branching ratio for narrow states (see figure 12). This feature makes it difficult to search for one of these states in isolation, hence an inclusive search can be more profitable. We propose such a search focusing on multileptonic channels (2SSL, 3L and their combination) following the same cut flow used for VLQ searches in these channels. Our results are summarized in tables 6, 7 and 8. 
With the benchmark point we explored, which predicts a fairly light resonance at 1.3 TeV, the HL-LHC could reach a \(S/\sqrt{B}\) of 15. From this result, and scaling the cross section as \(M_{T,X}^{-4}\), we can extrapolate to higher masses and the reach at the HL-LHC would be 1.6 TeV at three sigma, and 1.5 at five sigma. One must notice that this is obtained in a simplified cut-and-count analysis with just two multi-leptonic channels, so a complete analysis will certainly enhance the discovery reach or exclusion potential. Therefore, it is of the utmost importance that the forthcoming analysis implement 3-body decays and an inclusive search for these resonances. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{3}{|c|}{Number of events - 2SSL (\(\mathcal{L}=4\) ab\({}^{-1}\))} \\ \cline{3-6} \multicolumn{2}{|c|}{Process} & \multicolumn{1}{|c|}{No cuts} & \(N_{b}\geq 3\) & \(N_{b}\geq 3\), & \(N_{b}\geq 3\), \\ & & & & \(H_{T}^{\rm lep}>1.8\) TeV & \(H_{T}^{\rm lep}>1.8\) TeV \\ & & & & & \(\not\!\!E_{T}>150\) GeV \\ \hline \multirow{6}{*}{**Luminosity**} & \multirow{2}{*}{Process} & \(X\bar{X}\to t\overline{t}W^{+}W^{-}\) & 356 & 40 & 20 & 15 \\ & & \(T\bar{T}\to t\overline{t}hh\) & 39 & 14 & 5 & 3 \\ & & \(T\bar{T}\to t\overline{t}Zh\) & 17 & 7 & 4 & 2 \\ & & \(T\bar{T}\to t\overline{t}ZZ\) & 12 & 2 & 1 & 1 \\ \hline \multirow{6}{*}{**Luminosity**} & \multirow{2}{*}{Process} & \(X\bar{X}\to W^{+}W^{-}t\overline{t}h\) & 48 & 24 & 15 & 11 \\ & & \(T\bar{T}\to W^{+}W^{-}t\overline{t}h\) & 42 & 21 & 13 & 9 \\ & & \(T\bar{T}\to W^{+}W^{-}t\overline{t}Z\) & 15 & 6 & 4 & 3 \\ & & \(X\bar{X}\to W^{+}W^{-}t\overline{t}Z\) & 12 & 5 & 3 & 2 \\ \hline \multirow{6}{*}{**Luminosity**} & \multirow{2}{*}{Process} & \(t\overline{t}W^{\pm}\) & 20536 & 691 & 19 & 9 \\ & & \(t\overline{t}Z\) & 7062 & 237 & 4 & 2 \\ & & \(t\overline{t}h\) & 3893 & 132 & 1 & 0 \\ & & \(t\overline{t}t\overline{t}\) & 658 & 288 & 6 & 3 \\ & & \(t\overline{t}W^{+}W^{-}\) & 597 & 30 & 1 & 0 \\ & & tZ bjj & 761 & 18 & 0 & 0 \\ \hline \multicolumn{2}{|c|}{} & \(S/B\) & & 0.1 & 2.2 & 3.2 \\ \hline \multicolumn{2}{|c|}{} & \(S/\sqrt{B}\) & & 3.2 & 11.9 & 12.1 \\ \hline \end{tabular} \end{table} Table 6: Number of events surviving the cuts implementation in the 2SSL search channel. ## 5 Acknowledgements The authors thank Geum Bong Yu for useful discussions and references. This work was supported by the Sao Paulo Research Foundation (FAPESP) under grants #2018/25225-9, #2021/14335-0. This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001.
2308.04674
Addressing Racial Bias in Facial Emotion Recognition
Fairness in deep learning models trained with high-dimensional inputs and subjective labels remains a complex and understudied area. Facial emotion recognition, a domain where datasets are often racially imbalanced, can lead to models that yield disparate outcomes across racial groups. This study focuses on analyzing racial bias by sub-sampling training sets with varied racial distributions and assessing test performance across these simulations. Our findings indicate that smaller datasets with posed faces improve on both fairness and performance metrics as the simulations approach racial balance. Notably, the F1-score increases by $27.2\%$ points, and demographic parity increases by $15.7\%$ points on average across the simulations. However, in larger datasets with greater facial variation, fairness metrics generally remain constant, suggesting that racial balance by itself is insufficient to achieve parity in test performance across different racial groups.
Alex Fan, Xingshuo Xiao, Peter Washington
2023-08-09T03:03:35Z
http://arxiv.org/abs/2308.04674v1
# Addressing Racial Bias in Facial Emotion Recognition ###### Abstract Fairness in deep learning models trained with high-dimensional inputs and subjective labels remains a complex and understudied area. Facial emotion recognition, a domain where datasets are often racially imbalanced, can lead to models that yield disparate outcomes across racial groups. This study focuses on analyzing racial bias by subsampling training sets with varied racial distributions and assessing test performance across these simulations. Our findings indicate that smaller datasets with posed faces improve on both fairness and performance metrics as the simulations approach racial balance. Notably, the F1-score increases by \(27.2\%\) points, and demographic parity increases by \(15.7\%\) points on average across the simulations. However, in larger datasets with greater facial variation, fairness metrics generally remain constant, suggesting that racial balance by itself is insufficient to achieve parity in test performance across different racial groups. Machine Learning, Facial Emotion Recognition, Facial Emotion Recognition, Facial Emotion Recognition ## 1 Introduction Emotion recognition, commonly referred to as facial expression recognition (FER), encompasses the identification and analysis of facial expressions displayed by individuals in images or videos. This complex procedure consists of three key stages: face detection, feature extraction, and emotion classification. (Ko, 2018). FER finds extensive utility across various domains including human-computer interaction (HCI) (Picard, 1999), media analytics (Zhao et al., 2019), robotics (Tao and Tan, 2005), and health informatics (Voss et al., 2019; Washington et al., 2022). Historically, automatic emotion recognition predominantly relied on the extraction of domain-specific features such as facial action units (Hamm et al., 2011). However, with the rapid progression of machine learning (ML) and deep learning (DL) techniques, deep neural networks (DNNs) have emerged as a prominent approach for developing facial emotion recognition models. Such models necessitate expansive datasets to ensure robustness and accuracy in their predictions. In recent years, researchers have proposed DL models which leverage more expansive datasets for training (Li and Deng, 2018). Despite the impressive achievements of deep learning methods in FER, a significant challenge arises from the presence of racial bias in such models. This issue, which is well documented in existing literature (Chen and Joo, 2021; Domnich and Anbarjafari, 2021; Sham et al., 2023; Xu et al., 2020), necessitates immediate attention to mitigate discriminatory outcomes and to provide equitable opportunities for individuals of diverse ethnicities and skin colors. Addressing and rectifying the biases within DNNs is of paramount importance for real-world translation of such models. Biases within DNNs can primarily be attributed to two fundamental sources: the training data and the algorithms themselves (Mehrabi et al., 2022). Given that models learn from input data, any biases present within the underlying datasets are inherently ingrained within the learning process of the algorithms. Furthermore, the design of feature extraction processes for these models may introduce biases that disproportionately affect different racial groups. An illustrative example is the consideration of skin color as a learned feature extracted during the deep learning process, which can ultimately lead to unfair predictions. 
To address the issue of racial bias in FER datasets, we conduct a simulation study on AffectNet and CAFE datasets (Mollahosseini et al., 2017; LoBue and Thrasher, 2015). In each simulation, we select a specific race as the simulated race, and ensure other races equal representations. We train the FER model using sub-sampled data with varying proportions of the simulated race, measuring the accuracy, F1-scores, and fairness metrics. We find that the racial balance of the training set has some influence on the test race-specific F1-score, but mitigating balance alone is insufficient to address other types of bias such as annotator bias. ## 2 Related Works ### Deep-learning-based Emotion Recognition Deep learning models serve as the foundation for each stage of FER (Ko, 2018). Training DNNs for FER requires the utilization of diverse datasets, encompassing varying numbers of labels and data types (Mollahosseini et al., 2017; LoBue Thrasher, 2015; Lucey et al., 2010; Lyons et al., 1998; Li et al., 2017; Goodfellow et al., 2013). Li and Deng provided an overview of popular datasets designed specifically for deep emotion recognition (2018). In the realm of deep-learning-based FER approaches, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are frequently employed. Khorrami et al. demonstrated that CNNs excel at accurately extracting facial action units (FAUs), thereby yielding promising classification performance on the CK+ dataset (2015). Lopes et al. proposed a combination of CNNs and image pre-processing techniques, such as cropping and normalization, to reduce the number of convolutional layers and alleviate the need for extensive training data. This approach resulted in improved overall accuracy and computational efficiency on the CK+, JAFFE, and BU3DFE datasets (2017). Agrawal and Mittal introduced a novel CNN model that investigated the influence of kernel size and the number of filters on the final classification accuracy, leading to further improvements in performance (2020). Recent studies have also explored the integration of generative adversarial networks (GANs) within CNNs for data augmentation and training purposes (Peng et al., 2020; Porcu et al., 2020). Incorporating CNNs with RNNs, Zhu et al. proposed a novel FER approach that integrates features learned from each layer of a CNN within a bidirectional RNN architecture (2017). RNNs have also proven effective in capturing dynamic facial actions within multimodal FER, as demonstrated by Majumder et al. (Majumder et al., 2019; Hasani & Mahoor, 2017). Despite achieving high overall accuracy, the performance across different racial groups has yet to be thoroughly and systematically studied. While multicultural FER models have demonstrated their effectiveness in improving accuracy, their impact on fairness has yet to be explicitly addressed (Sohail et al., 2022; Ali et al., 2016). ### Fairness in Machine Learning Numerous research endeavors have been dedicated to addressing issues of unfairness within ML systems (Du et al., 2020; Mehrabi et al., 2022; Oneto & Chiappa, 2020; Mehrabi et al., 2022). The underlying causes of biases in ML have been explored in several publications (Chouldechova & Roth, 2018; Martinez-Plumed et al., 2019). Evaluation metrics, shaped by social contexts, are employed to gauge the fairness of these models (Chouldechova & Roth, 2018; Castelnovo et al., 2021). 
ML fairness improvement methods are generally categorized into pre-processing, in-processing, and post-processing techniques, contingent upon the stage at which the fairness correction method is applied (Du et al., 2020; Mehrabi et al., 2022; Oneto & Chiappa, 2020; Mehrabi et al., 2022). Additionally, AI fairness toolkits have been developed, harnessing these methods (Bird et al., 2020; Bellamy et al., 2018; Wexler et al., 2020; Saleiro et al., 2018). Several prior studies have delved into the examination and alleviation of racial bias in FER. Raina et al. utilized artificial facial images and observed racial bias in FER models (2022). Sham et al. conducted an investigation into racial bias in popular state-of-the-art FER methods and revealed that the presence of uneven or insufficient representation within the training data leads to biased performance outcomes (2023). Additionally, biases in expression labeling within datasets, influenced by the impact of races on emotion perceptions, was identified as contributors to unfairness (Rhue, 2018; Chen & Joo, 2021). Conversely, Chen and Joo's study did not report any systematic labeling biases for races, attributing the absence to imbalanced racial representations in the dataset (2021). Although some methods have demonstrated effectiveness in correcting FER racial bias (Xu et al., 2020), the results highly depend on the datasets and models. Due to the involvement of highly subjective labels and high-dimensional inputs in emotion recognition, the field has yet to address the issue of fairness comprehensively in this domain and similar areas with complex and heterogeneous data streams. ## 3 Methods ### Datasets We employ two datasets to investigate racial bias. The first dataset is the Child Affective Facial Expression (CAFE) dataset, a collection of images featuring children posing specific emotions (LoBue & Thrasher, 2015). The second dataset is AffectNet, a widely recognized large-scale dataset for general facial emotion recognition (Mollahosseini et al., 2017). To align the datasets, we filter the examples within AffectNet to include only those with emotion labels matching those in the CAFE dataset, specifically neutral, sadness, happiness, surprise, anger, disgust, and fear. Additionally, we exclude grayscale images, which also contributes to more accurate race estimates. We calculate the per-pixel squared error (summed across the three channels) using the mean pixel value of each image. Images with an average per-pixel squared error below a threshold are considered grayscale and are removed from the training set. Consequently, the final training size for AffectNet is N = 259,280. A similar procedure was followed for the validation and test sets, resulting in sizes of N = 1,700 and 1,484, respectively. To separate the faces in CAFE from their white background, we utilize OpenCV bounding boxes, following a methodology that AffectNet uses during its data collection process (Mollahosseini et al., 2017; Bradski, 2000). We exclude images in which a face could not be adequately bounded, resulting in 1,178 usable images. Because the CAFE dataset involves participants posing for multiple emotions, we opt to split the data at the participant level when generating the training, validation, and test sets, resulting in sizes of N = 713, 227, and 222, respectively. ### Race Estimates We require race labels to analyze the bias. 
CAFE provides participants' self-reported ground truth race labels: European-American, African-American, (East) Asian, Latino, and South Asian. However, for the AffectNet dataset, race information is not available; we approach this problem by estimating race labels. We utilize models trained on labeled race datasets, specifically FairFace, which exhibits greater racial balance compared to similar datasets, and models trained on FairFace demonstrate improved performance on non-white faces relative to other models and datasets (Karkkainen and Joo, 2021). Our model uses the paper's original weights to predict the race labels, and the counts and proportions of the two datasets are presented in Tables 1 and 2. As expected, European-American faces make up the majority of the training set distributions for both CAFE and AffectNet, comprising 40.4% and 67.3% of their respective datasets. The FairFace-based model categorizes the AffectNet faces into seven categories: European-American, African-American, (East) Asian, Latino, South Asian, Middle Eastern, and Southeast Asian. It is possible to exclude the Middle Eastern and Southeast Asian groups, from the AffectNet experiments to align the racial categories with those of CAFE. However, we choose to retain all races in the experiments assuming that there could be latent information from these additional race categories that affects the training process. ### Simulating Racial Composition To investigate the impact of racial representation in the training set on test performance, we select a specific race (henceforth referred to as the simulated race) and vary their proportion. The sampling process ensures an equal representation for non-simulated races by sampling \(N\) examples for each race. Then we set a ratio (\(R\)) of the simulated race to a non-simulated race and sample \(N*R\) examples of the simulated race. All sampling is done without replacement. We analyze the effects of under-representation and over-representation of the simulated race by varying the ratio. In the simulations, we fine-tune a ResNet-50 model on the sub-sampled training set. The model's performance is evaluated on the validation set throughout the epochs, and the weights yielding the highest accuracy on the validation set are used to evaluate on the test set. While accuracy serves as one evaluation metric, we also consider the race-specific F1-score, which is calculated after filtering to the simulated race. Additionally, we explore fairness metrics and extend their applicability to multi-class classification problems. Demographic parity and equality of odds are two fairness principles that are commonly used in the algorithmic fairness literature (Barocas et al., 2019). Demographic parity ensures that the positive prediction rate remains consistent across sensitive attributes. One approach to quantify this principle is the use of the ratio of the smallest positive prediction rate to the largest. A value of one suggests that the model achieves demographic parity (Bird et al., 2020). To implement this metric, we transform the multi-class problem into multiple one-versus-rest sub-problems. 
For each emotion, we compute the demographic parity ratio across \begin{table} \begin{tabular}{c c c} \hline \hline Race & Count & Proportion \\ \hline European-American & 281 & 0.404 \\ African-American & 142 & 0.205 \\ East Asian & 103 & 0.148 \\ Latino & 86 & 0.124 \\ South Asian & 82 & 0.118 \\ \hline \hline \end{tabular} \end{table} Table 1: The racial distribution in CAFE’s training set. Participants self-report their race prior to data collection. A plurality of participants identify as European-American, indicating the potential for downstream racial bias in the model. \begin{table} \begin{tabular}{c c c} \hline \hline Race & Count & Proportion \\ \hline European-American & 174,382 & 0.673 \\ African-American & 19,131 & 0.074 \\ East Asian & 15,833 & 0.061 \\ Latino & 23,488 & 0.091 \\ Middle Eastern & 18,120 & 0.070 \\ South Asian & 4,786 & 0.018 \\ Southeast Asian & 3,540 & 0.014 \\ \hline \hline \end{tabular} \end{table} Table 2: The racial distribution in AffectNet’s training set. AffectNet does not provide race information, so this distribution is estimated using a race model built from FairFace. European-American faces account for a much larger proportion of the dataset relative to CAFE’s composition. the different races and then average these ratios to obtain an overall measure of demographic parity. Equality of odds requires parity in both true-positive and false-positive rates across sensitive attributes. A similar procedure to that used for the demographic parity ratio can be employed to derive two separate ratios: the true-positive parity ratio and the false-positive parity ratio (Bird et al., 2020). The equalized odds ratio is determined by selecting the smaller of the two ratios. This indicates that equality of odds is achieved when both parity ratios closely approximate one. ## 4 Experiments We conduct three sets of simulations in our study. The first simulation focuses on the CAFE dataset, which we sample at the participant level. Given the limited size of the training set, we set the number of participants \(N\) to be \(5\) and the ratio \(R\) ranges from 0 to 2.0 with increments of 0.2. This approach ensures that each simulation comprises a whole number of participants, and the sizes of each non-simulated races are approximately equal, with 40-50 observations per racial group. The second simulation involves the AffectNet dataset with \(N\) set to 50 observations, and \(R\) follows the same range and increments as in the previous simulation. This simulation tests the consistency of results between AffectNet and CAFE in the context of a small dataset size regime. In the third simulation, we use the AffectNet dataset with \(N\) set to 3500 observations, allowing for a larger training set. This simulation investigates whether the trends observed in the first and second simulations generalize to larger data regimes, which is relevant for the typical application of facial emotion recognition. Throughout the experiments, we maintain consistent training hyperparameters. We use a fixed learning rate of \(1e-4\) with an Adam optimizer (with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\)) and L2-weight decay on the model parameters. Cross-entropy loss is backpropagated at each batch step, and each simulation undergoes training for 5 epochs. To account for the emotion label imbalance within the sub-sampled dataset, we apply a weight to the loss function based on the number of ground truth labels. These experiments were conducted using an NVIDIA K80 GPU unit. 
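To make the multi-class extension of the two fairness criteria concrete, the sketch below implements them in the one-versus-rest way described in Section 3.3. The function names, array-based interface, and the choice to average the equalized-odds ratio over emotions (the text only says a "similar procedure" is used) are our own illustrative assumptions, and the labels are random placeholders; the study itself reports using the definitions popularized by Bird et al. (2020).

```python
import numpy as np

def demographic_parity_ratio(y_pred, races, emotions):
    """Per-emotion min/max positive-prediction rate across races, averaged over emotions."""
    ratios = []
    for e in emotions:
        rates = [np.mean(y_pred[races == r] == e) for r in np.unique(races)]
        ratios.append(min(rates) / max(rates) if max(rates) > 0 else 0.0)
    return float(np.mean(ratios))

def equalized_odds_ratio(y_true, y_pred, races, emotions):
    """Smaller of the true-positive and false-positive parity ratios, averaged over emotions."""
    ratios = []
    for e in emotions:
        tpr, fpr = [], []
        for r in np.unique(races):
            m = races == r
            pos, neg = (y_true[m] == e), (y_true[m] != e)
            tpr.append(np.mean(y_pred[m][pos] == e) if pos.any() else 0.0)
            fpr.append(np.mean(y_pred[m][neg] == e) if neg.any() else 0.0)
        tpr_ratio = min(tpr) / max(tpr) if max(tpr) > 0 else 0.0
        fpr_ratio = min(fpr) / max(fpr) if max(fpr) > 0 else 0.0
        ratios.append(min(tpr_ratio, fpr_ratio))
    return float(np.mean(ratios))

# Toy usage with random placeholder labels
rng = np.random.default_rng(0)
emotions = ["neutral", "happy", "sad", "fear"]
y_true = rng.choice(emotions, size=500)
y_pred = rng.choice(emotions, size=500)
races = rng.choice(["EA", "AA", "Asian", "Latino", "SA"], size=500)
print(demographic_parity_ratio(y_pred, races, emotions))
print(equalized_odds_ratio(y_true, y_pred, races, emotions))
```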
### Results

**CAFE Simulation** We display metrics from various levels of \(R\) in four race simulations: African-American, East Asian, European-American, and Latino. We calculate weighted F1-score and accuracy at a race level by filtering the predictions to the simulated race. We calculate demographic parity ratio and equalized odds ratio without any filters. Our hypothesis anticipates that race-specific F1-score and fairness metrics would improve as the dataset becomes more racially balanced. However, as the simulations over-sample and the simulated race becomes over-represented, we expect fairness metrics to plateau or decline, as the model's fairness performance for the non-simulated races is likely to deteriorate. The simulations conducted on the CAFE dataset align with expectations on some metrics. Figure 2 shows that the F1-score and the demographic parity ratio increase (\(+27.2\%\) and \(+15.7\%\) points respectively on average) as the dataset becomes balanced and stabilize when the dataset over-samples the simulated race. On the other hand, the equalized odds ratio exhibits greater inconsistency, with only the Latino simulations displaying a clear upward trend, while the other races exhibit random or downward trends.

Figure 1: Example of the procedure for simulating East Asian representation in a dataset. This represents a contrived case where East Asian is over-represented in the training set. In actual simulations, \(N\) is much larger, with our largest experiment using \(N=3500\).

Figure 2: CAFE racial composition simulations with all test metrics. Each cell shows a varied simulated race with all non-simulated races held constant. CAFE is sampled at the participant level (\(N=5\)). Every race shows improvement in test performance for the race-specific F1-score and demographic parity ratio when the simulations move towards racial balance.

Additionally, in Figure 3 we present the disaggregated F1-scores for each race and label before their aggregation. The disaggregated results reveal that a significant portion of the F1-score improvement stems from emotions such as neutral, sad, and fear. Interpreting changes in East Asian F1-scores proves challenging, possibly due to the limited presence of Asian participants in the test set. Moreover, surprise and disgust appear to be more challenging emotions to predict, which could explain the seemingly random or marginal trends observed.

Figure 3: CAFE racial composition simulations with unaggregated F1-scores show racial balance is correlated with increases in most emotion-specific F1-scores. The exceptions are ‘surprise’ and ‘disgust’.

**AffectNet Small Simulation** The performance on the small sub-sample of AffectNet, achieving 15.2% race-specific F1-score and 0.286 demographic parity ratio on average, is noticeably inferior to the CAFE simulations. The limited size of the training set and the substantially greater variation in emotion distribution from the "wild" images in AffectNet likely contribute to this discrepancy. Despite the potential for model overfitting, the overall trend shown in Figure 4 indicates that the model's performance does not significantly change as the dataset becomes more racially balanced, as evidenced by the nearly random trends observed in the F1-score and fairness metrics.

Figure 4: AffectNet racial composition simulations using small subsample sizes (\(N=50\)) with all test metrics. Race-specific F1-score and demographic parity ratio have poor performance likely due to overfitting, and the lack of an observable trend in any of the simulations suggests minimal correlation between racial balance and test performance.

**AffectNet Large Simulation** The performance on the larger AffectNet simulation, although overfitting less, does not exhibit increases in F1-score and fairness metrics as the dataset becomes racially balanced. Figure 5 shows the race-specific F1-score fluctuating around 55% on average for all race simulations with no visible trends. Both demographic parity ratio and equalized odds ratio also stay roughly constant or even trend downwards. Furthermore, the unaggregated F1-scores demonstrate minimal variation, with the exception of anger, which experiences a temporary improvement when the Asian simulations are over-sampled.

## 5 Discussion

The simulations conducted on CAFE indicate that racial balance of the training set has some influence on the test race-specific F1-scores. However, only certain fairness metric trends align with our expectations. Demographic parity ratio increases as the training set becomes balanced and then plateaus, but equalized odds ratio, a stricter metric, shows random trends through different racial distribution simulations. The AffectNet simulations exhibit mostly random trends across scenarios. Nevertheless, even for CAFE, biases persist within the models since the fairness metrics fail to approach a value of 1. This suggests that there may be sources of bias that the simulations are unable to capture. One possible explanation could be biased race estimations obtained from the FairFace model. Upon examining a small sample of faces categorized by this model, we observe that White, Black, Latino, and East Asian faces are estimated accurately. Other race and ethnicity categories, however, appear to be more prone to misclassification, particularly Middle Eastern faces based on lighter skin tones and Indian faces based on darker skin tones. This bias could potentially impact the training process even though the simulations focus on races estimated with reasonable accuracy. However, since the race labels for AffectNet are not available, it is challenging to ascertain the extent of estimator bias. As part of future extensions, it may be worthwhile to exclude the less-consistent racial categories to explore whether the estimator bias contributes to the lack of trends in the AffectNet simulation. Another source of bias that cannot be addressed through the simulation of racial compositions is annotation bias. There is evidence in the psychology literature suggesting that individuals are less accurate in determining facial expressions for races different from their own, resulting in potentially disproportionate labeling biases (Zhongqing Jiang et al., 2023; Chen and Joo, 2021). Annotation bias could be particularly problematic for AffectNet since its annotations were derived from only 12 labelers, with most labels being annotated by a single individual1 (Mollahosseini et al., 2017). It would be possible to quantify and analyze this bias by collecting data on each observation, the race of the labelers, and the annotation process. CAFE, in fact, provides information about the aggregate race distribution of the labelers (LoBue and Thrasher, 2015), and this information has been incorporated into the training process in prior work (Washington et al., 2021).
Footnote 1: The authors note that two annotators labeled a subset of the data and there is agreement between the two annotators ranging from 50.8% on neutral labels to 79.6% on happy labels.

Given the persistence of these non-compositional biases within AffectNet, traditional bias mitigation techniques like loss re-weighting or fairness regularization may not yield strong results. This underscores the need for the fairness community to explore alternative methods of bias mitigation, particularly in settings involving high-dimensional image inputs and subjective labels.

Figure 5: AffectNet racial composition simulations using larger subsample sizes (\(N=3500\)) with all test metrics. Although the larger training set resolves overfitting to a degree, the simulations still lack a visible trend between racial balance and test performance.

Figure 6: The unaggregated F1-scores for the AffectNet simulations show similar constant trends regardless of racial composition. ‘Angry’ has a moderate increase in the East Asian simulations, but only when they are over-sampled.

## 6 Conclusion

There is an ongoing need for addressing fairness in facial emotion recognition. By simulating different racial distributions within the training sets, we demonstrate the impact of compositional racial imbalance on test performance and disparities in the CAFE dataset. Moreover, we extend this analysis to AffectNet, which comprises non-posed, "in the wild" expressions. The results reveal the persistent presence of bias across simulations of racial composition in the training set, with no improvement observed in the performance of race-specific F1-scores even when enforcing racial balance within the simulation. To further advance research in this domain, we propose exploring additional avenues of inquiry, such as re-simulating the AffectNet experiments while excluding racial groups that are inaccurately estimated during the pre-processing stage.
2310.19309
A simple quantum algorithm to efficiently prepare sparse states
State preparation is a fundamental routine in quantum computation, for which many algorithms have been proposed. Among them, perhaps the simplest one is the Grover-Rudolph algorithm. In this paper, we analyse the performance of this algorithm when the state to prepare is sparse. We show that the gate complexity is linear in the number of non-zero amplitudes in the state and quadratic in the number of qubits. We then introduce a simple modification of the algorithm, which makes the dependence on the number of qubits also linear. This is competitive with the best known algorithms for sparse state preparation.
Debora Ramacciotti, Andreea-Iulia Lefterovici, Antonio F. Rotundo
2023-10-30T07:05:15Z
http://arxiv.org/abs/2310.19309v1
# A simple quantum algorithm to efficiently prepare sparse states ###### Abstract State preparation is a fundamental routine in quantum computation, for which many algorithms have been proposed. Among them, perhaps the simplest one is the Grover-Rudolph algorithm. In this paper, we analyse the performance of this algorithm when the state to prepare is sparse. We show that the gate complexity is linear in the number of non-zero amplitudes in the state and quadratic in the number of qubits. We then introduce a simple modification of the algorithm, which makes the dependence on the number of qubits also linear. This is competitive with the best known algorithms for sparse state preparation. ###### Contents * 1 Introduction * 2 Grover-Rudolph algorithm * 3 Grover-Rudolph for sparse vectors * 4 Permutation Grover-Rudolph * 5 Conclusions * A Optimizing the gates * B Implementing permutation matrices ## 1 Introduction Given a classical vector \(\psi\in\mathbb{C}^{N}\), the goal of state preparation is to build a unitary \(U_{\psi}\), such that \(U_{\psi}\left|0\right\rangle=\left|\psi\right\rangle,\) where \(\left|\psi\right\rangle\) is a quantum state whose amplitudes are given by \(\psi\). This is the first step of many algorithms, such as the quantum simulation of physical systems [1, 2], quantum machine learning [3], and quantum linear solvers [4, 5]. For this reason, state preparation is a subroutine of fundamental importance in quantum computing, and it is an object of ongoing research. Early state preparation algorithms are described in [1, 6, 7]. The basic idea of these algorithms is the same: it was independently introduced in [6] and [7], and had already been present in earlier works such as [1, 8]. Following what is now the standard notation, we collectively refer to these algorithms as Grover-Rudolph. In recent years, several works have tried to design new state-preparation algorithms with better worst-case asymptotic scaling, e.g. [9, 10, 11], and have uncovered an interesting trade-off between space and time complexity in state preparation. In this paper, we take a more practical point of view. We focus on _sparse_ vectors, i.e. vectors with only a few nonzero elements. This is a special class of vectors, which often appears in practical applications, such as quantum linear solvers [4, 5]. Recent works that have considered state preparation algorithms tailored for sparse vectors are [12, 13, 14, 15]. These algorithms have a complexity linear in both the sparsity of the vector and the number of qubits. The number of gates required to prepare a generic state with the Grover-Rudolph algorithm scales exponentially in the number of qubits, so this algorithm is sometimes overlooked as an option to prepare sparse states. Our first contribution is to explicitly show that Grover-Rudolph is able to prepare sparse vectors with a number of gates linear in the sparsity and quadratic in the number of qubits. This is a simple result, but, to the knowledge of the authors, it is not clearly stated and proved in the literature. We then introduce a small modification of the Grover-Rudolph algorithm, which brings down the complexity of preparing sparse vectors to linear in both the sparsity and the number of qubits. This shows that Grover-Rudolph is a competitive algorithm for preparing sparse states. The rest of the paper is organized as follows. In Sec. 2, we summarize the Grover-Rudolph algorithm. In Sec. 
3, we specialize to sparse states and analyze the number of gates the Grover-Rudolph algorithm requires for their preparation. Finally, in Sec. 4, we introduce a simple variation of Grover-Rudolph algorithm, which we call Permutation Grover-Rudolph, and show that it has the same complexity as more recent algorithms designed for sparse vectors [12, 13, 14, 15]. ## 2 Grover-Rudolph algorithm In this section, we describe the Grover-Rudolph algorithm [6] for state preparation.1 Footnote 1: One can consider several versions of the Grover-Rudolph algorithm. The one we present here is similar to the one of [9], except for the use of phase gates in place of \(R_{Z}\) rotations, and for skipping an optimization step. See the main text for further explanations. Let \(\psi\in\mathbb{C}^{N}\) be a classical vector; we want to implement a unitary \(U_{\psi}\) such that \(U_{\psi}\left|0\right>=\left|\psi\right>,\) where \(\left|\psi\right>\) is a quantum state with amplitudes equal to \(\psi\). For simplicity, we assume that \(N=2^{n}\), so that we can encode the vector \(\psi\) in an \(n\)-qubit state.2 More precisely, we want that Footnote 2: If this is not the case, one can pad \(\psi\) with zeros until this condition is met. \[U_{\psi}\left|0\right>=\frac{e^{i\theta}}{\left\|\psi\right\|}\sum_{i_{1}\ldots i_{n}}\psi_{i_{1}\ldots i_{n}}\left|i_{1}\ldots i_{n}\right>, \tag{1}\] where the indices \(i_{k}\) take values in \(\{0,1\}\), \(\left\|\psi\right\|\) is the 2-norm of the vector, and \(\theta\in[0,2\pi)\) is some irrelevant global phase. The strategy of the Grover-Rudolph algorithm is to construct a series of coarse-grained versions of \(\psi\) and prepare them recursively using controlled rotations. More precisely, let \(\psi^{(k)}\), for \(k=1,\ldots,n-1\), be the following coarse-grained states with components \[\psi^{(k)}_{i_{1}\ldots i_{k}}=e^{i\arg(\psi^{(k+1)}_{i_{1}\ldots i_{k}0})}\sqrt{|\psi^{(k+1)}_{i_{1}\ldots i_{k}0}|^{2}+|\psi^{(k+1)}_{i_{1}\ldots i_{k}1}|^{2}}\,. \tag{2}\] The superscript \((k)\) keeps track of the number of qubits required to encode \(\psi^{(k)}\). For notational convenience, we also introduce \(\psi^{(0)}\equiv 1\) and \(\psi^{(n)}\equiv\psi\). We prepare states \(\left|\psi^{(k)}\right>\), whose amplitudes are given by \(\psi^{(k)}\), by recursively appending a qubit in state \(\left|0\right>\) and performing the following transformation, \[\psi^{(k)}_{i_{1}\ldots i_{k}}\left|i_{1}\ldots i_{k}\right>\left|0\right>\rightarrow\psi^{(k)}_{i_{1}\ldots i_{k}}\left|i_{1}\ldots i_{k}\right>\left(\cos\frac{\theta^{(k)}_{i_{1}\ldots i_{k}}}{2}\left|0\right>+e^{i\phi^{(k)}_{i_{1}\ldots i_{k}}}\sin\frac{\theta^{(k)}_{i_{1}\ldots i_{k}}}{2}\left|1\right>\right)\!. \tag{3}\] The angles and phases should be chosen so that the new state is \(\,|\psi^{(k+1)}\rangle\), i.e. such that the term on the r.h.s. of (3) is equal to \(\sum_{j}\psi_{i_{1}\ldots i_{k}j}^{(k+1)}\,|i_{1}\ldots i_{k}j\rangle\). A short calculation shows that this requires \[\begin{split}&\theta_{i_{1}\ldots i_{k}}^{(k)}=2\arccos\frac{|\psi_{i_{1}\ldots i_{k}0}^{(k+1)}|}{|\psi_{i_{1}\ldots i_{k}}^{(k)}|}\,,\\ &\phi_{i_{1}\ldots i_{k}}^{(k)}=\arg(\psi_{i_{1}\ldots i_{k}1}^{(k+1)})-\arg(\psi_{i_{1}\ldots i_{k}0}^{(k+1)})\,,\end{split} \tag{4}\] where \(\arg(z)\) is the phase of a complex number \(z\), and when \(\psi_{i_{1}\ldots i_{k}}^{(k)}=0\) one should pick \(\theta_{i_{1}\ldots i_{k}}^{(k)}=0\). For \(k=0\), there are no controlling qubits, so we are simply performing a 1-qubit gate. The transformation Eq.
(3) can be implemented by applying a y-rotation \(R_{y}(\theta_{i_{1}\ldots i_{k}}^{(k)})\) and a phase shift gate \(P(\phi_{i_{1}\ldots i_{k}}^{(k)})\),3 both controlled on the state of the first \(k\) qubits being \(|i_{1}\ldots i_{k}\rangle\), Footnote 3: The y-rotation acts on the computational basis as: \(R_{y}(\theta)\,|0\rangle=\cos(\theta/2)\,|0\rangle+i\sin(\theta/2)\,|1\rangle\) and \(R_{y}(\theta)\,|1\rangle=\cos(\theta/2)\,|0\rangle-i\sin(\theta/2)\,|1\rangle\). The phase shift gate acts on the computational basis as: \(P(\phi)\,|0\rangle=|0\rangle\) and \(P(\phi)\,|1\rangle=e^{i\phi}\,|1\rangle\). \[U_{k}=\sum_{i_{1}\ldots i_{k}}|i_{1}\ldots i_{k}\rangle\!\langle i_{1}\ldots i _{k}|\otimes\left(P(\phi_{i_{1}\ldots i_{k}}^{(k)})\cdot R_{y}(\theta_{i_{1} \ldots i_{k}}^{(k)})\right),\quad k=0,\ldots,n-1\,. \tag{5}\] Notice that the superscripts of the angles and phases indicate how many qubits control the transformation, and the subscripts which value the controls should have. For instance, \(\theta_{11}^{(2)}\) means that the rotation and phase gates are applied when the first two qubits are in state \(|11\rangle\). The steps we have just explained are summarized in Alg. 1. The algorithm takes as input the angles and phases needed to implement the \(U_{k}\)'s. We decide to store these in a list of dictionaries, \(L_{k}\) for \(k=0,\ldots,n-1\). The entries in the dictionary \(L_{k}\) are given by {key: value} pairs of the form \(\{(i_{1},\ldots,i_{k})\): \((\theta_{i_{1}\ldots i_{k}}^{(k)},\phi_{i_{1}\ldots i_{k}}^{(k)})\}\). For the special case \(k=0\), we set \(L_{0}=\{1\colon\,(\theta^{(0)},\phi^{(0)})\}\). This dictionary can be computed using Alg. 2. ``` 1:functionGroverRudolph(angles and phases dictionaries \(L_{k}\)) 2:\(|\psi\rangle\gets 1\) 3:for\(k=0,\ldots,n-1\)do 4:\(|a\rangle\leftarrow|0\rangle\) 5:for\((i_{1},\ldots,i_{k}),(\theta_{i_{1}\ldots i_{k}}^{(k)},\,\phi_{i_{1}\ldots i_{k}} ^{(k)})\) in \(L_{k}\)do\(\triangleright\) Implement \(U_{k}\) as in Eq. (5) 6:if\(|\psi\rangle=|i_{1},\ldots,i_{k}\rangle\)then 7:\(|a\rangle\gets P(\phi_{i_{1}\ldots i_{k}}^{(k)})\cdot R_{y}(\theta_{i_{1} \ldots i_{k}}^{(k)})\,|a\rangle\) 8:endif 9:endfor 10:\(|\psi\rangle\leftarrow|\psi\rangle\otimes|a\rangle\) 11:endfor 12:return\(|\psi\rangle\) 13:endfunction ``` **Algorithm 1** Grover-Rudolph We now analyze the complexity of both Alg. 1 and Alg. 2. Let's begin with the quantum part, Alg. 1. The algorithm consists of \(n\) unitaries, \(U_{k}\) for \(k=0,1,\ldots,n-1\). Each unitary involves \(k+1\) qubits and is made out of \(R_{y}\) rotations and phase gates controlled on \(k\) qubits (\(U_{0}\) is not controlled). In the worst-case, the \(k\)-th unitary is made out of \(O(2^{k})\) controlled gates. For each controlled gate, we need \(O(k)\) Toffoli gates. Summing over all \(k\), we arrive at a worst-case asymptotic scaling of order \(O(n2^{n})\). The complexity of the classical preprocessing required to determine the rotation angles and the phases is \(O(2^{n})\). To see this, consider the function FindAngles in Alg. 2. To find the angles, we need first to find the coarse-grained states \(\psi^{(k)}\). In the worst case, \(\psi^{(k)}\) has \(2^{k}\) nonzero components, so to find the next coarse-grained state we need \(O(2^{k})\) operations. In total, we need \(O(2^{n})\) operations to find all coarse-grained states. To find the angles, we need the same amount of operations. 
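As a concrete illustration of this classical preprocessing, the following minimal Python sketch (our own illustration, not the authors' code; the function name is ours) computes the coarse-grained vectors of Eq. (2) and the angles and phases of Eq. (4) for a dense input, keying each dictionary \(L_{k}\) by the integer value of the \(k\)-bit prefix and assuming the length of \(\psi\) is a power of two.

```python
import numpy as np

def grover_rudolph_angles(psi):
    """Classical preprocessing of Alg. 2: return [L_0, ..., L_{n-1}], where
    L_k maps a k-bit prefix (stored as an integer) to the pair (theta, phi)
    of Eq. (4)."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)            # normalise, cf. Eq. (1)
    n = int(np.log2(len(psi)))                 # assumes len(psi) == 2**n
    levels, current = [], psi                  # current plays the role of psi^{(k+1)}
    for k in range(n - 1, -1, -1):
        child0, child1 = current[0::2], current[1::2]
        mags = np.sqrt(np.abs(child0) ** 2 + np.abs(child1) ** 2)
        L_k = {}
        for prefix in range(len(mags)):
            if mags[prefix] == 0.0:            # zero parent amplitude: theta = 0, skip
                continue
            theta = 2.0 * np.arccos(np.abs(child0[prefix]) / mags[prefix])
            phi = np.angle(child1[prefix]) - np.angle(child0[prefix])
            L_k[prefix] = (theta, phi)
        levels.append(L_k)
        current = np.exp(1j * np.angle(child0)) * mags   # psi^{(k)}, Eq. (2)
    return list(reversed(levels))              # L_0 first, as consumed by Alg. 1

# Quick check against the simple example discussed below (nonzero entries at 1 and 6).
psi = np.zeros(8)
psi[1], psi[6] = np.sqrt(1 / 3), np.sqrt(2 / 3)
L0, L1, L2 = grover_rudolph_angles(psi)
assert np.isclose(L0[0][0], 2 * np.arccos(1 / np.sqrt(3)))    # theta^(0)
assert np.isclose(L1[1][0], np.pi) and np.isclose(L2[0][0], np.pi)
```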
Notice that one could use the efficient gate decomposition from [9] to bring the worst-case gate complexity down to \(O(2^{n})\). We don't do this because when specializing to sparse vectors, most angles and phases are zero. The number of multi-controlled 1-qubit gates is then much smaller than in the worst case scenario, and the decomposition of [9] would in fact lead to much deeper circuits.4 Footnote 4: To be more explicit, consider Figures 1 and 2 from [9]. The decomposition proposed there works by replacing the multi-controlled rotations, with \(2^{k}\) CNOTs and \(2^{k}\) 1-qubit rotations, with angles defined by Eq. (3) in [9]. For sparse vectors, it turns out that most angles on the r.h.s. of Eq. (3) are zero. The angles on the l.h.s. on the other hand are typically all different from zero. So for sparse vectors, we actually end up with a less efficient circuit. ### A simple example Before continuing, we illustrate the Grover-Rudolph algorithm in a simple example. Consider a vector with 8 positive components \[\psi=\begin{bmatrix}0&\sqrt{\frac{1}{3}}&0&0&0&0&\sqrt{\frac{2}{3}}&0\end{bmatrix}, \tag{6}\] our goal is to prepare the corresponding quantum state \[\ket{\psi}=\sqrt{\frac{1}{3}}\ket{001}+\sqrt{\frac{2}{3}}\ket{110}. \tag{7}\] The most general circuit implementing Alg. 1 for 3 qubits is depicted in Fig. 1. Note that to shorten the notation, we have denoted the rotation gates with \(\theta\) instead of \(R_{y}(\theta)\), and phase gates with \(\phi\) instead of \(P(\phi)\). To find the angles and phases, we first need to compute the coarse-grained vectors, as in Eq. (2). Since the vector \(\psi\) is positive, we can interpret its entries as the square root of a probability, and visualize the coarse-graining procedure as in Fig. 2. The coarse-grained vectors \(\psi^{(k)}\) are obtained by iteratively binning together the probabilities in pairs and summing them. Figure 1: The general Grover-Rudolph circuit for preparing a 3-qubit state. The Grover-Rudolph procedure starts by preparing the first coarse-grained state \[\left|\psi^{(1)}\right\rangle=\sqrt{\frac{1}{3}}\left|0\right\rangle+\sqrt{\frac{2}{3}}\left|1\right\rangle \tag{8}\] by applying a 1-qubit rotation to \(\left|0\right\rangle\). This can be done by applying \(R_{y}(\theta^{(0)})\) with \(\theta^{(0)}=2\arccos(1/\sqrt{3})\). We can then prepare the next coarse-grained state, \[\left|\psi^{(2)}\right\rangle=\sqrt{\frac{1}{3}}\left|00\right\rangle+\sqrt{\frac{2}{3}}\left|11\right\rangle, \tag{9}\] by appending a qubit in state \(\left|0\right\rangle\), and rotating it depending on the state of the first qubit. Namely, when the first qubit is in state \(\left|1\right\rangle\), we need to rotate the second to \(\left|1\right\rangle\); when the first qubit is in state \(\left|0\right\rangle\), we should leave the second qubit in \(\left|0\right\rangle\). This can be done by picking \(\theta_{0}^{(1)}=0\) and \(\theta_{1}^{(1)}=\pi\). The last step is performed similarly. We need to pick angles \(\theta_{00}^{(2)}=\pi\) and \(\theta_{11}^{(2)}=0\). The end of this process yields the desired state \(\left|\psi\right\rangle\). ## 3 Grover-Rudolph for sparse vectors In this section, we analyze how well Grover-Rudolph performs for preparing sparse vectors. We consider vectors \(\psi\in\mathbb{C}^{N}\) which have only \(d\ll N\) nonzero elements. Above, we have seen that the worst-case complexity of Grover-Rudolph is exponential in the number of qubits.
However, we intuitively expect that for sparse vectors it should be possible to prepare the state with only \(O(d)\) gates, as the vector has only \(d\) degrees of freedom. Below, we explicitly show this and find that Grover-Rudolph can prepare sparse states with \(O(dn^{2})\) gates. We assume that we know the number of nonzero elements in \(\psi\) and their locations. Namely, that we have access to \(\psi\) as a tuple of vectors, \((\lambda,\phi)\). The vector \(\lambda\) contains the locations of the nonzero entries of \(\psi\), and \(\phi\) contains their values. The length of both vectors is \(d\). Without loss of generality, we assume that the elements of \(\lambda\) are arranged in increasing order. The coarse-graining procedure from Eq. (2) doesn't increase the number of nonzero elements, hence all \(\psi^{(k)}\) have sparsity \(d_{k}\leq d\). Let \((\lambda^{(k)},\phi^{(k)})\) be the pair of vectors defining \(\psi^{(k)}\). Eq. (2) needs to be applied only when two consecutive elements of \(\psi\) are nonzero. From this follows that we can find each coarse-grained vector using at worst \(O(d)\) operations, for a total of \(O(dn)\) classical operations. Once we have these vectors, we can find the angles and phases with additional \(O(dn)\) operations. In Alg. 3, we provide the pseudocode for finding angles exploiting the sparsity of the vector. Notice that this is the only thing we need to change to take advantage of sparsity, Alg. 1 remains unchanged. Since each \(\psi^{(k)}\) has at most \(d\) nonzero elements, we find that each \(U_{k}\) has at most \(d\) nonzero angles and phases. From this it follows that each \(U_{k}\) can be implemented using \(O(kd)\) gates (\(d\) from the number Figure 2: Visualization of coarse-graining procedure from Eq. (2). The bars in the histograms correspond to probabilities, i.e. amplitudes squared. Note that \(\psi^{(3)}\equiv\psi\). of nonzero angles, \(k\) from the number of controlling qubits). Summing over \(k\), we arrive at an overall gate complexity of \(O(dn^{2})\). As expected, we conclude that Alg. 1 performs on sparse vectors much better than the worst-case complexity would suggest. ``` 1:functionFindSparseAngles(nonzero location and values \((\lambda,\phi)\)) 2:\((\lambda^{(n)},\phi^{(n)})\leftarrow(\lambda,\phi)\) 3:for\(k=n-1,n-2,\ldots,1\)do 4:\(\lambda^{(k)}\leftarrow[\ ]\), \(\phi^{(k)}\leftarrow[\ ]\) 5:\(L_{k}\leftarrow\{\ \}\)\(\triangleright\) Empty dictionary for angles and phases of unitary \(U_{k}\) 6:for\(l=0,1,\ldots,\text{len}(\lambda^{(k+1)})\)do 7:if\(\lambda_{l}^{(k+1)}\) is even and \(\lambda_{l+1}^{(k+1)}=\lambda_{l}^{(k+1)}+1\)then 8:\(x\leftarrow\) evaluate Eq. (2) \(\triangleright\) Use \(\psi_{i_{1}\ldots i_{k}j}^{(k+1)}\leftarrow\phi_{l+j}^{(k+1)}\) 9: Append \(x\) to \(\phi^{(k)}\) 10: Append \(\lambda_{l}^{(k+1)}/2\) to \(\lambda^{(k)}\) 11:\(l\gets l+1\)\(\triangleright\) Skip one iteration 12:else 13: Append \(\phi_{l}^{(k+1)}\) to \(\phi^{(k)}\) 14: Append \(\lfloor\lambda_{l}^{(k+1)}/2\rfloor\) to \(\lambda^{(k)}\) 15:endif 16: Compute \(\theta_{i_{1}\ldots i_{k}}^{(k)}\) and \(\phi_{i_{1}\ldots i_{k}}^{(k)}\) using Eq. 
(4) \(\triangleright\) Use \(\psi_{i_{1}\ldots i_{k}}^{(k)}\gets x\) 17:\(L_{k}[(i_{1},\ldots,i_{k})]\leftarrow(\theta_{i_{1}\ldots i_{k}}^{(k)},\phi_ {i_{1}\ldots i_{k}}^{(k)})\)\(\triangleright\)\((i_{1},\ldots,i_{k})\) bit representation of \(\lfloor\lambda_{l}^{(k+1)}/2\rfloor\) 18:endfor 19:endfor 20:return\(\{L_{k}\}\) 21:endfunction ``` **Algorithm 3** FindSparseAngles On practical instances, the performance of Grover-Rudolph might very well be better than the worst-case. So, we estimate the typical complexity of the algorithm by sampling random sparse vectors and explicitly counting the gates needed to prepare them. We compute the cost of the algorithm by counting the number of Toffoli, CNOT, and 1-qubit gates needed to implement it. We use standard constructions for controlled gates, see e.g. [16]. To implement a 1-qubit gate controlled on \(k\geq 2\) qubits being in a state given by the bit string \(x\), we use \(k-1\) ancilla qubits, \(2(k-1)\) Toffoli gates, 2 CNOT's, Figure 3: Gate count for preparing random states using Alg. 1, as a function of \(d\) at fixed \(n\) (a), and as a function of \(n\) at fixed \(d\) (b). We use a log-log scale in (a) and a lin-lin in (b). and \(4+2(n-|x|)\) 1-qubit gates. Here \(|x|\) is the Hamming weight of the bit string \(x\). In more detail: we consider 100 random complex vectors, for various values of \(d\) and \(n\), and we study how the gate count scales as a function of \(d\) and \(n\). The results are displayed in Fig. 3. Notice that we display only the count of Toffoli gates, as the plots for the count of CNOT and 1-qubit gates are very similar. By inspecting the diagram, we find that also the average-case complexity of the algorithm scales linearly in \(d\) and quadratically in \(n\). In this section, we have shown that the Grover-Rudolph algorithm works well on sparse vectors. However, the scaling we have found, \(O(dn^{2})\), doesn't quite match the best known algorithms for preparing sparse vectors [12, 13, 14, 15], which have a worst-case complexity of order \(O(dn)\). Notice that \(n\) is only logarithmic in the size of the vector, so it is possible that optimizing the Grover-Rudolph circuit might overcome this small overhead in practical applications. In App. A, we consider a simple optimization procedure that reduces the number of needed gates, at the cost of a small classical overhead. However, we find that this reduction is not significant enough. Therefore, in the next section we introduce a small variant of Grover-Rudolph for which we can prove a worst-case scaling of \(O(dn)\). ## 4 Permutation Grover-Rudolph We introduce a simple variation of Grover-Rudolph, for which we can prove the worst case complexity of order \(O(dn)\). The idea of this variant is as follows. First, we prepare a dense state whose amplitudes are given by the nonzero entries of \(\psi\). To prepare this state, we only need \(\lceil\log d\rceil\) qubits, and we can use Alg. 1. We then append a sufficient number of qubits, such that the total dimension of the Hilbert space becomes \(N\), and apply a permutation unitary which maps the nonzero amplitudes to their correct location. We show that this permutation can be efficiently implemented. The idea of preparing a dense state with all the nonzero entries and then permuting the basis states has already been used in [13]. Our algorithm differs in the implementation of the permutation and, as we discuss further below, has a better classical complexity. 
Similarly to the previous section, we assume we have access to \(\psi\) as a tuple of vectors \((\lambda,\phi)\). Each vector has size \(d\), with \(\lambda_{i}\in\{0,\ldots,N\}\) being the location of the \(i\)-th nonzero element of \(\psi\), and \(\phi_{i}\) being its value. We assume without loss of generality that the elements of \(\lambda\) are arranged in increasing order. As a first step, we prepare a dense vector \[|\tilde{\psi}\rangle=\sum_{i=0}^{d-1}\phi_{i}\,|i\rangle\, \tag{10}\] using standard Grover-Rudolph. We then add a sufficient number of qubits initialized in \(|0\rangle\), such that the total size of the Hilbert space becomes \(N\). Finally, we apply a permutation unitary that maps \(|i\rangle\rightarrow|\lambda_{i}\rangle\). There are of course many permutations mapping \(i\) to \(\lambda_{i}\). We build one, directly decomposed in cycles, as follows. Let \(i\in I\), with \(I=\{0,1,\ldots,d-1\}\), and for simplicity, consider \(i=0\). We initialize a cycle with elements \((0,\lambda_{0})\). If \(\lambda_{0}\geq d\), we can close the cycle, add it to our permutation, and go to the next available \(i\). If \(\lambda_{0}<d\), we set \(j=\lambda_{0}\), we remove \(j\) from \(I\), and we add \(\lambda_{j}\) to the cycle. We then repeat the steps above until we find a \(\lambda_{j}\) larger than \(d\). The steps are summarized in Alg. 4. To understand how the complexity of Alg. 4 scales, let \(P=\{c_{0},c_{1},\ldots c_{n_{c}-1}\}\) be the list of cycles returned by the algorithm. We denote by \(M_{k}\) the length of the \(k\)-th cycle in the list, with \(k=0,1,\ldots,n_{c}-1\). To generate this list, the algorithms loops over \(i=0,1,\ldots,d-1\) and does 3 blocks of operations: an if statement, an initialization, and a while loop. The first two operations take constant time, both \(O(n)\). The if statement is run every iteration, and the initialization step is run \(n_{c}\leq d\) times. So they both contribute \(O(dn)\) to the classical complexity. The while loop is run only for cycles with more than 2 entries, with each iteration taking \(O(n)\) time. Hence, we find a contribution of order \(O\big{(}n\sum_{k}\max(M_{k}-2,0)\big{)}\). We can upper bound the sum by \(\sum_{k}M_{k}\), which is the total length of the cycles in \(P\). In the worst case, we have \(\sum_{i}M_{i}=2d\), which happens for a permutation made of \(d\) 2-cycles. This happens when \(\lambda_{i}\geq d\) for all \(i\). To see this, let \(\lambda_{j}<d\) for some value of \(j\). Then in the permutation we replace two cycles of length 2 with one cycle of length 3, and \(\sum_{i}M_{i}\) decreases. We conclude that Alg. 4 has complexity of order \(O(nd)\). Once we have decomposed the permutation in cycles, we can use Alg. 7 (see App. B) to implement it. Alg. 7 implements a cycle of length \(M\) with \(O(Mn)\) classical operations. Notice that since, as we have argued above, \(\sum_{i}M_{i}=O(d)\), the overall classical complexity is still of order \(O(nd)\). Putting everything together, we find Alg. 5 for preparing sparse states. ``` functionPermGR(sparse vector \(\{(x_{0},\psi_{0}),\ldots,(x_{d-1},\psi_{d-1})\}\), number of qubits \(n\)) Apply Alg. 1 to prepare \(\ket{\tilde{\psi}}=\sum_{i=0}^{d-1}\psi_{i}\ket{i}\) Append \(n-\lceil\log_{2}d\rceil\) qubits in state \(\ket{0}\)\(\triangleright\) Add qubits until there are \(n\)\(P\leftarrow\textsc{SparsePerm}(\{x_{i}\})\) for\(c\in P\)do Apply Cycle(\(c\), \(n\))\(\triangleright\) See Alg. 
7 endfor endfunction ``` **Algorithm 5** Permutation Grover-Rudolph The quantum complexity of this algorithm is given by the cost of the Grover-Rudolph step needed to prepare \(\tilde{\psi}\), and the cost of implementing the permutation \(\ket{i}\rightarrow\ket{\lambda_{i}}\). As explained in Sec. 2, the first is given by \(O(n2^{n})\), where \(n\) is the number of qubits. For the Grover-Rudolph step in this case, the number of qubits is only \(n=O(\log d)\), hence we find \(O(d\log d)\). For the second one, we can use Eq. (11), which states that the complexity of one cycle scales like \(O(Mn)\), where \(n\) is the number of qubits and M is the cycle length. The cost of the permutation scales like the sum of all cycles lengths times the number of qubits. In the worst case, we have \(d\) cycles of length 2, obtaining that the complexity of the permutation is \(O(dn)\). Putting everything together, we find that the worst-case complexity of the algorithm scales as \(O(dn)\). Notice that we could consider better algorithms for the Grover-Rudolph step, e.g. the algorithm of [9], which would scale as \(O(d)\) instead of \(O(d\log d)\). Since the complexity of the algorithm is dominated by the permutation step, we don't think this would make a significant difference. It would be interesting to consider better ways to implement the permutation. Similarly to what we did in Sec. 3, we numerically estimate the average-case complexity of Alg. 5. We consider random complex sparse vectors, for various values of \(d\) and \(n\), and compute the number of gates required by Alg. 5 to prepare them. The results are shown in Fig. 4. We find that scaling is linear in both \(d\) and \(n\), the same as in the worst-case analysis. From our analysis, we know that the worst-case cost of Alg. 5 scales better than that of Alg. 1. However, this speed-up might fail to appear for vectors of reasonable size and sparsity. For this reason, we study empirically the relative costs of these two algorithms. In Fig. 5, we plot the ratio between the number of Toffoli gates required by Alg. 5 and Alg. 1 (subjected to the optimization strategy in Alg. 6) to prepare the same random vectors used to generate the plots in Fig. 4. We find that Alg. 5 performs better than the optimized version of Alg. 1 already at moderate values of \(n\) and starting at densities, \(d/N\), between \(10^{-3}\) and \(10^{-2}\). Unsurprisingly, for larger values of \(n\) the transition happens sooner, i.e. at larger densities. Figure 4: Gate count for preparing random states using Alg. 5, as a function of \(d\) at fixed \(n\) (a), and as a function of \(N=2^{n}\) at fixed \(d\) (b). We use a log-log scale in (a) and a lin-lin in (b). Figure 5: Ratio between the number of gates required by Alg. 5 and the optimized gate count given by Alg. 1 in concert with Alg. 6 to prepare some random states, as a function of the density \(d/N\) at fixed \(n\) (a), and at fixed \(d\) (b). We use a log scale on the abscissa. Conclusions In this paper, we have studied the performance of the Grover-Rudolph algorithm for preparing sparse states. We have found that the usual version of the algorithm, see Alg. 1, has a gate complexity of order \(O(dn^{2})\). Here \(n\) is the number of qubits needed to encode the vector we want to prepare, \(\psi\), and \(d\) is the number of nonzero entries in \(\psi\). Moreover, we have introduced a simple modification of the algorithm which has a gate complexity of order \(O(dn)\), Alg. 5. 
This is competitive with the best known algorithms for preparing sparse vectors [12, 13, 14, 15]. The classical complexity of both Alg. 1 and Alg. 5 is \(O(dn)\). This is better than those of [12] and [13], which are \(O(nd^{2}\log d)\) and \(O(nd^{2})\) respectively, and it's equal to that of [14] and [15]. Ultimately, the decision of which algorithm to use depends on the specific properties of the vectors to prepare. The main point of this work is that, when considering sparse vectors, Grover-Rudolph should also be considered as an option. Finally, we point out that in both Alg. 1 and Alg. 5 there is space for improvements. In particular, it would be interesting to consider optimization procedures to reduce the number of controlled rotations and Toffoli gates in Alg. 1. We consider one such optimization procedure in App. A which shows promising results, for real vectors. In Alg. 5, it would be interesting to try to improve the permutation step, both at the level of the classical preprocessing, i.e. finding a different suitable permutation, and at the level of the quantum circuit needed to implement the permutation. ## Acknowledgement All the results were obtained using Python. The code is available on github.com/qubrabench/grover-rudolph. This work was supported by the Quantum Valley Lower Saxony, and the BMBF project QuBRA. Helpful correspondence and discussions with Joshua Ammermann, Lennart Binkowski, Tim Bittner, Domenik Eichhorn, Davide Incalza, Andrei Lotan, Tobias J. Osborne, Anurudh Peduri, Soren Wilkening, and Henrik Wilming are gratefully acknowledged. ## Appendix A Optimizing the gates We consider a simple optimization strategy in which we merge consecutive gates that have the same rotation angles and phases, and have controls differing only by one bit-flip. For example, consider the situation depicted in Fig. 6. The first gate is conditioned on '11' and performs a rotation with an angle \(\theta\), while the second gate is conditioned on '10' and executes a rotation with the same angle \(\theta\). Given the gates differ in only one control and share the same rotation angles, they can be combined into a single gate, controlled on the first qubit being in '1'. Notice that this reduction not only affects the total gate count but also leads to gates with one fewer controlling qubit. To explain how to perform the merge, we assume again that the angles and phases needed to implement the unitaries \(U_{k}\) are stored in dictionaries \(L_{k}\) as \(\{(i_{1},\ldots,i_{k})\colon(\theta^{(k)}_{i_{1}\ldots i_{k}},\phi^{(k)}_{i_{ 1}\ldots i_{k}})\}\). For every control \((i_{1},\ldots,i_{k})\) in dictionary \(L_{k}\), we loop over its neighbors and check whether the corresponding angles, if present, are the same. Here the neighbors are found by flipping one bit, i.e. they are neighbors in Hamming distance. When the angle is the same, we merge the two controls into one. To do this, we replace the one bit on which the original controls disagreed with 'e'. This simply helps to keep track of which qubits control the rotation. For example, '010' and '000' would be merged in '0e0'. We repeat Figure 6: Example of optimization procedure. this procedure until no new merging is possible. Notice that to find neighbors, we don't consider the entries set to 'e'. These steps are summarized in Alg. 6. 
``` 1:functionOptimizeAngles(dictionary D) 2:Merging_success \(\leftarrow\) True \(\triangleright\) Flag to mark merging success 3:while (Merging_success = True) & (len(D) \(>1\)) do 4:Merging_success = Mergeable(D) 5:endwhile 6:return D 7:endfunction 8: 9:functionMergeable(dictionary D) 10:for\(k,\theta\) in D do 11:for\(i=0,\ldots,\)len(\(k\)) do 12:if\(k[i]\) = 'e' then continue 13:endif 14:\(k^{\prime}\leftarrow\) copy of \(k\) with \(i\)-th entry flipped 15:\(\theta^{\prime}\leftarrow\) D[\(k^{\prime}\)] 16:if\(\theta=\theta^{\prime}\) then 17: Remove \(k,k^{\prime}\) from D 18:\(k^{\prime\prime}\leftarrow\) copy of \(k\) with \(i\)-th entry set to 'e' 19: Add \(\{k^{\prime\prime}:\theta\}\) to D 20:return True 21:endif 22:endfor 23:endfor 24:return False 25:endfunction ``` **Algorithm 6** Optimize Angles The complexity of Alg. 6 is \(O(dn^{2}\log d)\). To see this, consider first the function Mergeable. This has complexity of order \(O(dn^{2})\), as can be seen by considering the structure of the two nested for loops. The first for loop iterates over all the items in the dictionary, whose number is upper bounded by \(d\). The second for loop iterates over all bits in a key, whose number is upper bounded by \(n\). Finally, the operations inside the inner for loop have complexity \(O(n)\). Next we consider the while loop. This takes at most \(\log d\) repetitions, as can be seen by noticing that at any iteration of the while loop, only angles which have been merged in the previous iteration can be further merged. Hence the number of mergeable angles decreases by at least a factor 2 at every repetition of the while loop. Therefore, we have at most \(O(\log d)\) iterations. Putting everything together, we arrive at a complexity of order \(O(dn^{2}\log d)\). It is difficult to understand theoretically how much this optimization reduces the number of required gates. Therefore, we rely on numerics. We consider the same random vectors used to generate Fig. 3, we optimize the angles using Alg. 6, and compute the ratio between the gates needed to prepare the state before and after optimization. The results are shown in Fig. 7 (a), (b). We apply the same strategy further for both real and uniform random vectors and we show the results in Fig. 7 (c) and (d) and Fig. 7 (e) and (f), respectively. We find that the optimization of random vectors is only relevant at intermediate values of \(d\). Empirically, the best improvement we observe is around 10%. As we have seen, this comes at a classical cost, which is \(O(n^{2}\log d)\) worse than the one required to find the angles. In the case of real vectors, we observe that at densities \(d/N\approx 0.1\) the improvement in the gate count ranges from 20% to 25%, making them suitable candidates for this particular type of optimization. In the case of uniform vectors, the improvement in gate count at moderate values of \(d\) ranges from 10% to 35%. For fixed \(d\), our optimization approach showcases improvements of up to 40%. In the near future, quantum costs will be the bottleneck of quantum calculations, hence it is still useful to incur a higher classical complexity cost that will reduce the total gate count, even by a modest amount. Figure 7: Ratio of number of Toffoli gates needed to run Alg. 1 after and before optimizing the angles using Alg. 6 to prepare random states (a), (b), random _real_ states (c), (d), and _uniform_ states (e), (f) as a function of the density \(d/N\) at fixed \(n\), and at fixed \(d\).
We use a log scale on the abscissa. Implementing permutation matrices We present a simple way to implement permutation matrices. Given a permutation \(\sigma\) of \(N\) elements, we want to implement a unitary, which we also denote \(\sigma\), that acts as \(\sigma\left|i\right\rangle=\left|\sigma(i)\right\rangle\). For simplicity, we assume that \(N=2^{n}\) for some integer \(n\), but this condition can be easily relaxed. First, we classically decompose the permutation in cycles, \(\sigma=c_{0}c_{1}\ldots c_{n_{c}-1}\). Here \(n_{c}\) is the number of required cycles. Each cycle is then simple to implement. Let \(c=(x_{0}\ x_{1}\ \ldots\ x_{M-1})\) be a cycle of length \(M\), where \(x_{k}\) are \(n\)-bit strings, and let \(\left|a\right\rangle\) be an ancilla register that we initially prepare in \(\left|0\right\rangle\). The cycle can be implemented by repeating for \(k=0,1,\ldots M-1\) the following two operations: 1. We flip the ancilla conditioned on the state of the first \(n\) qubits being \(\left|x_{k}\right\rangle\). 2. Conditioned on the ancilla being in state \(\left|1\right\rangle\), we map the first \(n\) qubits to \(\left|x_{k+1}\right\rangle\). To map \(\left|x_{k}\right\rangle\) to \(\left|x_{k+1}\right\rangle\) we can use \(\bigotimes_{l=0}^{n-1}X^{\Delta_{l}}\), where \(\Delta=x_{k}\oplus x_{k+1}\) is the bit-wise difference between \(x_{k}\) and \(x_{k+1}\). Notice that we are setting \(x_{M}\equiv x_{0}\). For example, in Figure 8, we show the circuit obtained for cycle \(c=(0,1,2)\) and \(N=8\). The steps of the needed to implement a cycle are summarized in Alg. 7. ``` functionCycle(cycle \(c=(x_{0}\ x_{1}\ \ldots\ x_{M-1})\), n-qubit basis state \(\left|y\right\rangle\)) \(x_{M}\gets x_{0}\) Add a qubit ancilla in state \(\left|a\right\rangle=\left|0\right\rangle\) for\(k=0,1,\ldots,M-1\)do if\(y=x_{k}\)then Flip ancilla, \(a\to a\oplus 1\) endif \(\Delta\gets x_{k}\oplus x_{k+1}\) if\(a=1\)then \(\left|y\right\rangle\leftarrow(X^{\Delta_{0}}\otimes X^{\Delta_{1}}\otimes\cdots \otimes X^{\Delta_{n-1}})\cdot\left|y\right\rangle\)\(\triangleright\) Map \(\left|x_{k}\right\rangle\) to \(\left|x_{k+1}\right\rangle\) endif endfor return\(\left|y\right\rangle\) endfunction ``` **Algorithm 7** Cycle Finally, we provide a count for the number of gates required to run this algorithm. The ancilla flip is a generalized Toffoli gate with \(n\) controls, hence the cost of each is \(2(n-1)\mathcal{C}_{T}+4\mathcal{C}_{1}+2\mathcal{C}_{CNOT}\). We also need to include two \(X\) gates for every qubit controlled on \(0\), resulting in adding \(n-\left|x_{k}\right|\) gates. There will be \(M+1\) of these terms. Then each state flip is determined by the number of bits that need to be swapped. More precisely, we need \(\left|x_{k}\oplus x_{k+1}\right|\) CNOT gates for each of the \(M\) elements of a cycle. The cost of this algorithm is \[\begin{split}\mathcal{C}[\text{Cycle}(c)]&=2(M+1)((n-1) \mathcal{C}_{T}+2\mathcal{C}_{1}+\mathcal{C}_{CNOT})+2\sum_{k=0}^{M}(n-|x_{k}|) \mathcal{C}_{1}\\ &+4\sum_{k=0}^{M-1}|x_{k}\oplus x_{k+1}|\mathcal{C}_{1}+2\sum_{k =0}^{M-1}|x_{k}\oplus x_{k+1}|\mathcal{C}_{CNOT}\\ &=O(Mn)\,,\end{split} \tag{11}\] where \(\mathcal{C}_{T}\), \(C_{CNOT}\), \(C_{1}\) are the costs of Toffoli, CNOT, 1-qubit gate, respectively, \(c=(x_{0}\ x_{1}\ \dots\ x_{M-1})\), \(|\cdot|\) denotes the Hamming weight of the bit string, and again \(x_{M}\equiv x_{0}\). Notice that the algorithm also requires \(O(Mn)\) classical operations.
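To complement the circuit-level description of the cycles, here is a minimal classical sketch (our own illustration, not the authors' code; function names are ours) of the cycle construction described in Sec. 4 for Alg. 4, together with a check that composing the cycles maps each \(i<d\) to \(\lambda_{i}\). It assumes, as in the paper, that the nonzero locations are distinct and sorted in increasing order.

```python
def sparse_perm_cycles(locations, d):
    """Cycle decomposition of a permutation sending i -> locations[i] for i < d,
    following the construction of Alg. 4 (locations distinct and sorted)."""
    remaining = set(range(d))
    cycles = []
    while remaining:
        i = min(remaining)
        remaining.remove(i)
        if locations[i] == i:               # entry already in place: no cycle
            continue
        cycle, j = [i], i
        while locations[j] < d:             # chain stays among the first d slots
            j = locations[j]
            remaining.discard(j)
            cycle.append(j)
        cycle.append(locations[j])          # first landing spot >= d closes it
        cycles.append(cycle)
    return cycles

def apply_cycles(cycles, n_states):
    """Classical action of the cycles on basis-state labels 0..n_states-1."""
    perm = list(range(n_states))
    for cycle in cycles:
        for k, x in enumerate(cycle):
            perm[x] = cycle[(k + 1) % len(cycle)]
    return perm

# Example of Sec. 2: d = 2 nonzero amplitudes sitting at positions 1 and 6.
locations = [1, 6]
perm = apply_cycles(sparse_perm_cycles(locations, d=2), n_states=8)
assert all(perm[i] == locations[i] for i in range(2))   # 0 -> 1 and 1 -> 6
```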
2306.02971
Online Learning with Feedback Graphs: The True Shape of Regret
Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem where the problem is equipped with an underlying graph structure that provides additional information - playing an action reveals the losses of all the neighbors of the action. This problem was introduced by \citet{mannor2011} and received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order $\sqrt{\alpha T}$, where $\alpha$ is the independence number of the graph, and $T$ is the time horizon. However, this is proven only when the number of rounds $T$ is larger than $\alpha^3$, which poses a significant restriction for the usability of this result in large graphs. In this paper, we define a new quantity $R^*$, called the \emph{problem complexity}, and prove that the minimax regret is proportional to $R^*$ for any graph and time horizon $T$. Introducing an intricate exploration strategy, we define the \mainAlgorithm algorithm that achieves the minimax optimal regret bound and becomes the first provably optimal algorithm for this setting, even if $T$ is smaller than $\alpha^3$.
Tomáš Kocák, Alexandra Carpentier
2023-06-05T15:35:00Z
http://arxiv.org/abs/2306.02971v1
# Online Learning with Feedback Graphs: The True Shape of Regret ###### Abstract Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem where the problem is equipped with an underlying graph structure that provides additional information - playing an action reveals the losses of all the neighbors of the action. This problem was introduced by Mannor & Shamir (2011) and received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order \(\sqrt{\alpha T}\), where \(\alpha\) is the independence number of the graph, and \(T\) is the time horizon. However, this is proven only when the number of rounds \(T\) is larger than \(\alpha^{3}\), which poses a significant restriction for the usability of this result in large graphs. In this paper, we define a new quantity \(R^{*}\), called the _problem complexity_, and prove that the minimax regret is proportional to \(R^{*}\) for any graph and time horizon \(T\). Introducing an intricate exploration strategy, we define the Exp3-EX algorithm that achieves the minimax optimal regret bound and becomes the first provably optimal algorithm for this setting, even if \(T\) is smaller than \(\alpha^{3}\). ## 1 Introduction In this paper, we consider a sequential decision-making problem in an adversarial environment. This problem consists of \(N\) actions, \(T\) rounds, and a sequence of losses \((\ell_{t,i})_{(t,i)\in[T]\times[N]}\), where \([K]\triangleq\{1,\ldots,K\}\). Each loss \(\ell_{t,i}\) is associated with round \(t\) and action \(i\). We do not impose any statistical assumptions on the losses provided by the environment. Instead, we assume that the losses are set by an oblivious adversary before the learning process begins. The only assumption on the losses is that they are bounded in \([0,1]\); otherwise, the losses can be completely arbitrary and change in every round. The learning process, or the game, proceeds in rounds. In round \(t\), the learner picks one of the actions, denoted by \(i_{t}\), and incurs the associated loss \(\ell_{t,i_{t}}\). The learner also observes the loss \(\ell_{t,i_{t}}\) itself and, possibly, the losses of some other actions. The set of observations depends on the feedback scheme; we discuss different feedback schemes later. The goal of the learner is to minimize the total loss received at the end of the game, after \(T\) rounds. This is equivalent to minimizing the difference between the total loss of the learner and the loss of the strategy that plays the best fixed action in hindsight, after \(T\) rounds. We refer to this difference as regret and define it as \[R_{T}\triangleq\max_{i\in[N]}\mathbb{E}\bigg{[}\sum_{t\in[T]}(\ell_{t,i_{t}}-\ell_{t,i})\bigg{]},\] where the expectation is taken over the potential randomization of the environment and the learner.
The most relevant schemes for our paper are the following: Full-information feedback(Cesa-Bianchi et al., 1997; Littlestone & Warmuth, 1994; Vovk, 1990), sometimes called prediction with expert advice. This feedback is the simplest since the learner has access to all losses. At the end of round \(t\) the learner observes whole loss vector \((\ell_{t,1}\,\ldots,\ell_{t,N})\). The minimax rate for this feedback scheme is \(\sqrt{T\log(N)}\) and is attained by the EXP algorithm (Cesa-Bianchi & Lugosi, 2006). Note that having access to all the losses in every round causes the minimax rate scale only as \(\sqrt{\log(N)}\) with the number of actions. Bandit feedback(Thompson, 1933; Robbins, 1952; Auer et al., 1995). In every round, the learner observes only the loss of the selected action, namely \(\ell_{t,i_{t}}\), while the losses of other actions are not disclosed. The minimax rate for this feedback scheme is \(\sqrt{NT}\)(Audibert & Bubeck, 2010) and is attained by INF (Implicitly Normalized Forecaster) algorithm by Audibert and Bubeck (2010). Having only one observation per round results in a scaling of the regret with \(\sqrt{N}\) which is significantly worse than in the full-information feedback. Graph feedback(Mannor and Shamir, 2011; Alon et al., 2013, 2015, 2017; Kocak et al., 2014, 2016, 2016, 2017; Esposito et al., 2022). In the graph feedback setting, the actions are vertices of a graph and in every round, the learner observes the loss of the selected action (so that the setting is strongly observable) as well as the losses of all its neighbors - see Section 2 for a precise definition. This is the feedback scheme that we consider in this paper, which is an intermediary between full-information and bandit feedback and contains both these settings. Similarly to what happens in the bandit setting, the algorithms for bandits with graph feedback need to balance _exploration_ of actions with _exploitation_ of already acquired knowledge. In the graph feedback setting, however, different actions might provide different amounts of exploration, as an action also provides information on the losses of its neighbors. So that balancing exploration and exploitation in this context is more delicate, and efficient algorithms will need to adapt to the graph structure - and the minimax regret will also be graph dependent. In this setting, a relevant graph-dependent quantity is the independence number \(\alpha\) of the graph (see Definition 2.3). Several algorithms with different approaches have been proposed, ELP (Mannor and Shamir, 2011), Exp3-SET and Exp3-DOM (Alon et al., 2013), Exp3-IX and FPL-IX (Kocak et al., 2014), Exp3.G (Alon et al., 2015). While these algorithms differ in their approach to exploration, assumptions on the graph disclosure, or computational complexity, the common denominator is that all of these algorithms' upper bounds on the regret, in the case of strongly observable graphs, are of order \(\sqrt{\alpha T}\) up to logarithmic terms, regardless of time horizon \(T\). All of the aforementioned algorithms were inspired by the lower bound for the setting proposed by Mannor and Shamir (2011), which states that if \(T\geq 374\alpha^{3}\), the minimax regret is lower bounded by a quantity of the order \(\sqrt{\alpha T}\) - see Proposition 2.4 for a precise quotation of their result. This poses the question of what happens for large graphs - or equivalently when \(T\) is small - and whether current algorithms are also optimal in this case. 
This is a very important question since even in a moderately large problem and for some graphs, where the independence number is in the hundreds, we need to have millions of rounds for this assumption to hold. Partial monitoring.A bit further from the setting that we consider in this paper, yet related to it, is the field of partial monitoring (Rustichini, 1999; Audibert and Bubeck, 2010; Lattimore and Szepesvari, 2019, 2020), where the action selection is decoupled from the feedback. An example of this which is very relevant for us is weakly observed graphs - see (Alon et al., 2015) - which is a generalization of the graph feedback setting where not all self-loops are included, which means that one does not necessarily observes the loss of the action that one selects. The algorithm Exp3.G therein takes advantage of small dominating sets of vertices - i.e. sets of vertices whose joint set of neighbors are all vertices, see Section 2 for a precise definition - to explore efficiently the vertices, and then focus on promising actions. While not developed for the setting considered in this paper, this approach opens however interesting perspectives in cases of large graphs with a few very connected vertices, and we will discuss this more in detail in Subsection 2.2. ### Contribution In this paper, we focus on the setting with graph feedback - see Section 2 - and our aim is to pinpoint the minimax regret in the missing case presented in the corresponding paragraph above, namely for large graphs where \(T\) is of smaller order than \(\alpha^{3}\). The first important remark that we make in this paper is that there are some simple cases of large graphs where it is possible to achieve a minimax regret of much smaller order than \(\sqrt{\alpha T}\), which is the current best known upper bound. This is e.g. the case when there is one action that is connected in the graph to all other actions, and that is therefore very informative. In this case, if \(T\) is of smaller order than \(\alpha^{3}\), a minimax optimal strategy would make heavy use of this action in order to explore the other actions, even if this action is sub-optimal. We detail such an example in 2.2. This is very different from what current algorithms in the (strongly observable) graph bandit literature do and is more related to some strategies in partial monitoring, see e.g. (Lattimore and Szepesvari, 2019) and also (Alon et al., 2015) that we will discuss in detail later. Starting from this remark, the main result of this paper is to pinpoint, for any time horizon \(T\) and any given graph, the minimax regret up to logarithmic terms. We first provide a more refined lower bound in Section 3, that holds for any graph and time horizon - therefore also in the case where \(T<374\alpha^{3}\) which is not covered by the state of the art lower bound in Mannor and Shamir (2011) - and that involves a more subtle graph dependent quantity than the independence number. Then, in Section 4, we provide Exp3-EX algorithm (EX stands for **E**xplicit **e**X**ploration) that matches this lower bound up to logarithmic terms, and whose particularity is that it explores informative actions in a refined and explicit way. ## 2 Problem Setting In this section, we formally define the setting introduced by Mannor and Shamir (2011) and provide all the notation used throughout the paper. We consider an online learning game with a directed observability graph \(G=(V,E)\) over the set of actions \(V=[N]\) with the set of edges \(E\subseteq[N]\times[N]\). 
The graph contains all the self-loops, i.e. \((i,i)\in E\) for every \(i\in V\). The indicator function of an edge from node \(i\) to node \(j\) is defined as \(G_{i,j}\triangleq\mathbb{I}\{(i,j)\in E\}\). The game takes place over \(T\) rounds. Before the game starts the environment, potentially adversarial, assigns losses \(\{\ell_{t,i}\}_{(t,i)\in[T]\times[N]}\) to every action \(i\) and round \(t\). We only assume that \(\ell_{t,i}\in[0,1]\) for any \(t\leq T,i\leq N\). In every round \(t\), the learner picks an action \(i_{t}\in[N]\), incurs the loss \(\ell_{t,i_{t}}\), and observes the losses \(\ell_{t,i}\) of all out neighbors of \(i_{t}\), i.e. of all \(i\in V\) such that \((i_{t},i)\in E\)1. Note that in our setting, we always observe the loss of the chosen action since the graph contains all the self-loops. The performance of the learner is then measured in terms of regret - sometimes also called pseudo-regret, or also expected regret - as explained in the introduction Footnote 1: We write \(N^{out}_{i_{t}}\) for this set, see definition 2.1 later. \[R_{T}\triangleq\max_{i\in[N]}\mathbb{E}\bigg{[}\sum_{t\in[T]}(\ell_{t,i_{t}}- \ell_{t,i})\bigg{]},\] where the expectation is taken over the potential randomization of the environment and the learner. ### Auxiliary Definitions and Statements This section sums up all the necessary graph-related definitions we use later throughout the paper. In bandits with graph feedback, the learner's task is to select an action and observe the losses of its neighbors. Each loss observation can have different sources, either the learner selected the action itself or one of its neighbors. The following definition provides us with a tool to define side observations and their sources more easily. **Definition 2.1**.: Let \(G=(V,E)\) be a graph with the set of vertices \(V\) and the set of edges \(E\). We define the out-neighborhood of vertex \(i\in V\) as \[N^{out}_{i}\triangleq\{j\in V\,:\,(i,j)\in E\}\] and the in-neighborhood of vertex \(i\in V\) as \[N^{in}_{i}\triangleq\{j\in V\,:\,(j,i)\in E\}\] Playing only a few actions can provide the learner with information about many other actions. Dominating sets and numbers provide a convenient way to describe this phenomenon. **Definition 2.2**.: Let \(G=(V,E)\) be a graph with the set of vertices \(V\) and the set of edges \(E\). We say that \(D\subseteq V\) is a dominating set of \(B\subseteq V\) (or that \(D\) dominates \(B\)) from \(A\subseteq V\) if \(D\subseteq A\) and \(B\subseteq\cup_{i\in D}N^{out}_{i}\). We define the dominating number of \(B\) from \(A\) as \(\delta^{A}(B)\triangleq\min|D|\) where the minimum is taken over all dominating sets \(D\) of \(B\) from \(A\). In case no such \(D\) exists, we define \(\delta^{A}(B)\) as \(\infty\). Further, we say that \(\delta(B)\triangleq\delta^{V}(B)\) is the dominating number of \(B\) and \(\delta\triangleq\delta^{V}(V)\) is the dominating number of the graph. On the other hand, if the actions are not connected by an edge, playing one action does not provide any additional information about the other actions. This is captured in the following definition of independent sets. **Definition 2.3**.: Let \(G=(V,E)\) be a graph with the set of vertices \(V\) and the set of edges \(E\). We say that \(I\subseteq V\) is an independent set of \(G\) if for every \(i,j\in I\) s.t. \(i\neq j\), vertices \(i\) and \(j\) are not connected by an edge, i.e. \((i,j)\not\in E\). 
Independence number \(\alpha\) of \(G\) is the size of the largest independent set of \(G\), i.e. \[\alpha\triangleq\max_{I\in\{J\subseteq V\,:\,J\text{ is independent}\}}|I|.\] ### Lower Bound by Mannor & Shamir (2011) and Motivational Example In this subsection, we quote formally an important and state-of-the-art result of the literature and discuss why some relevant graph feedback examples are not optimally resolved by existing algorithms. The following proposition restates the lower bound result by Mannor & Shamir (2011). **Proposition 2.4**.: _Let \(G\) be an observability graph with independence number \(\alpha\). Then there exists a series of losses \(\{\ell_{t,i}\}_{(t,i)\in[T]\times[N]}\) such that for every \(T\geq 374\alpha^{3}\) and any learner, the expected regret is at least \(0.06\sqrt{\alpha T}\)_ It is important to note that the statement assumes that \(T\) needs to be large - \(T\geq 374\alpha^{3}\) - for the lower bound to hold. As mentioned in the introduction, some existing algorithms match this lower bound up to logarithmic factors (Kocak et al., 2014, Corollary 1) (Alon et al., 2015, Theorem 1). These algorithms also function when \(T<374\alpha^{3}\) where the best known upper bounds on the regret are of order \(\sqrt{\alpha T}\) up to logarithmic factors. However, since the existing lower bound stated above does not cover this case, it is therefore unclear whether those algorithms are optimal or not. The following lemma demonstrates that \(\sqrt{\alpha T}\) is indeed not the correct rate. **Lemma 2.5**.: _Let \(G=(V,E)\) be a graph with \(|V|=N\) and \(E=\{(N,i):i\in[N-1]\}\) (see Figure 1). Then, there exists an algorithm such that the regret upper bound of this algorithm is of \(\delta^{1/3}T^{2/3}\) where \(\delta=1\) is the dominating number of \(G\)._ The independence number of the graph from the previous lemma is \(N-1\). This also means that whenever \(T\ll\alpha^{3}\) in the lemma above, the regret bound of \(\delta^{1/3}T^{2/3}\) is an improvement over the regret bound of \(\sqrt{\alpha T}\). See Appendix A for the proof and further discussion. ### Problem Complexity We have seen some indications (e.g. the example in Lemma 2.5) that for small \(T\), the minimax regret might not scale with \(\sqrt{\alpha T}\). In what follows, we define the graph-dependent problem complexity that will later appear in our lower and upper bounds. This quantity is complex and depends on the graph in a refined way. It is however what one would expect for a worst-case stochastic problem - namely a problem where losses \(\ell_{i,t}\) are independent and sampled according to a distribution depending on \(i\). In order to introduce the problem complexity, we resort to intuitions from the stochastic setting, although we will analyze the problem in an adversarial setting, and provide precise results later, see Theorems 3.1 and 4.4. Assume that we are given a set containing all promising actions that could be optimal, given available information - let us call it \(I\), in a stochastic setting it would typically be a set of actions whose empirical mean confidence intervals intersect one of the actions with higher lower confidence bound. Let us oversimplify the problem and assume in this informal application that all these actions but one have a small gap \(\Delta>0\) with respect to the optimal action. The optimal action is also in \(I\) and has a gap of 0. When playing an action in \(I\), one incurs an average instantaneous regret of \(\Delta\) except if one samples the optimal action. 
We are facing the following choice when we try to find the optimal action: we can either sample in \(I\) directly and have small instantaneous regret - namely \(\Delta\) - or we can sample outside of this set and have an instantaneous regret that is in all generality bounded by \(1\). However, sampling outside of \(I\) might still be interesting if some of the actions there are connected to many actions in \(I\), providing in this way very informative feedback on many actions therein - see e.g. Figure 1 where even if the hub action is clearly suboptimal from the samples, it might still be interesting to take advantage of it. At the end of the budget and if one wants to have found the optimal action - and to therefore not pay an instantaneous regret of at least \(\Delta\) at each round - one would need as is usual in stochastic bandits to have observed all actions - from inside or outside of \(I\)- at least \(1/\Delta^{2}\) times. In the stochastic setting, we would therefore expect that the most difficult graph bandit problems would correspond to the worst choice of \((I,\Delta)\). These considerations drive us to the first definition of the _problem complexity_\(Q^{*}\). **Definition 2.6**.: Let \(G=(V,E)\) be a graph and \(T\) be the number of rounds. Then the problem complexity \(Q^{*}_{I}\) for given set \(I\subseteq V\) is defined as \[Q^{*}_{I}\triangleq\max_{\Delta\in(0,1/2]}Q^{*}_{I,\Delta}\] where \[Q^{*}_{I,\Delta}\triangleq\min_{\boldsymbol{\pi}\in\Pi}\min\left[T\sum_{i\in I }\pi_{i}\Delta+T\sum_{i\not\in I}\pi_{i},\,T\Delta\right]\] and \[\Pi=\bigg{\{}\boldsymbol{\pi}\in\mathbb{R}^{N}_{+}:\sum_{i\in[ N]}\pi_{i}\leq 1,\\ T\sum_{i\in[N]}\pi_{i}G_{i,j}\geq 1/\Delta^{2},\forall j\in I \bigg{\}}.\] Moreover, we define the problem complexity \(Q^{*}\) as \[Q^{*}\triangleq\max_{I\subseteq V}Q^{*}_{I}.\] \(Q^{*}_{I,\Delta}\) would correspond to the regret of the best stationary policy \(\pi\) over a problem as described above, for a fixed set \(I\) and gap \(\Delta\). The worst-case problem is then obtained by taking the worst case of set \(I\) and gap \(\Delta\). Unfortunately, the quantity defined in Definition 2.6 is very unintuitive, in that it is unclear how it relates to quantities such as dominating numbers, and independence numbers, of (sub-)graphs. we, therefore, define another relevant notion of the problem complexity \(R^{*}\). Figure 1: Bandit problem with one hub action that observes all other \(N-1\) actions. **Definition 2.7**.: Let \(G=(V,E)\) be a graph and \(T\) be a number of rounds. Then the problem complexity \(R_{I}^{*}\) for given set \(I\subseteq V\) is defined as \[R_{I}^{*}\triangleq\min_{J\subseteq I}\max\left\{\delta^{I}(J)^{\frac{1}{2}}T^{ \frac{1}{2}},\,\delta^{V}(I\setminus J)^{\frac{1}{3}}T^{\frac{2}{3}}\right\}.\] Moreover, we define the problem complexity \(R^{*}\) as \[R^{*}\triangleq\max_{I\subseteq V}R_{I}^{*}.\] This definition is much more tractable from a graph perspective, as it involves only two relevant graph-dependent quantities, namely the dominating set \(\delta^{I}(J)\) of \(J\) from \(I\), and the dominating number \(\delta^{V}(I\setminus J)\) of \(I\setminus J\) from \(V\). Here the choice of the optimal policy is reduced to only choosing the set \(J\) that is best explored from inside of \(I\), and we then select the worst case of \(I\). Interestingly, the following lemma shows that both definitions of the problem complexity are almost equivalent and differ only up to a logarithmic factor. 
From now on, whenever talking about the problem complexity, we specify which definition we use and the reasons why. **Lemma 2.8**.: _Let \(G=(V,E)\) be a graph and \(I\subseteq V\) be any set of actions. Then for the problem complexities \(Q_{I}^{*}\) and \(R_{I}^{*}\), the following inequalities hold._ \[R_{I}^{*}/(10\log N)\leq Q_{I}^{*}\leq 2R_{I}^{*}\] The proof of this lemma can be found in Appendix B. ## 3 Lower Bound Ever since the introduction of the setting by Mannor and Shamir (2011), the lower bound in Proposition 2.4 was used to drive the ideas for the algorithms. However, in general, this lower bound holds only when \(T\geq 374\alpha^{3}\). Even though approaches of algorithms and their upper bound analyses differ from paper to paper, most of them are able to match the lower bound for \(T\geq 374\alpha^{3}\). Without the lower bound for regimes where \(T<374\alpha^{3}\), there was no incentive for the algorithms to strive for a different rate than the one suggested by Proposition 2.4. In this section, we present a new lower bound that holds regardless of the value of \(T\) and thus, extend the result in Proposition 2.4. The following theorem shows a regret lower bound that scales with the problem complexity \(R^{*}\) and is one of the main results of our paper. **Theorem 3.1**.: _Let \(G=(V,E)\) be a directed graph with \(N=|V|\) and \(T\) be a number of rounds. Then, for any learner, there exists a sequence of randomized losses such that regret \(R_{T}\) of the learner is lower bounded as_ \[R_{T}\geq\frac{Q^{*}}{2^{7}}\geq\frac{R^{*}}{2^{7}10\log N}.\] _where \(Q^{*}\) and \(R^{*}\) are problem complexities_ Proof idea.: The idea of the proof follows standard lower bound proof steps, see e.g. (Lattimore and Szepesvari, 2020, Chapter 15), with our problem-specific twist. The idea is to create a set of "difficult" stochastic bandit problems and show that no matter what the learner does, there always will be at least two different problems that the learner can not distinguish. We create the problems by first choosing a set of near-optimal actions \(I\) and then setting the gap of every action outside of \(I\) to 1 and inside of \(I\) to some small constant \(\Delta\). The only exception is the optimal action. For different problems, we choose different optimal actions from \(I\) and set its gap to 0. Using information-theoretic tools, we can show that every action needs to be explored enough, i.e. at least \(1/\Delta^{2}\) times, in order to be able to distinguish the problems. The result of the theorem is then obtained by carefully choosing the gap parameter \(\Delta\) and the set of difficult actions \(I\). The detailed proof of the theorem can be found in Appendix C. Note that the lower bound in Theorem 3.1 scales with either of the definitions of the problem complexity. Later, we show that the rate depending on the problem complexity is indeed minimax optimal and we comment on the connection to the rate in the previous papers in Section 5. ## 4 Algorithm The algorithm for our setting, similarly to the previous papers, uses exponential weights to define a probability distribution over the set of actions and then samples according to this distribution. Similarly to Exp3.G algorithm by Alon et al. (2015), we add extra exploration to some actions. This extra exploration adapts to the estimated quality of each action, but also to its informativeness, i.e. to how much it is connected to other promising actions on the graph. 
The construction of the exploration distribution is rather intricate and is the main algorithmic contribution of this paper, and we will discuss it in detail later. We first present the main algorithm, with a part devoted to the construction of this exploration distribution. We then present associated theoretical guarantees. ### Main Algorithm As is usual in the literature, our algorithm (Exp3-EX presented in Algorithm 1) updates at each time \(t\) a probability distribution \(\mathbf{p}_{t}\). It then plays at each round an action \(i_{t}\), which in the graph setting reveals the losses \(\ell_{t,i}\) of all its neighbors \(i\in N_{i_{t}}^{out}\). These losses enable us to update unbiased (cumulative) loss estimates, which will be used in the algorithm. (Cumulative) Loss estimates.In the graph setting, the probability \(P_{t,i}\) of observing loss \(\ell_{t,i}\) is simply the sum of probabilities of playing any of the in-neighbors of arm \(i\) and is defined as \(P_{t,i}\triangleq\sum_{j\in N_{i}^{in}}p_{t,j}\). This allows us, at the end of each round \(t\), to construct conditionally unbiased loss estimates \(\hat{\ell}_{t,i}\) of the loss of each action \[\hat{\ell}_{t,i}\triangleq\frac{\ell_{t,i}\mathbb{I}\{i\in N_{i_{t}}^{out}\}} {P_{t,i}}\qquad\text{ for all }\qquad i\in[N]\] and to also update the cumulative loss estimates as \[\widehat{L}_{t,i}\triangleq\widehat{L}_{t-1,i}+\hat{\ell}_{t,i}\qquad\text{ for all }\qquad i\in[N].\] We now describe the construction of the distribution \(\mathbf{p}_{t}\). As in (Alon et al., 2015) and in several other papers from the literature, we mix a distribution based on exponential weights - i.e. we update at each time \(t\) the exponential weights \(\mathbf{w}_{t}\) and define a normalized distribution \(\mathbf{q}_{t}\), as in (Auer et al., 2002) - with an exploration distribution \(\mathbf{u}_{t}\). We postpone the construction of the exploration distribution to Section 4.2 (summary in Definition 4.3), as it is intricate and our main algorithmic contribution. We describe below how we construct \(\mathbf{w}_{t},\mathbf{q}_{t},\mathbf{p}_{t}\), based on \(\mathbf{u}_{t}\). Recalibration of the parameters.Our algorithm first recalibrates at every step the learning rate \(\eta_{t}\) of the EXP3 part of the algorithm - we describe later how it is chosen. Our algorithm also uses it to calibrate the mixing probability \(\gamma_{t}\triangleq\min\{(\eta_{t}T)^{-1},1/2\}\) of sampling the exploration distribution \(\mathbf{u}_{t}\). (Renormalized) Exponential weights.Based on the cumulative loss estimate \(\widehat{L}_{t-1,i}\) of arm \(i\) at time \(t\), we can define as in (Auer et al., 2002) the exponential weights \(w_{t,i}\) as \[w_{t,i}\triangleq\exp(-\eta_{t}\widehat{L}_{t-1,i})\qquad\text{ for all }\qquad i\in[N].\] Using these weights, we can construct a distribution simply by re-normalizing them to define \[q_{t,i}\triangleq\frac{w_{t,i}}{W_{t}}\triangleq\frac{w_{t,i}}{\sum_{j\in[N]} w_{t,j}}\qquad\text{for all }\qquad i\in[N]. \tag{1}\] Mixed distribution.Based on \(\mathbf{u}_{t}\), we define our sampling distribution as \[p_{t,i}\triangleq(1-\gamma_{t})q_{t,i}+\gamma_{t}u_{t,i}\qquad\text{for all }\qquad i\in[N]. \tag{2}\] So far the algorithm is not different from the majority of algorithms designed for the graph setting and it is summarized in Algorithm 1.
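To make these steps concrete, below is a minimal numpy sketch of one round of the update just described (loss estimates, renormalized exponential weights, and the mixed distribution). The adjacency matrix, the loss vector, the learning-rate value and the uniform vector passed in place of the exploration distribution \(\mathbf{u}_{t}\) of Definition 4.3 are illustrative assumptions, not the paper's implementation.
```
import numpy as np

def exp3_ex_round(G, L_hat, losses, eta_t, T, u_t, rng):
    """One round of the update in Algorithm 1 (sketch).

    G      : (N, N) 0/1 adjacency matrix with self-loops; G[i, j] = 1 iff (i, j) is in E
    L_hat  : (N,) cumulative loss estimates from the previous round
    losses : (N,) losses assigned by the environment for this round
    u_t    : (N,) exploration distribution (any probability vector; the paper
             uses the refined construction of Definition 4.3)
    """
    N = len(L_hat)
    gamma_t = min(1.0 / (eta_t * T), 0.5)            # mixing probability gamma_t
    w = np.exp(-eta_t * (L_hat - L_hat.min()))       # exponential weights (shifted for stability)
    q = w / w.sum()                                  # normalized distribution q_t
    p = (1.0 - gamma_t) * q + gamma_t * u_t          # mixed sampling distribution p_t

    i_t = rng.choice(N, p=p)                         # play action i_t
    observed = G[i_t, :] == 1                        # out-neighbors of i_t are observed
    P = G.T @ p                                      # P_{t,i}: probability of observing arm i
    ell_hat = np.where(observed, losses / P, 0.0)    # importance-weighted loss estimates
    return i_t, L_hat + ell_hat, p

# toy usage on the hub graph of Figure 1 with N = 5 (action 4 observes every action)
rng = np.random.default_rng(0)
N, T = 5, 1000
G = np.eye(N, dtype=int)
G[4, :] = 1
L_hat = np.zeros(N)
i_t, L_hat, p = exp3_ex_round(G, L_hat, rng.uniform(size=N), eta_t=0.05, T=T,
                              u_t=np.full(N, 1.0 / N), rng=rng)
```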
The key difference lies in the exploration distributions \((u_{t,i})_{i\in[N]}\) leveraging the structure of the graph, and in the learning rates \(\eta_{t}\) defined later in Theorem 4.4. In particular, the exploration distributions \((u_{t,i})_{i\in[N]}\) set our algorithm apart from the previous algorithms and enable us to improve the upper bound to match the newly proposed lower bound. The following section explains all the details necessary for the definition of the exploration distributions. ``` Input:\(G=(V,E)\), \(\widehat{L}_{0,i}=0\) for all \(i\in[N]\) for\(t=1\)to\(T\)do Set learning rate \(\eta_{t}\) (see Theorem 4.4) \(\gamma_{t}=\min\{(\eta_{t}T)^{-1},1/2\}\) \(w_{t,i}=(1/N)\exp(-\eta_{t}\widehat{L}_{t-1,i})\) \(W_{t}=\sum_{i\in[N]}w_{t,i}\) \(q_{t,i}=w_{t,i}/W_{t}\) \(p_{t,i}=(1-\gamma_{t})q_{t,i}+\gamma_{t}u_{t,i}\qquad\text{ (see Definition 4.3)}\) Choose \(i_{t}\sim\mathbf{p}_{t}=(p_{t,1},\,\ldots,\,p_{t,N})\) Observe losses \(\ell_{t,i}\) for \(i\in N^{out}(i_{t})\) \(P_{t,i}=\sum_{j\in N^{in}(i)}p_{t,j}\) \(\hat{\ell}_{t,i}=\ell_{t,i}\mathbb{I}\{i\in N^{out}(i_{t})\}/P_{t,i}\) \(\widehat{L}_{t,i}=\widehat{L}_{t-1,i}+\hat{\ell}_{t,i}\) endfor ``` **Algorithm 1**Exp3-EX ### Mixing Distribution and Exploration In the graph setting, the interest of an algorithm for sampling an arm is not only characterized by the quality of this arm - i.e. minus its cumulative loss at time \(t\) - but also by the informativeness of this arm about other relevant arms - namely, whether or not it is connected to many arms with small cumulative loss at time \(t\). While a classical adversarial bandit algorithm would take into account the first of these two factors, we need to add extra exploration to take into account the second factor, namely the connections of the arms through the graph structure. The idea of the algorithm is to homogenize the actions by grouping them according to their cumulative loss as well as the amount of information they provide, and then define the exploration for each partition separately. We create the partitioning in two steps. Partitioning of the actions into sets \((I_{t,k,l})_{{}_{t,k,l}}\).For every round \(t\), we create partitions \(\{I_{t,k}\}_{k\in[K+1]}\), for \(K=\left\lceil 5\log_{2}(N)\right\rceil\), such that the normalized exponential weights of arms, defined in Equation 1, are similar within a partition. More precisely, define \[I_{t,k}\triangleq\left\{i\in[N]:q_{t,i}\in(2^{-k},2^{-k+1}]\right\}.\] The last partition \(I_{t,K+1}\) contains the rest of the arms, i.e. \[I_{t,K+1}\triangleq\left\{i\in[N]:q_{t,i}\leq 2^{-K}\right\}.\] Note that for every action \(i\in I_{t,K+1}\), \(q_{t,i}\) can be upper bounded by \(1/N^{5}\). We further subdivide each set \(I_{t,k}\) into subsets that are roughly homogeneous in terms of the numbers of neighbors in \(I_{t,k}\). For every arm \(i\in I_{t,k}\), we define \(\deg_{t,k}(i)\) as the number of neighbors of \(i\) within partition \(I_{t,k}\): \[\deg_{t,k}(i)\triangleq|\{j\in I_{t,k}:(i,j)\in E\}|.\] For \(L=\left\lceil\log_{2}(N)\right\rceil\), we then let arm \(i\in I_{t,k,l}\), for \(l\in[L]\), if and only if \(\deg_{t,k}(i)\in(N2^{-l},N2^{-l+1}]\). The following definition summarizes the construction of the partitions.
**Definition 4.1**.: Let \(K\triangleq\left\lceil 5\log_{2}(N)\right\rceil\) and \(L\triangleq\left\lceil\log_{2}(N)\right\rceil\), then for every \((t,k,l)\in[T]\times[K]\times[L]\) we define \[I_{t,k,l}\triangleq\Big{\{}i\in[N]:q_{t,i}\in(2^{-k},2^{-k+1}],\] \[\deg_{t,k}(i)\in\left(N2^{-l},N2^{-l+1}\right]\Big{\}}\] and \[I_{t,K+1}\triangleq\Big{\{}i\in[N]:q_{t,i}\leq 2^{-K}\Big{\}}.\] Partition in exploration sets from inside and outside of \(I_{t,k,l}\).Inspired by the definition of the problem complexity in Definition 2.7, we can define a splitting of every set \(I_{t,k,l}\), for \(k\in[K]\) and \(l\in[L]\), into two parts, \(J_{t,k,l}\) and \(J^{\prime}_{t,k,l}\triangleq I_{t,k,l}\setminus J_{t,k,l}\) that minimize expression \[\max\left\{\delta^{I_{t,k,l}}(J_{t,k,l})^{\frac{1}{2}}T^{\frac{1}{2}},\, \delta^{V}(J^{\prime}_{t,k,l})^{\frac{1}{3}}T^{\frac{2}{3}}\right\}.\] We write \(R^{*}_{I_{t,k,l}}\) the value of this minimum for each given set \(I_{t,k,l}\). As we discussed in Section 2.3, an optimal exploration of the set \(J_{t,k,l}\) can be done using actions in \(I_{t,k,l}\), while an optimal exploration of \(J^{\prime}_{t,k,l}\) can be performed using actions outside \(I_{t,k,l}\). In order to construct our exploration distribution, we would like to have access to the sets \(J_{t,k,l}\) and \(J^{\prime}_{t,k,l}\), and more specifically to some (approximate) dominating sets, in order to be able to define the exploration distribution. While it is possible in theory to find these sets based on the \(I_{t,k,l}\) and on the graph, solving the optimization problem that leads to them can be computationally very expensive. For this reason, we do not work directly with the sets \(J_{t,k,l}\), \(J^{\prime}_{t,k,l}\), but rather with some approximations that are computationally tractable. Such approximations exist, as stated in Corollary 4.2 below, and are described in Appendix D. **Corollary 4.2**.: _The algorithm described in Appendix D, which is polynomial time in \(N\) (as it consists in solving a linear optimization problem under linear constraints) outputs partitions \(\bar{J}_{t,k,l}\), \(\bar{J}^{\prime}_{t,k,l}\) of \(I_{t,k,l}\) together with their corresponding dominating sets \(\bar{D}_{t,k,l}\), \(\bar{D}^{\prime}_{t,k,l}\), which satisfy_ \[|\bar{D}_{t,k,l}| \leq\log(N)\delta^{I_{t,k,l}}(\bar{J}_{t,k,l}),\] \[|\bar{D}^{\prime}_{t,k,l}| \leq\log(N)\delta^{V}(\bar{J}^{\prime}_{t,k,l}),\] _and_ \[\max\Big{\{}|\bar{D}_{t,k,l}|^{\frac{1}{2}}T^{\frac{1}{2}},\,| \bar{D}^{\prime}_{t,k,l}|^{\frac{1}{3}}T^{\frac{2}{3}}\Big{\}}\leq\\ \leq 24\times 10^{4}\log(N)^{4}\sqrt{\log(N)}R^{*}_{I_{t,k,l}}.\] As mentioned, \(\bar{J}_{t,k,l}\) (resp. \(\bar{J}^{\prime}_{t,k,l}\)) serves as a surrogate of \(J_{t,k,l}\) (resp. \(J^{\prime}_{t,k,l}\)) and dominating set \(\bar{D}_{t,k,l}\) (resp. \(\bar{D}^{\prime}_{t,k,l}\)) is an approximation of the smallest dominating set of \(\bar{J}_{t,k,l}\) (resp. \(\bar{J}^{\prime}_{t,k,l}\)) from \(I_{t,k,l}\) (resp. \(V\)). While the full construction of these sets is deferred to Appendix D, we discuss and sketch briefly their construction in Subsection 2.3. Having an efficient way of computing partitions \(\bar{J}^{\prime}_{t,k,l}\) and their dominating sets \(\bar{D}^{\prime}_{t,k,l}\) allows us to define the following exploration distribution **Definition 4.3**.: let \(I_{t,k,l}\), for \((t,k,l)\in[T]\times[K]\times[L]\) be a partition of \(V\) from Definition 4.1 and \(\bar{D}^{\prime}_{t,k,l}\) be a dominating set, from Corollary 4.2. 
Then, we can define \[u_{t,i}\triangleq\frac{1}{KL+1}\left(\frac{1}{N}+\sum_{k\in[K]}\sum_{l\in[L] }u_{t,i}^{k,l}\right) \tag{3}\] where \(u_{t,i}^{k,l}=\frac{1}{|\bar{D}^{\prime}_{t,k,l}|}\) for all \(i\in\bar{D}^{\prime}_{t,k,l}\) and \(u_{t,i}^{k,l}=0\) for all \(i\not\in\bar{D}^{\prime}_{t,k,l}\). The distribution \((u_{t,i})_{i\in[N]}\) can be seen as a mixture of uniform distributions where the term \(1/N\) in Equation 3 corresponds to the uniform distribution over all the actions and \((u_{t,i}^{k,l})_{i\in[N]}\) corresponds to the uniform distribution over set \(\bar{D}^{\prime}_{t,k,l}\) which, as a consequence, secures exploration of \(\bar{J}^{\prime}_{t,k,l}\). ### Main Upper Bound Theorem Utilization of the exploration distributions \((u_{t,i})_{i\in[N]}\) from the previous section and appropriately tuned learning rates \(\eta_{t}\) enables us to prove the optimal regret upper bound for Algorithm 1 stated in the following theorem. **Theorem 4.4**.: _Let the learning rate \(\eta_{t}\) be defined as_ \[\min_{s\in[t]}\min_{k\in[K]}\min_{l\in[L]}\min\left\{|\bar{D}_{s,k,l}|^{-\frac{1 }{2}}T^{-\frac{1}{2}},|\bar{D}^{\prime}_{s,k,l}|^{-\frac{1}{3}}T^{-\frac{1}{3} }\right\},\] _where we recall that \(\bar{D}_{t,k,l}\) and \(\bar{D}^{\prime}_{t,k,l}\) are the dominating sets output by the algorithm described in Appendix D. Then the regret of Algorithm 1 is upper bounded as_ \[R_{T}\leq 24\times 10^{4}\log(N)^{5}DR^{*}\] _for_ \[D =4KL+2+\left((KL)^{2}+KL+1\right)\log(N),\] \[K =\lceil 5\log_{2}(N)\rceil,\] \[L =\lceil\log_{2}(N)\rceil.\] _Proof idea._ The proof of the theorem relies heavily on the partitioning from Definition 4.1 by decomposing the regret along the partitions. The careful construction of the partitions allows us to show that the actions corresponding to one individual partition contribute to the regret no more than \(R^{*}\), up to logarithmic factors. The fact that the number of partitions \(KL+1\) is only polylogarithmic in the number of actions allows us to obtain the final regret bound as the sum of regret bounds for individual partitions. The detailed proof of the theorem can be found in Appendix E. ## 5 Discussion We have presented the regret lower bound for the setting (Section 3, Theorem 3.1) as well as the matching, up to logarithmic terms, regret upper bound for the proposed algorithm (Section 4, Theorem 4.4). Together, these two theorems prove that the minimax rate for online learning with feedback graphs is proportional to the problem complexity \(R^{*}\) (Definition 2.7). In this section, we compare the minimax rate presented in this paper to the previously known results and emphasize the improvements that we bring to the setting. We focus mainly on two regimes. When \(T\) is large enough compared to \(\alpha^{3}\), we recover results from the literature, namely a minimax rate of order \(\sqrt{\alpha T}\) up to logarithmic terms. We do this by showing that the problem complexity \(R^{*}\) is equal to \(\sqrt{\alpha T}\) when \(T\) is large enough. When \(T\) is small, we demonstrate the existence of graphs for which the rate \(\sqrt{\alpha T}\) is far from optimal. An important consequence of this statement is that all the algorithms proposed in the previous papers prove only suboptimal regret upper bounds for some graphs and budgets \(T\). This also means that Algorithm 1 is the first provably optimal algorithm for the setting in all possible problems and regimes.
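As an illustration of Definitions 4.1 and 4.3, the following sketch builds the weight-level and degree partition and a mixture-of-uniforms exploration distribution from a given graph and weight vector. It is a simplified stand-in rather than the actual construction: each cell is dominated greedily from the whole vertex set instead of using the split \(\bar{J}_{t,k,l},\bar{J}^{\prime}_{t,k,l}\) and the procedure of Appendix D, and the final renormalization plays the role of the exact \(1/(KL+1)\) factor of Equation 3.
```
import numpy as np
from math import ceil, log2

def greedy_dominating_set(G, targets):
    """Greedy approximate dominating set of `targets` from all vertices of G."""
    uncovered, dom = set(targets), []
    while uncovered:
        # pick the vertex whose out-neighborhood covers the most uncovered targets
        best = max(range(G.shape[0]),
                   key=lambda i: len(uncovered & set(np.flatnonzero(G[i, :]))))
        dom.append(best)
        uncovered -= set(np.flatnonzero(G[best, :]))
    return dom

def exploration_distribution(G, q):
    """Sketch of Definitions 4.1/4.3: group actions by weight level and degree,
    dominate each cell greedily, and mix uniform distributions over the
    dominating sets (the refined split of Corollary 4.2 is not reproduced)."""
    N = len(q)
    K, L = ceil(5 * log2(N)), ceil(log2(N))
    u = np.full(N, 1.0 / N)                          # the 1/N term of Equation 3
    for k in range(1, K + 1):
        I_k = [i for i in range(N) if 2.0 ** (-k) < q[i] <= 2.0 ** (-k + 1)]
        for l in range(1, L + 1):
            cell = [i for i in I_k
                    if N * 2.0 ** (-l) < G[np.ix_([i], I_k)].sum() <= N * 2.0 ** (-l + 1)]
            if not cell:
                continue
            dom = greedy_dominating_set(G, cell)
            u[dom] += 1.0 / len(dom)                 # uniform distribution over the dominating set
    return u / u.sum()                               # renormalize into a probability vector
```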
### Regime when \(T\) is Large Previous papers proved that the minimax regret scales with \(\sqrt{\alpha T}\) whenever \(T\geq 374\alpha^{3}\), while the minimax regret presented in this paper scales with the problem complexity \(R^{*}\) instead. The following corollary shows that the two rates are, up to log factors, the same when \(T\) is large enough. **Corollary 5.1**.: _Let \(G\) be a graph with independence number \(\alpha\). Then for any \(T\geq\alpha^{3}\), the problem complexity \(R^{*}\) simplifies to_ \[R^{*}=\sqrt{\alpha T}.\] The proof for this corollary can be found in Appendix F.1. As our upper and lower bounds in Theorems 3.1 and 4.4 match \(R^{*}\) up to logarithmic terms, we recover the results from the literature. ### Regime when \(T\) is Small From the previous section, we know that the minimax rate for large enough \(T\) is \(\sqrt{\alpha T}\). It is also true that most of the prior algorithms can achieve a regret upper bound that scales with \(\sqrt{\alpha T}\). However, at first glance, it is not obvious how significant the improvement of the newly defined problem complexity is. We return to the example from Lemma 2.5 and introduce a couple of examples demonstrating that the rate \(\sqrt{\alpha T}\) can be significantly sub-optimal. **Example 1**.: Lemma 2.5 provides an example where the graph contains \(N-1\) independent vertices and one hub connected to all other vertices. The following corollary states that the minimax rate for this graph indeed scales with \(\delta^{1/3}T^{2/3}\) instead of \(\sqrt{\alpha T}\). **Corollary 5.2**.: _Let \(G=(V,E)\) be a graph on \(N\) vertices with one hub, i.e. the set of edges is \(E=\{(N,i):i\in[N-1]\}\). Then for any \(T<\alpha^{3}\), the problem complexity \(R^{*}\) simplifies to_ \[R^{*}=T^{\frac{2}{3}}.\] The proof for this corollary can be found in Appendix F.2. This result also shows that with the increasing number of actions, the gap between \(\sqrt{\alpha T}\) and the problem complexity can be arbitrarily large. **Example 2**.: Generalizing the previous example, we can create a graph consisting of two parts: a star graph with \(1+N_{1}\) vertices and \(N_{2}\) independent vertices without any edges. Now the problem complexity is of order \((T^{2/3}+\sqrt{N_{2}T})\wedge\sqrt{(N_{1}+N_{2})T}\) while \(\alpha=N_{1}+N_{2}\) and \(\delta=1+N_{2}\). If either \(N_{2}\geq N_{1}\), or \(T\geq N_{1}^{3}\), the problem complexities \(R^{*}\) and \(Q^{*}\) are of order \(\sqrt{(N_{1}+N_{2})T}=\sqrt{\alpha T}\) (up to logarithmic terms) as predicted by Alon et al. (2015). However, if \(N_{2}<N_{1}\) and \(T<N_{1}^{3}\) (large star graph), then the problem complexity is of order \(T^{2/3}\). This is an example where the minimax rate is much smaller than \(\delta^{1/3}T^{2/3}\) or \(\sqrt{\alpha T}\). This example also illustrates that the minimax rates from the previous papers are not valid when \(T\) is small enough. In contrast, the true minimax rate scales with the problem complexity, which demonstrates that it is important to adapt locally to the graph and that global quantities like the dominating number or the independence number are not refined enough to describe the problem complexity. **Example 3**.: Expanding the previous example, we consider a graph where we have \(\sum_{k\leq K}(k+1)m_{k}\) vertices. This graph consists, for each \(k\in\{1,\ldots,K\}\), of \(m_{k}\) star graphs with \(k+1\) vertices each, with no connections between them. In this case \(\alpha=\sum_{k\leq K}km_{k}\) and \(\delta=\sum_{k\leq K}m_{k}\).
Now, write \(A\) for the set of indexes \(k\) such that \(\sqrt{m_{k}kT}\geq m_{k}^{1/3}T^{2/3}\). The problem complexities \(Q^{*},R^{*}\) are of order \(\sqrt{T\sum_{k\not\in A}m_{k}k}+(\sum_{k\in A}m_{k})^{1/3}T^{2/3}\). For a graph containing some large star graphs, e.g. whenever \(\sup_{k}m_{k}k^{3}\geq T\), the rate is, up to logarithmic terms, of order \((\sum_{k\in A}m_{k})^{1/3}T^{2/3}\). This can be significantly smaller than \(\delta^{1/3}T^{2/3}\) if \(A\) is very different from \(\{1,\ldots,K\}\), e.g. when the graph contains a small number of very large star graphs and a moderate number of small star graphs - an extreme case being the previous example. These examples highlight that in the case of large graphs that are not homogeneous in the size of their hubs, the problem complexity is not driven by quantities like the dominating number or the independence number, but by some related quantities that are local in the graph. Our algorithm is able to adapt to such local structures. ### Exploration Distribution We believe that the exploration distribution in Definition 4.3 plays a crucial role in adapting the Exp3 algorithm to the setting for small \(T\) and that the algorithm is suboptimal without it. In general, exponential weights encourage playing actions with small cumulative loss but neglect actions that are highly informative, i.e. connected to many other actions. To correct this behavior, we look at every partition \(I_{t,k,l}\) and identify the set \(\bar{J}^{\prime}_{t,k,l}\) from Corollary 4.2 and its dominating set \(\bar{D}^{\prime}_{t,k,l}\). We already know that the optimal way of exploring \(\bar{J}^{\prime}_{t,k,l}\) is by playing more informative actions in \(\bar{D}^{\prime}_{t,k,l}\). To enforce this behavior in the Exp3 algorithm, we simply add extra uniform exploration to actions in \(\bar{D}^{\prime}_{t,k,l}\). ## Acknowledgements The work of A. Carpentier is partially supported by the Deutsche Forschungsgemeinschaft (DFG) Emmy Noether grant MuSyAD (CA 1488/1-1), by the DFG CRC 1294 "Data Assimilation", Project A03, by the DFG Forschungsgruppe FOR 5381 "Mathematical Statistics in the Information Age - Statistical Efficiency and Computational Tractability", Project TP 02, by the Agence Nationale de la Recherche (ANR) and the DFG on the French-German PRCI ANR ASCAI CA 1488/4-1 "Aktive und Batch-Segmentierung, Clustering und Seriation: Grundlagen der KI".
2303.16102
KeyMatchNet: Zero-Shot Pose Estimation in 3D Point Clouds by Generalized Keypoint Matching
In this paper, we present KeyMatchNet, a novel network for zero-shot pose estimation in 3D point clouds. Our method uses only depth information, making it more applicable for many industrial use cases, as color information is seldom available. The network is composed of two parallel components for computing object and scene features. The features are then combined to create matches used for pose estimation. The parallel structure allows for pre-processing of the individual parts, which decreases the run-time. Using a zero-shot network allows for a very short set-up time, as it is not necessary to train models for new objects. However, as the network is not trained for the specific object, zero-shot pose estimation methods generally have lower accuracy compared with conventional methods. To address this, we reduce the complexity of the task by including the scenario information during training. This is typically not feasible as collecting real data for new tasks drastically increases the cost. However, for zero-shot pose estimation, training for new objects is not necessary and the expensive data collection can thus be performed only once. Our method is trained on 1,500 objects and is only tested on unseen objects. We demonstrate that the trained network can not only accurately estimate poses for novel objects, but also demonstrate the ability of the network on objects outside of the trained class. Test results are also shown on real data. We believe that the presented method is valuable for many real-world scenarios. Project page available at keymatchnet.github.io
Frederik Hagelskjær, Rasmus Laurvig Haugaard
2023-03-28T16:11:31Z
http://arxiv.org/abs/2303.16102v3
# GP3D: Generalized Pose Estimation in 3D Point Clouds: A case study on bin picking ###### Abstract In this paper, we present GP3D, a novel network for generalized pose estimation in 3D point clouds. The method generalizes to new objects by using both the scene point cloud and the object point cloud with keypoint indexes as input. The network is trained to match the object keypoints to scene points. To address the pose estimation of novel objects we also present a new approach for training pose estimation. The typical solution is a single model trained for pose estimation of a specific object in any scenario. This has several drawbacks: training a model for each object is time-consuming and energy-consuming, and by excluding the scenario information the task becomes more difficult. In this paper, we present the opposite solution: a scenario-specific pose estimation method for novel objects that does not require retraining. The network is trained on 1500 objects and is able to learn a generalized solution. We demonstrate that the network is able to correctly predict novel objects, and we show the ability of the network to perform outside of the trained class. We believe that the demonstrated method is a valuable solution for many real-world scenarios. Code and trained network will be made available after publication. pose estimation, point cloud, deep learning ## I Introduction Pose estimation enables much greater flexibility in robotics. New objects can be manipulated without the need for mechanical fixtures or teaching specific robot positions. This enables a more adaptive production with a shorter changeover time and faster adaptation to production demands. However, the set-up of computer vision systems can itself be a very time-consuming task [6]. There is, therefore, great interest in pose estimation solutions with simple set-ups. Deep learning has allowed learning the specifics of the object and the scenario, thus giving much better performance than human fine-tuning [10, 18]. However, collecting the data for training the deep neural networks can be very time-consuming, thereby limiting the usability. To avoid this data collection, the use of synthetic data has gained widespread use [10]. But this introduces a domain gap between the real world and the training data, which is often handled by adding large amounts of domain randomization during training [18]. In this solution, a single model is trained for one object, adapted to any scenario. However, we believe that by utilizing the scenario information a single model can be trained for multiple objects. For many set-ups this much better fits the tasks at hand. It allows us to train on real data, as training is not required for new objects. New objects can be introduced faster as training is not required, and the power consumption for training will be removed. An example is robotic work-cells where the scenario is constant while new objects are introduced. In this paper we focus on the task of bin picking with homogeneous bins, which is a difficult challenge that often occurs in industry. Fig. 1: Illustration of the pose estimation method. The input to the network is a scene point cloud and an object point cloud with keypoints spread over the object. The output of the network is object segmentation and keypoint votes. Notice how the rotational symmetry makes a stippled pattern of keypoint predictions. Finally these votes are used in RANSAC for pose estimation.
The homogeneous bin also removes the need for object detection and allows us to focus only on pose estimation. We recognize that the general approach as shown in e.g. [18] has huge importance, but also state that it is not the best solution for all tasks. In this paper we show that generalized pose estimation can show very good performance when restricting the scenario. We believe that these results invite further research into this topic, as more flexible robotic set-ups with lower power consumption are an important goal. ## II Related Works Visual pose estimation is an important topic, and many different approaches have been developed. **Classic Pose Estimation:** The classic pose estimation approach is based on matching features between the scene and object, and computing the pose by e.g. RANSAC [2]. The matches are computed using handcrafted features, which are generally computed in 3D point clouds. A large number of handcrafted features have been developed [4], with Fast Point Feature Histograms (FPFH) [16] being one of the best performing features. **Deep Learning Based:** Generally, deep learning based methods are based on color information [10, 18]. This is possibly a result of the many deep learning based methods developed for this domain, with huge pre-trained networks available. These methods have vastly outperformed the classical methods; however, a network is often trained per object [10, 18]. Deep learning for pose estimation has also been performed in point clouds with methods such as PointVoteNet [5]. We base our method on PointVoteNet, but only train a single network for all objects. **Generalized Pose Estimation:** Several approaches have been developed for generalized pose estimation. As in deep learning based pose estimation, the field of generalized pose estimation is also dominated by color-based methods [8, 12, 17]. The general approach for these methods is to match templates of the object with the real image. These templates can either be generated synthetically as in [14] or with a few real images as in FS6D [8]. The same approach has also been used for tracking of unknown objects [13]. MegaPose6D [12] is a notable example where the network is trained on a huge dataset with 2 million images. The method most similar to ours is a point cloud based method [3]. It also uses the object model as input. However, several differences are present compared with our approach: the method includes color information, it does not limit the scenario and thus does not obtain the increased performance from this, and it does not separate the object and scene features. ## III Method The developed method is based on DGCNN [19] and bears similarity to the pose estimation network in [5]. Point clouds are well suited for industrial objects as precise CAD models are generally available, but color and surface information are rarely available. Compared with PointVoteNet we introduce several differences. Instead of learning the prediction of CAD model keypoints directly in the network, as in previous methods, we instead train the network to match keypoint features from the object to the scene point cloud. The network structure is shown in Fig. 2. Fig. 2: The network structure for the developed method. Initially features are computed independently for the object and the scene. This allows us to compute the object features a single time and match them with multiple scene point clouds. The “Edge Feature” is specific to DGCNN and computes and concatenates neighbor points. Initially the object and keypoint features are computed independently using the standard feed-forward part of DGCNN. However, as only the keypoints from the object point cloud are needed, the object point cloud is down-sampled to the twenty keypoints after the second neighbor computation in the DGCNN. As the number of neighbors is set to the same as the number of keypoints, all keypoints include information about the others. After both object and scene features are computed they are joined together to create a matrix of length \((n*k)\), where \(n\) is the number of points and \(k\) is the number of keypoints. A linear layer then processes each pair independently and a local maxpool reduces the matrix to length \((n)\), combining all keypoint information for the scene point. The object features are then joined again and a linear layer computes features per keypoint-scene point pair. By applying a softmax the prediction for each keypoint is computed. To compute the segmentation, a local maxpool is applied, followed by a linear layer. In Fig. 3 the network's ability to correctly classify keypoints for different objects is shown. Finally, RANSAC is used with the segmentation and keypoint predictions to compute the final object pose. We employ the vote threshold as in [5] to allow multiple predictions at a single point. The vote threshold is set to 0.7. ### _Generating scene data_ As the scenario is homogeneous bin-picking, the detection is vastly simplified. As the bin position is known beforehand, and the contents are homogeneous, any point belongs to an object. Thus we can randomly sample a point, and this point will belong to an object. By then extracting points within a radius set to the object model diagonal, all points in the scene belonging to the object will be obtained. The point cloud is then centered around the sampled point to allow for instance segmentation. The scene point clouds are generated using BlenderProc [1]. ### _Generating object data_ The object point clouds are generated using Poisson sampling to obtain 2048 evenly sampled points on the surface. Farthest point sampling is then used to obtain the keypoints spread evenly on the object. During training the keypoints are continuously re-computed with random initialization, to avoid the network over-fitting to a specific combination. ### _Computing object features off-line_ As shown in Fig. 2, the features computed from the object point cloud are independent of the features computed from the scene point cloud. This is opposed to [3]. This allows us to use the same object features for multiple pose estimations. The object features can also be computed offline to reduce the run-time and the computational cost. ## IV Experiments To test the developed method several experiments are performed. A single network is trained and tested for all the experiments. The network is trained on 1500 free CAD models from an online database of electronic components1. Fifty models are used as validation to test the network's ability on unseen objects. The method is also tested on novel objects. Seven electrical components from a different database are tested [7]. The seven test objects are shown at the top of Fig. 4. Additionally, we test the ability of the network on industrial objects from the WRS [21] dataset. On this dataset we show the network's ability to generalize to other objects outside of the training scope. The objects from the WRS dataset are shown at the bottom of Fig. 4.
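The object keypoints described in the object-data generation above are obtained with farthest point sampling from a random start; a minimal numpy sketch of that step is given below. The Poisson surface sampling that produces the 2048-point object cloud (for instance via a mesh-sampling routine such as Open3D's sample_points_poisson_disk) is assumed to have been performed already.
```
import numpy as np

def farthest_point_sampling(points, k, rng=None):
    """Select k keypoint indices spread evenly over a point cloud of shape (N, 3),
    starting from a random point (re-drawn during training)."""
    rng = np.random.default_rng() if rng is None else rng
    n = points.shape[0]
    keypoint_idx = np.empty(k, dtype=int)
    keypoint_idx[0] = rng.integers(n)                              # random initialization
    dist = np.linalg.norm(points - points[keypoint_idx[0]], axis=1)
    for j in range(1, k):
        keypoint_idx[j] = np.argmax(dist)                          # farthest remaining point
        new_dist = np.linalg.norm(points - points[keypoint_idx[j]], axis=1)
        dist = np.minimum(dist, new_dist)                          # distance to nearest chosen keypoint
    return keypoint_idx

# e.g. 20 keypoints from a 2048-point object cloud stored as a (2048, 3) array:
# keypoints = object_points[farthest_point_sampling(object_points, 20)]
```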
All point cloud processing and pose estimation is performed using the Open3D framework [22]. The network processing was performed using PyTorch [15]. Footnote 1: [https://www.pcb-3d.com/membership_type/free/](https://www.pcb-3d.com/membership_type/free/) ### _Network Training_ The 1500 components are split into a 1450/50 train-validation dataset. During each epoch we generate 160 point clouds for each component. Thus each training epoch consists of 232000 point clouds. The network is trained with a batch size of 14 (7 on each GPU), using the Adam optimizer [11], with an initial learning rate of 0.0001. We use a step scheduler with a step size of 20 and the gamma parameter set to 0.7. The loss is calculated with cross entropy using a 0.2/0.8 split for segmentation and keypoint loss. For the keypoint loss only points belonging to the object are used. To help the network generalize, Group Norm [20] with group size 32 is used after each linear layer. Group Norm is used as opposed to Batch Norm as a result of the small batch size. Dropout is used for the object features, as the network should not overfit to a specific part of the object. The dropout is set to 40 %, and is used after the last two linear layers of the object feature and the first two of the combined feature. Additionally, up to 0.75 % Gaussian noise is applied to the object and scene point clouds, and 10 % position shift is applied to the object point cloud. The network was trained on a PC environment with two NVIDIA GeForce RTX 2080 GPUs. The network was trained for 120 epochs lasting approximately six days (141 hours). Fig. 3: An illustration of the network's ability to correctly match keypoints from multiple objects. Both objects are unknown to the network, with the right object being out of class. ### _Training and test performance_ The performance for the trained networks is shown in Tab. I. Performance is shown for the loss, segmentation accuracy, and keypoint accuracy. We present the training performance both with and without generalization. The network does not appear to overfit to the training data, and actually shows better performance on the validation set. For the test data with electronic objects not from the same dataset, the network performs well with a slightly lower performance. As the objects are symmetric to varying levels, the keypoint accuracy, despite being low, still gives good pose estimations. This is seen in Fig. 1, where the symmetry of the object results in striped matching of keypoints. However, these matches are still very useful for the pose estimation. ### _Performance for each component_ To analyze the network, performance for each component is shown in Tab. II. The two objects with the highest performance are "3" and "5". These two objects are both very similar to objects in the training data. The most challenging object is "2". The split between the two parts of the object makes it very dissimilar to the training data. The other components perform very well, especially for the segmentation task. ### _Pose Estimation Performance_ To test the pose estimation ability of the network, we compare it with a classic pose estimation method. The classic pose estimation method is FPFH [16] with RANSAC [2]. The performance is measured by the ADD-S score, as it is well suited for the symmetric objects in our dataset [9]. To further test the robustness of the system, varying levels of noise are added to the point clouds. The results are shown in Tab. III.
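For reference, a minimal sketch of the classic FPFH + RANSAC baseline mentioned above, written with Open3D, is given below. The voxel size, search radii and RANSAC parameters are illustrative assumptions, and the function names follow recent Open3D releases, whose exact signatures can differ slightly between versions.
```
import open3d as o3d

def fpfh_ransac_pose(scene_pcd, object_pcd, voxel=0.003):
    """Classic baseline: FPFH features matched with RANSAC (sketch).
    `voxel` is an illustrative resolution; real values depend on object size."""
    src = object_pcd.voxel_down_sample(voxel)
    tgt = scene_pcd.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    feats = [o3d.pipelines.registration.compute_fpfh_feature(
                 pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=10 * voxel, max_nn=100))
             for pcd in (src, tgt)]
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, feats[0], feats[1], True,          # mutual_filter
        3 * voxel,                                   # max correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,                                           # ransac_n
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation                     # 4x4 object-to-scene transform
```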
It can be seen that our method outperforms the classic method for all objects. When adding noise, the difference becomes even more pronounced. However, at \(5\%\) noise the performance also drops significantly for our method. ### _Testing out of class_ The WRS dataset consists of industrial objects, such as motors and pulleys. The objects were used for the WRS assembly challenge held in 2018 [21]. We have chosen them because they represent an industrial challenge and because of their variety. The results of the network performance and pose estimation are shown in Tab. IV. It is seen that our developed method obtains very good performance on these objects, which appear quite different from the training dataset. As seen in Fig. 5, because the objects are quite symmetric, the pose estimation succeeds even though the keypoint prediction is not correct. ### _Run-time_ The run-time of the network was tested both with and without computing the object features at run-time. With the object features computed at run-time, the processing lasts 14.9 ms, while by pre-computing the features the run-time is only 7.9 ms. The separation of object and scene feature computations is thus a significant speed-up. However, for the full system the run-time is currently 84.7 ms. This is mainly related to the RANSAC search, which would be a target for replacement. Fig. 4: The seven electronic components used for testing, along with the seven components from the WRS dataset used for out of class testing. ## V Conclusion In this paper we have presented a novel method for generalized pose estimation. The method consists of a novel network structure and a scenario-specific approach. The method shows very good generalizability across different objects, even with out of class objects. This proves the validity of creating object-independent networks for specific scenarios, which can be useful for many real-world applications. In future work, it will be very interesting to test the method with real data, both for training and testing. Additionally, the network could be adapted to perform full pose estimation to simplify the pipeline and improve the run-time. The objects from the MegaPose6D [12] dataset could also be used to diversify the object types, and test on benchmark datasets.
2307.10084
Eversion Robots for Mapping Radiation in Pipes
A system and testing rig were designed and built to simulate the use of an eversion robot equipped with a radiation sensor to characterise an irradiated pipe prior to decommissioning. The magnets were used as dummy radiation sources which were detected by a hall effect sensor mounted in the interior of the robot. The robot successfully navigated a simple structure with sharp 45{\deg} and 90{\deg} swept bends as well as constrictions that were used to model partial blockages.
Thomas Mack, Mohammed Al-Dubooni, Kaspar Althoefer
2023-07-19T15:55:14Z
http://arxiv.org/abs/2307.10084v1
# Eversion Robots for Mapping Radiation in Pipes ###### Abstract A system and testing rig were designed and built to simulate the use of an eversion robot equipped with a radiation sensor to characterise an irradiated pipe prior to decommissioning. Magnets were used as dummy radiation sources which were detected by a hall effect sensor mounted in the interior of the robot. The robot successfully navigated a simple structure with sharp 45\({}^{\circ}\) and 90\({}^{\circ}\) swept bends as well as constrictions that were used to model partial blockages. ## I Introduction The substitution of humans with robots in hazardous environments is becoming commonplace, especially in the nuclear decommissioning industry where the constant presence of radioactive materials and waste warrants extreme safety precautions for almost all aspects of operation. The exploration and characterisation of potentially irradiated spaces is one such area for which robotic solutions have been developed [1]. However, the deployment of in-situ inspection solutions to the pipes and ducts that riddle these facilities is made difficult due to the small access ports. Their uses range from ventilation to waste drainage, and many are positioned in hard-to-reach areas that make characterising them from the outside extremely difficult or impossible. Many robotic devices have already been developed for the inspection of pipes [2], but most rigid solutions are highly restricted by their size, making them unsuitable for many of the thinner pipes. They also risk spreading contaminants through the pipe and must be decontaminated before reuse or disposed of, leading to unwanted extra costs. Eversion robots (_Figure 1_) - a subset of soft robots - hold significant advantages for this kind of environment. They consist of an inverted sleeve, usually made from fabric or plastic. When inflated, the sleeve grows forward, pulling a tail of new material from the base. The outer walls remain static relative to the environment and do not exert frictional forces. As a result, they have medical applications in and outside the body where delicacy is required [3]. The pipe provides a guide for the robot to travel along, reducing the need for complex control and transforming the mapping into a one-dimensional problem, assuming an absence of forks. They also allow a robot that fills the pipe to retract without any extra assistance such as the retraction device in the cap created by Jeong et al. [4]. Due to the soft nature of the robot, it will also be able to squeeze past some partial blockages. Adding sensors to the tip of eversion robots is a presently studied challenge due to the material continuously moving as the robot extends. Most caps are made from rigid materials and fully encase the tip [4, 5], limiting the size of the aperture they can fit through and undermining the eversion robot's ability to squeeze through spaces smaller than itself. A soft, fabric solution exists [6], but it can have difficulties remaining in place while the robot retracts. ## II Proposal The main focus of characterising pipes in a nuclear environment is locating contaminated areas. The thin walls of an eversion robot would be easily penetrated by beta and gamma radiation. Thus, small radiation sensors could be placed within the robot on the end of the tail instead of mounting them on the tip. They would travel down the full length of the robot as it everts, measuring the distribution of radiation. This removes the need for a cap and does not prevent the robot from passing through tight spaces. While this will prevent a camera from being mounted, it will allow the robot to continue to measure radiation levels past where it is feasible to take a camera on a rigid cap. To simulate this, we replaced the radiation sources with magnets that we detected with a hall-effect sensor. They were placed on a reconfigurable course of pipework for the robot to pass through and detect their locations. ## III Experimental Study ### _Mock-up Environment_ Simple courses of 55mm diameter plastic tubing were constructed to simulate pipes that could be found in a nuclear facility. They contained sharp 45\({}^{\circ}\) and swept 90\({}^{\circ}\) bends as well as some constrictions as small as 40mm. There were no splits in the path as the robot is, so far, unable to actively direct itself in a more complex network. Two sets of magnets were placed along the pipe for detection. Fig. 1: Basic construction and operation of the eversion robot
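A minimal sketch of how the logged data from such a trial could be turned into source positions along the pipe is given below: hall-effect readings are paired with the extension inferred from the drum shaft encoder, and local peaks above a threshold are reported as estimated magnet positions. The calibration constant, the threshold, and the assumption that the sealed tail end advances at roughly twice the eversion rate (so it only enters the pipe after about half the sleeve has everted) are illustrative assumptions and not taken from the actual set-up.
```
import numpy as np

def locate_sources(encoder_counts, hall_readings, mm_per_count, sleeve_length_mm,
                   threshold=0.5):
    """Estimate magnet ('source') positions along the pipe from logged samples.

    encoder_counts   : drum shaft encoder counts, one sample per hall reading
    hall_readings    : normalized hall-effect magnitudes in [0, 1]
    mm_per_count     : drum calibration, mm of sleeve paid out per encoder count
    sleeve_length_mm : total sleeve length; the sensor sits on the sealed tail end
    """
    extension = np.asarray(encoder_counts) * mm_per_count      # everted length
    sensor_pos = 2.0 * extension - sleeve_length_mm            # assumed tail-end position
    readings = np.asarray(hall_readings)
    mid = readings[1:-1]
    peaks = (mid > threshold) & (mid >= readings[:-2]) & (mid >= readings[2:])
    inside = sensor_pos[1:-1] > 0                              # sensor has entered the pipe
    return sensor_pos[1:-1][peaks & inside]                    # estimated source positions (mm)
```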
This will remove the need for a cap and not limit the robot from passing through tight spaces. While this will prevent a camera from being mounted, it will allow the robot to continue to measure radiation levels past where it is feasible to take a camera on a rigid cap. To simulate this, we replaced the radiation sources with magnets that we detected with a hall-effect sensor. They were placed on a reconfigurable course of pipework for the robot to pass through and detect their locations. ## III Experimental Study ### _Mock-up Environment_ Simple courses of 55mm diameter plastic tubing were constructed to simulate pipes that could be found in a nuclear facility. They contained sharp 45\({}^{\circ}\) and swept 90\({}^{\circ}\) bends as well as some constrictions as small as 40mm. There were no splits in the path as the robot is, so far, unable to actively direct itself in a more complex network. Two sets of magnets were placed along the pipe for detection. Fig. 1: Basic construction and operation of the evenion robot ### _Eversion Robot Construction_ The first robot was constructed from Rip-Stop Nylon fabric and the second robot from Polythene lay-flat tubing. The fabric tubing was sewn from two strips into a 5m long, 60mm diameter tube. Excess fabric on the seams was cut as short as possible and then heat sealed with vinyl to eliminate air leaks. One of the open ends was sewn and similarly sealed. The plastic tubing could just be heat sealed at one end to form the shape we needed. However, it was not available in a size that fully filled our test rig, so we used the next smallest version. A tendon was tied to the sealed end of both sleeves, enabling controlled extension and retraction. It was attached to a 3D-printed, hand-cranked drum which can be used to infer the robot's extension using a shaft encoder. A hall effect sensor was also attached at the sealed end with a wire leading out of the base of the robot. A clamp was placed around the end of the eversion robot to minimise air leaks from the seal around the tendon and wire. A small ROS package was written to read the shaft encoder and the hall effect sensor to plot them in real time. ## IV Results Both robots were able to evert through the straight piece of pipe and the constrictions without any problems, but the fabric robot was unable to extend past the 45\({}^{\circ}\) bends. It was noted that due to manufacturing imperfections, there were multiple small leaks on the seams causing pressure loss, and the vinyl sealing created a stiff section which may have hindered extension. The plastic robot was uniform and almost fully airtight, save for the clamped base where the wire and tendon had to pass through. It successfully transported the hall-effect sensor through every course of pipes that we assembled. However, it was not possible to cleanly retract it as it did not fill the pipe, which caused it to buckle and bend considerably. Clear spikes _(Figure 3)_ in the graphs were produced by the hall effect sensor which could be used to measure where on the pipe the magnets were placed. However, there was always a considerable amount of distance before the sensor encountered a magnet because it had to travel half the full length of the robot before it entered the pipe. ## V Conclusion The system has been shown to work as a proof of concept for mapping the radiation levels in simple pipe structures with no branching paths. 
The plastic lay-flat tubing worked best for everting through constrictions and sharp bends as it was able to retain more pressure than the self-made fabric robot. In future, the entire assembly will be encased in an airtight chamber to reduce friction on the tendon and cable, and to let the entire robot be retracted and rolled up on the drum. A stepper motor will be used to rotate the drum instead, so that the mapping can be automated. The plastic lay-flat tubing will also be tested with pipes of the correct diameter.
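As an illustration of the real-time logging mentioned above (the small ROS package that reads the shaft encoder and the Hall effect sensor), the following is a minimal sketch of such a node. The topic names, message types and the ticks-to-metres factor are assumptions made for illustration and are not details of the actual package.

```python
#!/usr/bin/env python
# Minimal sketch: log Hall effect readings against the inferred robot extension.
# Topic names, message types and TICKS_TO_METRES are assumptions for illustration.
import rospy
from std_msgs.msg import Float32, Int32

TICKS_TO_METRES = 0.001  # assumed drum encoder resolution


class RadiationMapper:
    def __init__(self):
        self.extension_m = 0.0  # current inferred extension of the robot
        self.profile = []       # (extension, sensor value) samples
        rospy.Subscriber("drum_encoder_ticks", Int32, self.on_encoder)
        rospy.Subscriber("hall_effect_reading", Float32, self.on_hall)

    def on_encoder(self, msg):
        # Convert cumulative drum ticks into payed-out tail length.
        self.extension_m = msg.data * TICKS_TO_METRES

    def on_hall(self, msg):
        # Tag each sensor sample with the current extension estimate.
        self.profile.append((self.extension_m, msg.data))
        rospy.loginfo("extension=%.3f m, field=%.3f", self.extension_m, msg.data)


if __name__ == "__main__":
    rospy.init_node("radiation_mapper")
    RadiationMapper()
    rospy.spin()
```

In the real rig, the recorded positions would additionally need to be offset by the distance the sensor travels inside the robot before entering the pipe, as noted in the Results section.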
2309.17247
The translation invariant product measure problem in non-sigma finite case
We give an example of non-translation invariant product measure obtained from two translation invariant measures, one of which is non-sigma finite. This particular example also suggests that there can be infinitely many product measures if we abandon the sigma-finiteness assumption.
Nicha Khenkhok
2023-08-15T13:34:19Z
http://arxiv.org/abs/2309.17247v1
# The translation invariant product measure problem in non-sigma finite case ###### Abstract We give an example of non-translation invariant product measure obtained from two translation invariant measures, one of which is non-sigma finite. This particular example also suggests that there can be infinitely many product measures if we abandon the sigma-finiteness assumption. ## 1 Introduction Let \((X,\mathcal{A},\mu)\) be a measure space with \(\sigma\)-algebra \(\mathcal{A}\) and measure \(\mu\), and \((\mathbb{R},\mathcal{B},\nu)\) be a real measure space with the Borel \(\sigma\)-algebra \(\mathcal{B}\) and the Lebesgue measure \(\nu\). Denote their product measure space by \((X\times\mathbb{R},\mathcal{A}\otimes\mathcal{B},\mu\times\nu)\), where the product measure is arbitrary. We define a product measure using the definition given by D.H. Fremlin in [1]. The set function \(\mu\times\nu:\mathcal{A}\otimes\mathcal{B}\to[0,\infty]\) is a product measure iff it is a measure and for every measurable rectangle \(A\times B\), where \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\), we have \[\mu\times\nu\left(A\times B\right)=\mu(A)\nu(B).\] We shall fix these measure spaces throughout the article. The Lebesgue measure is known to be translation-invariant. One question we may ask is whether a product measure \(\mu\times\nu\) inherits this property in the sense that any shift applied to a measurable set \(B\in\mathcal{A}\otimes\mathcal{B}\) along the real axis does not alter the measure. Formally, we conjecture **1.1 Conjecture**.: _Let the product measure space \((X\times\mathbb{R},\mathcal{A}\otimes\mathcal{B},\mu\times\nu)\) be arbitrary and a set \(B\in\mathcal{A}\otimes\mathcal{B}\) be given. For any \(c\in\mathbb{R}\), define the vertical shift of \(B\) by \(c\) as the set_ \[B+c\coloneqq\{(x,y+c):(x,y)\in B\}\in\mathcal{A}\otimes\mathcal{B}.\] _Then, \(\mu\times\nu\left(B+c\right)=\mu\times\nu\left(B\right)\)._ If the measure space \((X,\mathcal{A},\mu)\) is \(\sigma\)-finite, then the conjecture holds trivially as the product measure is unique. This unique product measure is obtained through Caratheodory's extension theorem. As for the non-\(\sigma\)-finite case, we will show that the conjecture is not true. ## 2 Completely locally determined product measure Let \((X,\mathcal{A},\mu)=([0,1],\mathcal{B},\mu)\), where \(\mathcal{B}\) is the Borel \(\sigma\)-algebra and \(\mu\) is the counting measure. Then, we may form the product measurable space of \((X,\mathcal{A},\mu)\) and \((\mathbb{R},\mathcal{B},\nu)\). Let \[\pi(E)=\inf\left\{\sum_{n=0}^{\infty}\mu(A_{n})\nu(B_{n}):\{A_{n}\}_{n\in\mathbb{N}}\subseteq\mathcal{A},\{B_{n}\}_{n\in\mathbb{N}}\subseteq\mathcal{B},E\subseteq\bigcup_{n=0}^{\infty}A_{n}\times B_{n}\right\}\] be the product measure obtained through Caratheodory's extension theorem. Another candidate for a product measure is the completely locally determined (c.l.d.) product measure, for which the reader may refer to [1] for further details. The c.l.d. product measure is given by \[\rho(E)=\sup\left\{\pi(E\cap(A\times B)):A\in\mathcal{A},B\in\mathcal{B},\mu(A)<\infty,\nu(B)<\infty\right\}.\] On the diagonal \(\Delta=\{(x,x):x\in[0,1]\}\), which can be written as \[\Delta=\bigcap_{n=1}^{\infty}\bigcup_{k=0}^{n-1}\left[\frac{k}{n},\frac{k+1}{n}\right]\times\left[\frac{k}{n},\frac{k+1}{n}\right]\in\mathcal{A}\otimes\mathcal{B},\] we have \(\pi(\Delta)=\infty\) and \(\rho(\Delta)=0\).
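To see why these two values hold, one can argue as follows (a brief sketch of the standard covering argument). If \(\Delta\subseteq\bigcup_{n}A_{n}\times B_{n}\) with \(\sum_{n}\mu(A_{n})\nu(B_{n})<\infty\), then for every \(n\) either \(A_{n}\) is finite or \(\nu(B_{n})=0\). Since each \(x\in[0,1]\) satisfies \(x\in A_{n}\cap B_{n}\) for some \(n\), the interval \([0,1]\) would then be covered by a countable set (the union of the finite \(A_{n}\)) together with a Lebesgue-null set (the union of the null \(B_{n}\)), which contradicts \(\nu([0,1])=1\). Hence every such cover has infinite total weight and \(\pi(\Delta)=\infty\). On the other hand, for any \(A\in\mathcal{A}\) with \(\mu(A)<\infty\) (so \(A\) is finite) and \(B\in\mathcal{B}\) with \(\nu(B)<\infty\), the set \(\Delta\cap(A\times B)\) consists of finitely many points \((a,a)\), each contained in a rectangle \(\{a\}\times\{a\}\) with \(\mu(\{a\})\nu(\{a\})=0\); thus \(\pi(\Delta\cap(A\times B))=0\) and, taking the supremum over such rectangles, \(\rho(\Delta)=0\).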
## 3 Counterexample measure We will construct a product measure which utilises the c.l.d. measure. Let \(\Delta=\{(x,x):x\in[0,1]\}\) as before. Recall that \(\nu:\mathcal{B}\rightarrow[0,\infty]\) is the Lebesgue measure on the Borel \(\sigma\)-algebra. Define \(f:[0,1]\rightarrow[0,1]\times[0,1]\) to be \[f(x)=(x,x),\] which is a measurable function on \([0,1]\). As every preimage \(f^{-1}[E]\) of a measurable set \(E\in\mathcal{A}\otimes\mathcal{B}\) is measurable in \(\mathcal{B}\), we can safely define the set function \(\xi:\mathcal{A}\otimes\mathcal{B}\rightarrow[0,1]\) as \[\xi(E)=\nu(f^{-1}[E\cap\Delta]).\] We claim that \(\xi\) is a measure. Trivially, \(\xi(\emptyset)=0\). We now check the \(\sigma\)-additivity property. Let \(\{E_{n}\}_{n\in\mathbb{N}}\subseteq\mathcal{A}\otimes\mathcal{B}\) be a sequence of disjoint sets. Then, \[\xi\left(\bigcup_{n=0}^{\infty}E_{n}\right)=\nu\left(f^{-1}\left[\left(\bigcup_{n=0}^{\infty}E_{n}\right)\cap\Delta\right]\right)\] \[=\nu\left(f^{-1}\left[\bigcup_{n=0}^{\infty}(E_{n}\cap\Delta)\right]\right)\] \[=\nu\left(\bigcup_{n=0}^{\infty}f^{-1}[E_{n}\cap\Delta]\right)\] \[=\sum_{n=0}^{\infty}\nu\left(f^{-1}[E_{n}\cap\Delta]\right)\] \[=\sum_{n=0}^{\infty}\xi(E_{n}).\] That is, \(\xi\) is indeed a measure on \(\mathcal{A}\otimes\mathcal{B}\). We now proceed to the main result. **3.1 Theorem**.: _There exists a product measurable space \((X\times\mathbb{R},\mathcal{A}\otimes\mathcal{B},\mu\times\nu)\) such that for some \(c\in\mathbb{R}\) and some measurable set \(B\in\mathcal{A}\otimes\mathcal{B}\), the vertical shift of \(B\) by \(c\) results in a change in measure. That is, \(\mu\times\nu(B)\neq\mu\times\nu(B+c)\)._ Proof.: We assume the notions previously defined in this section. Consider the set function \(\eta:\mathcal{A}\otimes\mathcal{B}\rightarrow[0,\infty]\) given by \[\eta(E)=\rho(E)+\xi(E).\] Since \(\eta\) is a sum of measures on \(\mathcal{A}\otimes\mathcal{B}\), we have that \(\eta\) is also a measure on \(\mathcal{A}\otimes\mathcal{B}\). It remains to prove that \(\eta\) is a product measure. For this, we consider the following cases for a measurable rectangle \(A\times B\), where \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\). * If \(\mu(A)<\infty\) (with \(\nu(B)\) arbitrary), then \(A\) has finitely many points since \(\mu\) is the counting measure. So, \(A=\{a_{1},...,a_{k}\}\) for some \(k\in\{0,1,\dots\}\). It holds that \[A\times B=\{a_{1},...,a_{k}\}\times B\subseteq\{a_{1},...,a_{k}\}\times\mathbb{R}=A\times\mathbb{R},\] and hence, \[\Delta\cap(A\times B)\subseteq\Delta\cap(A\times\mathbb{R})=\{(x,x):x=a_{1},...,a_{k}\}.\] Using monotonicity of measure, \[\xi(A\times B)\leq\xi(A\times\mathbb{R})=\nu(f^{-1}[\Delta\cap(A\times\mathbb{R})])=\nu(\{a_{1},...,a_{k}\})=0.\] Therefore, \(\eta(A\times B)=\rho(A\times B)+\underbrace{\xi(A\times B)}_{0}=\rho(A\times B)=\mu(A)\nu(B)\). * If \(\mu(A)=\infty\) and \(\nu(B)>0\), then \(\rho(A\times B)=\mu(A)\nu(B)=\infty\). Therefore, \[\eta(A\times B)=\underbrace{\rho(A\times B)}_{\infty}+\underbrace{\xi(A\times B)}_{\geq 0}=\underbrace{\rho(A\times B)}_{\infty}=\mu(A)\nu(B).\] * If \(\mu(A)=\infty\) and \(\nu(B)=0\), then \(\rho(A\times B)=\mu(A)\nu(B)=0\). It holds that \[f^{-1}[\Delta\cap(A\times B)]\subseteq f^{-1}[\Delta\cap(\mathbb{R}\times B)]=B\cap[0,1].\] By monotonicity of measure, \[\xi(A\times B)=\nu(f^{-1}[\Delta\cap(A\times B)])\leq\nu(B\cap[0,1])\leq\nu(B)=0.\] Thus, \(\eta(A\times B)=\rho(A\times B)+\xi(A\times B)=0=\mu(A)\nu(B)\). Therefore, \(\eta\) is indeed a product measure.
Furthermore, \(\xi(\Delta)=\nu(f^{-1}[\Delta])=\nu([0,1])=1\), so \(\eta(\Delta)=\rho(\Delta)+\xi(\Delta)=0+1=1\). However, \(\rho(\Delta+1)=0\) by the same argument as for \(\Delta\), and \(\xi(\Delta+1)=\nu(f^{-1}[(\Delta+1)\cap\Delta])=\nu(\emptyset)=0\), so \(\eta(\Delta+1)=0+0=0\). \(\blacksquare\)
2304.09085
Balancing Unobserved Confounding with a Few Unbiased Ratings in Debiased Recommendations
Recommender systems are seen as an effective tool to address information overload, but it is widely known that the presence of various biases makes direct training on large-scale observational data result in sub-optimal prediction performance. In contrast, unbiased ratings obtained from randomized controlled trials or A/B tests are considered to be the golden standard, but are costly and small in scale in reality. To exploit both types of data, recent works proposed to use unbiased ratings to correct the parameters of the propensity or imputation models trained on the biased dataset. However, the existing methods fail to obtain accurate predictions in the presence of unobserved confounding or model misspecification. In this paper, we propose a theoretically guaranteed model-agnostic balancing approach that can be applied to any existing debiasing method with the aim of combating unobserved confounding and model misspecification. The proposed approach makes full use of unbiased data by alternatively correcting model parameters learned with biased data, and adaptively learning balance coefficients of biased samples for further debiasing. Extensive real-world experiments are conducted along with the deployment of our proposal on four representative debiasing methods to demonstrate the effectiveness.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu
2023-04-17T08:56:55Z
http://arxiv.org/abs/2304.09085v1
# Balancing Unobserved Confounding with a Few Unbiased Ratings in Debiased Recommendations ###### Abstract. Recommender systems are seen as an effective tool to address information overload, but it is widely known that the presence of various biases makes direct training on large-scale observational data result in sub-optimal prediction performance. In contrast, unbiased ratings obtained from randomized controlled trials or A/B tests are considered to be the golden standard, but are costly and small in scale in reality. To exploit both types of data, recent works proposed to use unbiased ratings to correct the parameters of the propensity or imputation models trained on the biased dataset. However, the existing methods fail to obtain accurate predictions in the presence of unobserved confounding or model misspecification. In this paper, we propose a theoretically guaranteed model-agnostic balancing approach that can be applied to any existing debiasing method with the aim of combating unobserved confounding and model misspecification. The proposed approach makes full use of unbiased data by alternatively correcting model parameters learned with biased data, and adaptively learning balance coefficients of biased samples for further debiasing. Extensive real-world experiments are conducted along with the deployment of our proposal on four representative debiasing methods to demonstrate the effectiveness. Recommender Systems; Bias; Debias; Unobserved Confounding
## 1. Introduction In contrast to observational ratings, uniform ratings are considered the golden standard and can be obtained from A/B tests or randomized controlled trials (RCTs), but harm users' experience and are costly and time-consuming (Han et al., 2017; Li et al., 2018). Due to its small scale, it is impractical to train prediction models directly on unbiased ratings. Recent studies propose to use a few unbiased ratings for the parameter selection of the propensity and imputation models using bi-level optimization, which has a more favorable debiasing performance compared with the RCT-free debiasing methods (Han et al., 2017; Li et al., 2018). However, we show that using unbiased ratings only to correct propensity and imputation model parameters still leads to biased predictions in the presence of unobserved confounding or model misspecification. This motivates making fuller use of the unbiased ratings to combat the effects of unobserved confounding. In this paper, we propose a model-agnostic approach to balance unobserved confounding with a few unbiased ratings. Different from the previous debiasing methods, our approach enlarges the model hypothesis space to include the unbiased ideal loss. The training objective of the balancing weights is formalized as a convex optimization problem, with balancing the loss estimation between biased and unbiased ratings as constraints. Through theoretical analysis, we prove the existence of the global optimal solution. Then, we propose an efficient training algorithm to achieve the training objectives, where the balancing weights are reparameterized and updated alternately with the prediction model. Remarkably, the proposed balancing algorithm can be applied to any existing debiased recommendation method. The main contributions of this paper are summarized as follows. * We propose a principled balancing training objective with a few unbiased ratings for combating unmeasured confounding in debiased recommendations. * To optimize the objectives, we propose an efficient model-agnostic learning algorithm that alternately updates the balancing weights and rating predictions. * Extensive experiments are conducted on two real-world datasets to demonstrate the effectiveness of our proposal. ## 2. Preliminaries Let \(\{u_{1},u_{2},\ldots,u_{M}\}\) be a set of \(M\) users, \(\{i_{1},i_{2},\ldots,i_{N}\}\) be the set of \(N\) items, and \(\mathcal{D}=\{(u_{m},i_{n})\mid m=1,\ldots,M;n=1,\ldots,N\}\) be the set of all user-item pairs. Denote \(\mathbf{R}=\{r_{u,i}\mid(u,i)\in\mathcal{D}\}\in\mathbb{R}^{|\mathcal{D}|}\) as the true rating matrix, where \(r_{u,i}\) is the rating of item \(i\) by user \(u\).
However, users always selectively rate items based on their interests, resulting in observed ratings, denoted as \(\mathbf{R}^{\mathcal{B}}\in\mathbb{R}^{|\mathcal{B}|}(\mathcal{B}\subseteq\mathcal{D})\), that are missing not at random and thus biased. For a given user-item pair \((u,i)\), let \(x_{u,i}\) be the feature vector of user \(u\) and item \(i\), such as user gender, age, and item attributes, etc. Let \(o_{u,i}\) be the binary variable indicating whether \(r_{u,i}\) is observed (\(o_{u,i}=1\)) or missing (\(o_{u,i}=0\)). Given the biased ratings \(\mathbf{R}^{\mathcal{B}}\), the prediction model \(\hat{r}_{u,i}=f(x_{u,i};\theta)\) in the debiased recommendation aims to predict all true ratings accurately. Ideally, it can be trained by minimizing the prediction error between the predicted rating matrix \(\hat{\mathbf{R}}=\{\hat{r}_{u,i}\mid(u,i)\in\mathcal{D}\}\in\mathbb{R}^{|\mathcal{D}|}\) and the true rating matrix \(\mathbf{R}\), and is given by \[\mathcal{L}_{ideal}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}}\delta(r_{u,i},\hat{r}_{u,i})=\frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}}e_{u,i}, \tag{1}\] where \(\delta(\cdot,\cdot)\) is a pre-specified loss, and \(e_{u,i}\) is the prediction error, such as the squared loss \(e_{u,i}=(\hat{r}_{u,i}-r_{u,i})^{2}\). For unbiased estimates of the ideal loss in Eq. (1), previous studies proposed to model the missing mechanism of the biased ratings \(\mathbf{R}^{\mathcal{B}}\). Formally, the probability \(p_{u,i}=\Pr(o_{u,i}=1|x_{u,i})\) of a user \(u\) rating an item \(i\) is called the propensity. The inverse probability scoring (IPS) estimator (Nakamura and Koyama, 2015) is given as \[\mathcal{L}_{IPS}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}}\frac{o_{u,i}e_{u,i}}{\hat{p}_{u,i}},\] where \(\hat{p}_{u,i}=\pi(x_{u,i};\phi_{p})\) is an estimate of the propensity \(p_{u,i}\), and the IPS estimator is unbiased when \(\hat{p}_{u,i}=p_{u,i}\). The doubly robust (DR) estimator (Han et al., 2017; Li et al., 2018) is given as \[\mathcal{L}_{DR}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}}\Big{[}\hat{e}_{u,i}+\frac{o_{u,i}(e_{u,i}-\hat{e}_{u,i})}{\hat{p}_{u,i}}\Big{]},\] where \(\hat{e}_{u,i}=m(x_{u,i};\phi_{e})\) fits the prediction error \(e_{u,i}\) using \(x_{u,i}\), i.e., it estimates \(g_{u,i}=\mathbb{E}\left[e_{u,i}\mid x_{u,i}\right]\), and DR is doubly robust, i.e., it is unbiased when either \(\hat{e}_{u,i}=g_{u,i}\) or \(\hat{p}_{u,i}=p_{u,i}\). In industrial scenarios, randomized controlled trials or A/B tests are considered to be the golden standard, and users might be asked to rate randomly selected items to collect unbiased ratings, denoted as \(\mathbf{R}^{\mathcal{U}}\in\mathbb{R}^{|\mathcal{U}|}(\mathcal{U}\subseteq\mathcal{D})\). The ideal loss can be estimated unbiasedly by simply taking the average of the prediction errors over the unbiased ratings \[\mathcal{L}_{\mathcal{U}}(\theta)=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}\approx\mathcal{L}_{ideal}(\theta).\] However, unbiased ratings are costly and small in scale in reality. To exploit both types of data, recent works proposed to use unbiased ratings to correct the parameters of the propensity or imputation models trained on the biased dataset.
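As a rough numerical illustration of the estimators above, the following sketch compares the naive average over observed pairs with the IPS and DR estimates of the ideal loss in Eq. (1) on toy data. The squared loss, the toy exposure mechanism and the deliberately crude propensity and imputation models are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000                                          # number of user-item pairs |D|
r = rng.integers(1, 6, size=D).astype(float)      # toy true ratings r_{u,i}
r_hat = r + rng.normal(0.0, 1.0, size=D)          # predictions from some model
e = (r_hat - r) ** 2                              # squared prediction errors e_{u,i}

p_true = np.where(r >= 4, 0.20, 0.05)             # exposure depends on the rating
o = rng.random(D) < p_true                        # observation indicators o_{u,i}

p_hat = np.full(D, o.mean())                      # a crude, misspecified propensity estimate
e_hat = np.full(D, e[o].mean())                   # a simple imputed error \hat{e}_{u,i}

ideal = e.mean()                                  # Eq. (1): requires all errors
naive = e[o].mean()                               # plain average over observed pairs
ips = np.mean(o * e / p_hat)                      # IPS estimator
dr = np.mean(e_hat + o * (e - e_hat) / p_hat)     # DR estimator

print(f"ideal={ideal:.3f}  naive={naive:.3f}  IPS={ips:.3f}  DR={dr:.3f}")
```

If the exact exposure probabilities were used in place of the crude estimate, the IPS and DR averages would match the ideal loss in expectation; it is precisely this property that unobserved confounding breaks in the setting studied below.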
Learning to debias (LTD) (Li et al., 2018) and AutoDebias (Han et al., 2017) propose to use bi-level optimization, using unbiased ratings \(\mathbf{R}^{\mathcal{U}}\) to correct the propensity and imputation model parameters, and then the prediction model is trained by minimizing the IPS or DR loss estimated on the biased ratings \(\mathbf{R}^{\mathcal{B}}\). Formally, this goal can be formulated as \[\phi^{*}=\arg\min_{\phi}\mathcal{L}_{\mathcal{U}}\left(\theta^{*}(\phi);\mathcal{U}\right) \tag{2}\] \[\text{s.t.}\ \theta^{*}(\phi)=\arg\min_{\theta}\mathcal{L}_{\mathcal{B}}(\theta,\phi;\mathcal{B}), \tag{3}\] where \(\mathcal{L}_{\mathcal{B}}\) is a pre-defined loss on the biased ratings, such as IPS with \(\phi=\{\phi_{p}\}\), DR with \(\phi=\{\phi_{p},\phi_{e}\}\), and AutoDebias with an extra propensity, so that \(\phi=\{\phi_{p},\phi_{p2},\phi_{e}\}\). The bi-level optimization first performs an assumed update of \(\theta(\phi)\) by Eq. (3), then updates the propensity and imputation model parameters \(\phi\) by Eq. (2), and finally updates the prediction model parameters \(\theta\) by Eq. (3). ## 3. Proposed Approach We study debiased recommendations given biased ratings with a few unbiased ratings. Different from previous studies (Han et al., 2017; Li et al., 2018), we consider that there may be unmeasured confounding in the biased ratings, making the unconfoundedness assumption no longer hold. In Section 3.1, we show that simply using unbiased ratings to perform model selection of propensity and imputation does not eliminate the bias from unobserved confounding and model misspecification. In Section 3.2, we propose a balancing training objective to combat the unobserved confounding and model misspecification by further exploiting unbiased ratings. In Section 3.3, we propose an efficient model-agnostic algorithm to achieve the training objective. ### Motivation First, the unbiasedness of IPS and DR requires not only that learned propensities or imputed errors are accurate, but also that the unconfoundedness assumption holds, i.e., \(o_{u,i}\perp e_{u,i}\mid x_{u,i}\). However, there may exist unobserved confounding \(h\), making \(o_{u,i}\not\perp e_{u,i}\mid x_{u,i}\) but \(o_{u,i}\perp e_{u,i}\mid(x_{u,i},h_{u,i})\). Let \(\tilde{p}_{u,i}=\Pr(o_{u,i}=1\mid x_{u,i},h_{u,i})\) be the true propensity; then the nominal propensity \(p_{u,i}\neq\tilde{p}_{u,i}\), and Lemma 1 states that the existing IPS and DR on \(\mathbf{R}^{\mathcal{B}}\) are biased estimates of the ideal loss in the presence of unobserved confounding.
**Lemma 1**.: _The IPS and DR estimators are biased in the presence of unobserved confounding, even if the learned propensities and imputed errors are accurate, i.e., \(\hat{p}_{u,i}=p_{u,i}\) and \(\hat{e}_{u,i}=g_{u,i}\); in this case_ \[\mathbb{E}[\mathcal{L}_{IPS}(\theta)]-\mathbb{E}[\mathcal{L}_{ideal}(\theta)]=\mathrm{Cov}\left(\frac{o_{u,i}-p_{u,i}}{p_{u,i}},e_{u,i}\right)\neq 0,\] _and_ \[\mathbb{E}[\mathcal{L}_{DR}(\theta)]-\mathbb{E}[\mathcal{L}_{ideal}(\theta)]=\mathrm{Cov}\left(\frac{o_{u,i}-p_{u,i}}{p_{u,i}},e_{u,i}-g_{u,i}\right)\neq 0.\] Proof.: For the DR estimator, if \(\hat{p}_{u,i}=p_{u,i}\) and \(\hat{e}_{u,i}=g_{u,i}\), we have \[\mathbb{E}[\mathcal{L}_{DR}(\theta)]=\mathbb{E}\left[e_{u,i}+\frac{o_{u,i}-p_{u,i}}{p_{u,i}}\left(e_{u,i}-g_{u,i}\right)\right]\] \[=\mathbb{E}[\mathcal{L}_{ideal}(\theta)]+\mathbb{E}\left[\frac{o_{u,i}-p_{u,i}}{p_{u,i}}\left(e_{u,i}-g_{u,i}\right)\right]\] \[=\mathbb{E}[\mathcal{L}_{ideal}(\theta)]+\mathrm{Cov}\left(\frac{o_{u,i}-p_{u,i}}{p_{u,i}},e_{u,i}-g_{u,i}\right).\] The last equation follows by noting that \[\mathbb{E}\left[\frac{o_{u,i}-p_{u,i}}{p_{u,i}}\right]=\mathbb{E}\left[\mathbb{E}\left(\frac{o_{u,i}-p_{u,i}}{p_{u,i}}\mid x_{u,i}\right)\right]=0,\] and \(\mathbb{E}[e_{u,i}-g_{u,i}]=0\). In the presence of hidden confounding, \(\mathrm{Cov}((o_{u,i}-p_{u,i})/p_{u,i},e_{u,i}-g_{u,i})\neq 0\). The conclusion for the IPS estimator can be obtained directly by taking \(g_{u,i}=0\) in DR. In addition, the existing methods using bi-level optimization, as shown in Eq. (2) and Eq. (3), simply use unbiased ratings for parameter tuning of the propensity and imputation models. It follows that the achievable loss estimators are restricted to the hypothesis space \(\mathcal{H}_{\phi}=\{\mathcal{L}_{\mathcal{B}}(\theta,\phi)\mid\phi\in\Phi\}\), where \(\Phi\) is the parameter space of \(\phi\). Though the unbiased ratings correct part of the bias, in the presence of unobserved confounding or model misspecification, i.e., when \(\mathcal{L}_{ideal}\not\in\mathcal{H}_{\phi}\), the resulting estimator is still biased due to the limited \(\mathcal{H}_{\phi}\). **Proposition 2**.: _The IPS and DR estimators are biased, in the presence of (a) unobserved confounding or (b) model misspecification._ Proposition 2 summarizes the biasedness of IPS and DR in the presence of unobserved confounding or model misspecification. ### Training Objective To combat unobserved confounding and model misspecification on biased ratings, we propose a balancing approach to fully leverage the unbiased ratings for debiased recommendations. First, when there is no unobserved confounding, we have \[\mathbb{E}[\mathcal{L}_{\mathcal{B}}(\theta,\phi;\mathcal{B})]=\mathbb{E}[\mathcal{L}_{\mathcal{U}}\left(\theta(\phi);\mathcal{U}\right)].\] To obtain unbiased estimates in the presence of unmeasured confounding or model misspecification, we propose to enlarge the hypothesis space to include the ideal loss, from \(\mathcal{H}_{\phi}\) to \(\mathcal{H}_{Bal}=\{w^{T}\mathcal{L}_{\mathcal{B}}(x;\theta,\phi)\mid\phi\in\Phi,w\in\mathbb{R}^{|\mathcal{D}|}\}\), where \(\mathcal{L}_{\mathcal{B}}(x;\theta,\phi)\in\mathbb{R}^{|\mathcal{D}|}\) consists of the contribution of each \((u,i)\) to \(\mathcal{L}_{\mathcal{B}}\).
The effects of the unobserved confounding and model misspecification can be balanced by introducing coefficients \(w_{u,i}\) for each \((u,i)\), making \[\mathbb{E}[\mathbf{w}^{T}\mathcal{L}_{\mathcal{B}}(x;\theta,\phi)]=\mathbb{E}[\mathcal{L}_{\mathcal{U}}\left(\theta(\phi);\mathcal{U}\right)]=\mathbb{E}[\mathcal{L}_{ideal}(\theta)]. \tag{4}\] Proposition 3 is the empirical version of Eq. (4) in terms of the balanced IPS, DR, and AutoDebias losses. **Proposition 3**.: _(a) There exist \(w_{u,i}>0\), \((u,i)\in\mathcal{B}\), such that_ \[\sum_{(u,i)\in\mathcal{B}}w_{u,i}\frac{e_{u,i}}{\hat{p}_{u,i}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}.\] _(b) There exist \(w_{u,i,1}>0\), \((u,i)\in\mathcal{D}\), and \(w_{u,i,2}>0\), \((u,i)\in\mathcal{B}\), such that_ \[\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\hat{e}_{u,i}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}-\hat{e}_{u,i}}{\hat{p}_{u,i}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}.\] _(c) There exist \(w_{u,i,1}>0\), \((u,i)\in\mathcal{D}\), and \(w_{u,i,2}>0\), \((u,i)\in\mathcal{B}\), such that_ \[\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\frac{\hat{e}_{u,i}}{\hat{p}_{u,i}}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}}{\hat{p}_{u,i}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}.\] From Proposition 3(a), when \(w_{u,i}\equiv|\mathcal{D}|^{-1}\), the left-hand side (LHS) degenerates to the standard IPS with maximal entropy of the balancing weights. The training objectives of the balanced IPS are \[\max_{\mathbf{w}\in\mathbb{R}^{|\mathcal{B}|}}\ \sum_{(u,i)\in\mathcal{B}}w_{u,i}\log(w_{u,i}) \tag{5}\] \[\text{s.t.}\quad w_{u,i}>0,\quad(u,i)\in\mathcal{B} \tag{6}\] \[\frac{1}{|\mathcal{B}|}\sum_{(u,i)\in\mathcal{B}}w_{u,i}=\frac{1}{|\mathcal{D}|} \tag{7}\] \[\sum_{(u,i)\in\mathcal{B}}w_{u,i}\frac{e_{u,i}}{\hat{p}_{u,i}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}, \tag{8}\] where the training objective in Eq. (5) is to maximize the empirical entropy of the balancing weights and thereby prevent extreme weights. The positivity and normalization of the balancing weights are guaranteed by Eq. (6) and Eq. (7), respectively, and the influence of unobserved confounding and model misspecification is balanced out by reweighting the IPS estimates on biased ratings in Eq. (8). Similarly, for balanced DR and AutoDebias in Proposition 3(b) and 3(c), the estimators are re-weighted by \(w_{u,i,1}\) and \(w_{u,i,2}\) on the entire and biased user-item pairs, respectively, to combat unobserved confounding and model misspecification. The training objectives of the balanced DR are \[\max_{\mathbf{w}_{1},\mathbf{w}_{2}}\ \sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\log(w_{u,i,1})+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\log(w_{u,i,2}) \tag{9}\] \[\text{s.t.}\quad w_{u,i,1}>0,\quad(u,i)\in\mathcal{D},\qquad w_{u,i,2}>0,\quad(u,i)\in\mathcal{B} \tag{10}\] \[\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}=1,\quad\frac{1}{|\mathcal{B}|}\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}=\frac{1}{|\mathcal{D}|} \tag{11}\] \[\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\tilde{e}_{u,i}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}-\tilde{e}_{u,i}}{\tilde{p}_{u,i}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}, \tag{12}\] where \(\mathbf{w}_{1}=[w_{u,i,1}\mid(u,i)\in\mathcal{D}]\), \(\mathbf{w}_{2}=[w_{u,i,2}\mid(u,i)\in\mathcal{B}]\), and the difference in balanced AutoDebias is that Eq.
(12) becomes \[\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\frac{\tilde{e}_{u,i}}{\tilde{p}_{u,i,1}}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}}{\tilde{p}_{u,i,2}}=\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}, \tag{13}\] where the LHS of Eq. (12) and Eq. (13) degenerates to standard DR and AutoDebias, respectively, when \(w_{u,i,1}\equiv|\mathcal{D}|^{-1}\) on \(\mathcal{D}\) and \(w_{u,i,2}\equiv|\mathcal{D}|^{-1}\) on \(\mathcal{B}\). Theorem 4 proves the existence of global optimal solutions corresponding to the proposed balanced IPS, DR and AutoDebias using the Karush-Kuhn-Tucker conditions. **Theorem 4**.: _There exist global optimal solutions to the optimization problems in balanced IPS, DR and AutoDebias._ Proof.: Note that the empirical entropy objectives in Eq. (5) and Eq. (9) are strictly convex. The inequality constraints in Eq. (6) and Eq. (10) are strictly feasible, i.e., there exist \(w_{u,i}\) in \(\mathcal{D}\) such that \(w_{u,i}>0\). The equality constraints in Eq. (7), Eq. (8), Eq. (11), and Eq. (12) are affine. By the Karush-Kuhn-Tucker condition, there exist global optimal solutions. Theoretically, due to the convexity of the objective function, its local optimal solution is the same as the global optimal solution. The generalized Lagrange multiplier method can be used to solve the primal and the dual problem, and such balancing weights can effectively combat the unobserved confounding as in Proposition 3. ### Training Algorithm Next, we propose an efficient model-agnostic training algorithm to achieve the training objective in Section 3.2. The algorithm consists of three parts: first, training the propensity and imputation models using a bi-level optimization, _but without updating the prediction model_; then, reparameterizing and updating the gradients of the balancing weights to combat the effects of unobserved confounding and model misspecification; and finally, _minimizing the estimated balancing loss_, named Bal-IPS, Bal-DR, or Bal-AutoDebias, and updating the prediction model to achieve unbiased learning. #### 3.3.1. Propensity and Imputation Model Training Different from LTD and AutoDebias that use bi-level optimization to update the prediction model, we only perform assumed updates of the prediction model parameters \(\theta(\phi)\) using bi-level optimization by Eq. (3), and updates of the propensity and imputation model parameters \(\phi\) by Eq. (2). Since there may exist unobserved confounding or model misspecification, we postpone the true update of the prediction model parameters \(\theta\) to Section 3.3.3, after performing the balancing steps in Section 3.3.2. We summarize the propensity and imputation model training algorithm in Alg. 1. ``` Input:\(S\), \(\mathbf{R}^{\mathcal{B}}\), \(\mathbf{R}^{\mathcal{U}}\), \(\phi_{0}\), \(\theta\), \(\eta\) 1for\(s=0,\ldots,S-1\)do 2 Sample mini-batches \(\mathcal{B}_{s}\subseteq\mathcal{B}\) and \(\mathcal{U}_{s}\subseteq\mathcal{U}\); 3 Compute the lower loss in Eq. (3) on \(\mathcal{B}_{s}\); 4 Compute an assumed update \(\theta_{s+1}(\phi_{s})\leftarrow\theta_{s}-\eta\nabla_{\theta_{s}}\mathcal{L}_{\mathcal{B}}(\theta,\phi;\mathcal{B}_{s})\); 5 Compute the upper loss in Eq. (2) on \(\mathcal{U}_{s}\); 6 Update the propensity and imputation model \(\phi_{s+1}\leftarrow\phi_{s}-\eta\nabla_{\phi_{s}}\mathcal{L}_{\mathcal{U}}(\theta_{s+1}(\phi);\mathcal{U}_{s})\); 7 8 end for Output:\(\phi_{S}\) ``` **Algorithm 1**Propensity and Imputation Model Training #### 3.3.2.
Balancing Unobserved Confounding Training One challenge in solving the balancing optimization problem is that as the number of user-item pairs increases, the number of balancing weights also increases, resulting in a significant increase in solution time for large-scale datasets. To address this issue, we propose to _reparameterize_\(w_{u,i}\) in the balanced IPS, i.e., \(w_{u,i}=g(x_{u,i};\xi)\), where \(\xi\) is the balancing model parameter. To satisfy the optimization constraints Eq. (6) and Eq. (7), the last layer of \(g(x_{u,i};\xi)\) uses Sigmoid as the activation function to guarantee positivity and batch normalization to guarantee normality. The balancing weights in the balanced IPS are trained by minimizing the negative empirical entropy with the violation of the balanced constraint Eq. (8) as regularization \[\mathcal{L}_{W-IPS}(\xi)= -\sum_{(u,i)\in\mathcal{B}}w_{u,i}\log(w_{u,i})\] \[+\lambda\Bigg{[}\sum_{(u,i)\in\mathcal{B}}w_{u,i}\frac{e_{u,i}}{ \tilde{p}_{u,i}}-\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i} \Bigg{]}^{2},\] where \(\lambda>0\) is a hyper-parameter, for trade-off the original loss estimation with the correction due to the unobserved confounding. Similarly, \(w_{u,i,1}\) and \(w_{u,i,2}\) in the balanced DR and balanced AutoDebias are also reparameterized as \(w_{u,i,1}=g(x_{u,i};\xi_{1})\) and \(w_{u,i,2}=g(x_{u,i};\xi_{2})\). The balancing weights in the balanced DR and balanced AutoDebias are trained by minimizing \[\mathcal{L}_{W-DR}(\xi)= -\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\log(w_{u,i,1})-\sum_{(u,i)\in \mathcal{B}}w_{u,i,2}\log(w_{u,i,2})\] \[+\lambda\Bigg{[}\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\tilde{e}_{u,i }+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}-\tilde{e}_{u,i}}{\tilde{p}_{u,i }}-\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i}\Bigg{]}^{2},\] and \[\mathcal{L}_{W-Auto}(\xi)= -\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\log(w_{u,i,1})-\sum_{(u,i)\in \mathcal{B}}w_{u,i,2}\log(w_{u,i,2})\] \[+\lambda\Bigg{[}\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\frac{\tilde {e}_{u,i}}{\tilde{p}_{u,i,1}}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}}{ \tilde{p}_{u,i,2}}-\frac{1}{|\mathcal{U}|}\sum_{(u,i)\in\mathcal{U}}e_{u,i} \Bigg{]}^{2},\] where \(\lambda>0\) is a hyper-parameter, and \(\xi\equiv\{\xi_{1},\xi_{2}\}\) are the parameters of the balancing model. ``` Input:\(T\), \(S\), \(\mathbf{R}^{\mathcal{B}}\), \(\mathbf{R}^{\mathcal{H}}\), \(\phi_{0}\), \(\theta_{0}\), \(\xi_{0}\), \(\eta\), \(\lambda\) 1for\(t=0,\ldots,T-1\)do 2 Call Alg. 1 by \(\phi_{t+1}\leftarrow\) Alg. \(1(S,\mathbf{R}^{\mathcal{B}},\mathbf{R}^{\mathcal{H}},\phi_{t},\theta_{t},\eta)\); 3for\(s=0,\ldots,S-1\)do 4 Sample mini-batches \(\mathcal{D}_{t}^{s}\subseteq\mathcal{D}\), \(\mathcal{B}_{t}^{s}\subseteq\mathcal{B}\) and \(\mathcal{U}_{t}^{s}\subseteq\mathcal{U}\); 5 Compute unmeasured confounding balancing loss; 6 Update the balancing weight \(\xi_{t}^{s+1}\leftarrow\xi_{t}^{s}-\eta\nabla_{\xi_{t}^{s}}\mathcal{L}_{W}( \xi)\); 7 Compute the balanced prediction error loss; 8 Update the prediction model \(\theta_{t}^{s+1}\leftarrow\theta_{t}^{s}-\eta\nabla\theta_{t}^{s}\mathcal{L}_{ Bal}(\theta)\); 9 10 end for 11 Copy the balancing model's parameters \(\hat{\xi}_{t+1}^{0}\leftarrow\xi_{t}^{S}\); 12 Copy the prediction model's parameters \(\theta_{t+1}^{0}\leftarrow\theta_{t}^{S}\); 13 14 end for Output:\(\theta_{T}\) ``` **Algorithm 2** Balancing Unobserved Confounding Training #### 3.3.3. 
Prediction Model Training Since the optimization of the balancing weights aims to balance the prediction errors on the biased and unbiased ratings, which also depends on the prediction model, we propose to update the balancing model and the prediction model alternately. Specifically, given the balancing weights of IPS, the prediction model is trained by minimizing the balanced IPS (Bal-IPS) loss \[\mathcal{L}_{Bal-IPS}(\theta)=\sum_{(u,i)\in\mathcal{B}}w_{u,i}\frac{e_{u,i}}{\hat{p}_{u,i}}. \tag{14}\] Similarly, for balanced DR (Bal-DR) or balanced AutoDebias (Bal-AutoDebias), the prediction model is trained by minimizing \[\mathcal{L}_{Bal-DR}(\theta)=\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\hat{e}_{u,i}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}-\hat{e}_{u,i}}{\hat{p}_{u,i}}, \tag{15}\] and \[\mathcal{L}_{Bal-Auto}(\theta)=\sum_{(u,i)\in\mathcal{D}}w_{u,i,1}\frac{\hat{e}_{u,i}}{\hat{p}_{u,i,1}}+\sum_{(u,i)\in\mathcal{B}}w_{u,i,2}\frac{e_{u,i}}{\hat{p}_{u,i,2}}. \tag{16}\] Next, given the prediction model, the balancing weights are updated again as described in Section 3.3.2. The balancing weights and the prediction model are updated alternately, allowing a more adequate use of unbiased ratings and resulting in unbiased learning of the prediction model. The main difference compared with LTD (Kumar et al., 2017) and AutoDebias (Bahdan et al., 2017) is that we do not merely use unbiased ratings to select the parameters of the propensity and imputation models and then use standard IPS or DR for the prediction model update. Instead, we combat the effects of unobserved confounding by introducing a balancing model, and then perform prediction model updates based on the balanced losses. Remarkably, the proposed method is model-agnostic and can be applied to any of the debiased recommendation methods. Here we use IPS, DR and AutoDebias for illustration. We summarize the whole training algorithm in Alg. 2. #### 3.3.4. Training Efficiency The proposed workflow for balancing the unobserved confounding is shown in Figure 1. In Section 3.3.1, our algorithm performs two forward and backward passes for the prediction model on \(\mathbf{R}^{\mathcal{B}}\) and \(\mathbf{R}^{\mathcal{U}}\), respectively, and one forward and backward pass for the propensity and imputation model on \(\mathbf{R}^{\mathcal{B}}\). The backward-on-backward pass is used to obtain the gradients of the propensity and imputation models. In Section 3.3.2, one forward and one backward pass are performed for the balancing model. In Section 3.3.3, a backward pass is used to actually update the prediction model. Following (Kumar et al., 2017), the running time of a backward-on-backward pass and a forward pass are about the same. As a result, the training time of the proposed algorithm does not exceed 3x the learning time of two-stage learning and is about 1.5x the learning time of LTD and AutoDebias. ## 4. Real-World Experiments In this section, we conduct extensive experiments on two real-world datasets to answer the following research questions (RQs): * Do the proposed Bal-methods improve the debiasing performance compared with the existing methods? \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Users & Items & Training & Uniform & Validation & Test \\ \hline Music & 15,400 & 1,000 & 311,704 & 2,700 & 2,700 & 48,600 \\ Coat & 290 & 300 & 6,960 & 232 & 232 & 4,176 \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of the datasets. Figure 1.
The proposed workflow for balancing unobserved confounding consists of four steps: (1) performing an assumed update of the prediction model parameters from \(\theta(\phi)\) to \(\theta^{\prime}(\phi)\) using \(\mathbf{R}^{\mathcal{B}}\) (green arrow); (2) updating the propensity and imputation model parameters \(\phi\) using \(\mathbf{R}^{\mathcal{U}}\) (blue arrow); (3) updating the balancing model parameters \(\xi\) using both \(\mathbf{R}^{\mathcal{B}}\) and \(\mathbf{R}^{\mathcal{U}}\) (red arrow); (4) actually updating the prediction model parameters \(\theta\) using the balanced loss \(w^{T}\mathcal{L}_{\mathcal{B}}\) (red arrow). * Do our methods stably perform well with different initializations of the prediction model? * How does the balancing model affect the performance of our methods? * What factors influence the effectiveness of our methods? ### Experimental Setup **Dataset and preprocessing**. Following the previous studies (5; 35; 42; 43), we conduct extensive experiments on the two widely used real-world datasets with both missing-not-at-random (MNAR) and missing-at-random (MAR) ratings: **Music1** and **Coat2**. In particular, the Music dataset contains 15,400 users and 1,000 items with 54,000 MAR and 311,704 MNAR ratings. The Coat dataset contains 290 users and 300 items with 4,640 MAR and 6,960 MNAR ratings. Following (5; 28), we take all the biased data as the training set and randomly split the uniform data into three parts: 5% for balancing the unobserved confounding, 5% for the validation set and 90% for the test set. We summarize the datasets and splitting details in Table 1. Footnote 1: [http://webscope.sandbox.yahoo.com/](http://webscope.sandbox.yahoo.com/) **Baselines.** In our experiments, we compare the proposed Bal-methods with the following baselines: \(\bullet\)**Base Model**(21): the Matrix Factorization (MF) model is trained on biased data, uniform data and both of them respectively, denoted as MF (biased), MF (uniform) and MF (combine). \(\bullet\)**Inverse Propensity Scoring (IPS)**(37): a reweighting method using inverse propensity scores to weight the observed events. \(\bullet\)**Doubly Robust (DR)**(35; 42): an efficient method combining imputations and inverse propensities with double robustness. \(\bullet\)**CausE**(28): a sample-based knowledge distillation approach to reduce computational complexity. \(\bullet\)**KD-Label**(28): an efficient framework for knowledge distillation to transfer unbiased information to the teacher model and guide the training of the student model. \(\bullet\)**AutoDebias**(5): a meta-learning based method using a few unbiased data to further mitigate the selection bias. **Experimental protocols and details.** Following (5; 43), AUC, NDCG@5 and NDCG@10 are adopted as the evaluation metrics to measure the debiasing performance. Formally, \[AUC=\frac{\sum_{(u,i)\in\mathcal{U}^{*}}\hat{Z}_{u,i}-\left|\mathcal{U}^{*}\right|\cdot(\left|\mathcal{U}^{*}\right|+1)/2}{\left|\mathcal{U}^{*}\right|\cdot(\left|\mathcal{U}\right|-\left|\mathcal{U}^{*}\right|)},\] and NDCG@k measures the quality of the ranking list as \[DCG_{u}@k=\sum_{(u,i)\in\mathcal{U}}\frac{\mathbb{I}(\hat{Z}_{u,i}\leq k)}{\log(\hat{Z}_{u,i}+1)},\quad NDCG@k=\frac{1}{M}\sum_{m=1}^{M}\frac{DCG_{u_{m}}@k}{IDCG_{u_{m}}@k},\] where \(\mathcal{U}^{*}\subseteq\mathcal{U}\) denotes the positive ratings in the uniform dataset, \(\hat{Z}_{u,i}\) is the rank position of \((u,i)\) given by the rating predictions, and \(IDCG_{u_{m}}@k\) is the ideal \(DCG_{u_{m}}@k\).
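For concreteness, the following is a minimal sketch that transcribes the two metrics above into code. The toy inputs, the per-user grouping of the NDCG computation and the base-2 logarithm are assumptions for illustration; this is not the authors' evaluation script.

```python
import numpy as np

def auc(Z_hat, is_pos):
    """Rank-based AUC following the formula above.
    Z_hat: rank positions of all uniform test pairs; is_pos: positive mask."""
    n_pos, n = is_pos.sum(), len(Z_hat)
    return (Z_hat[is_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * (n - n_pos))

def ndcg_at_k(pos_ranks_by_user, k):
    """Mean NDCG@k over users; pos_ranks_by_user[m] holds the predicted rank
    positions of user m's positive test items (1 = top of the ranking)."""
    scores = []
    for ranks in pos_ranks_by_user:
        dcg = np.sum((ranks <= k) / np.log2(ranks + 1))
        ideal = np.arange(1, min(k, len(ranks)) + 1)
        idcg = np.sum(1.0 / np.log2(ideal + 1))
        scores.append(dcg / idcg if idcg > 0 else 0.0)
    return float(np.mean(scores))

# Toy usage with five test pairs and two users.
print(auc(np.array([1, 4, 2, 5, 3]), np.array([False, True, False, True, False])))
print(ndcg_at_k([np.array([1, 3]), np.array([2, 6, 7])], k=5))
```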
All the methods are implemented on PyTorch. Throughout, Adam optimizer is utilized for propensity and imputation model with learning rate and weight decay in [1e-4, 1e-2]. SGD optimizer is utilized for prediction model and balancing model with learning rate in [1e-7, 1] and weight decay in [1e-4, 1]. We tune the regularization hyper-parameter \(\lambda\) in \(\{0,2^{-9},2^{-6},2^{-3},1\}\). All hyper-parameters are tuned based on the performance on the validation set. ### Performance Comparison (RQ1) Table 2 compares the prediction performance of the various methods on two real-world datasets Music and Coat. We find that the proposed model-agnostic Bal-methods have significantly improved performance when applied to MF, IPS, DR and AutoDebias with respect to all metrics. Overall, Bal-AutoDebias exhibits the best performance. Impressively, although AutoDebias hardly improves the performance on Coat compared with DR as reported in (5), the proposed Bal-AutoDebias improves 4.21% and 3.06% on NDCG@5 and NDCG@10 compared with the best baseline, respectively, validating the effectiveness of the proposed balancing approach. In addition, MF using only uniform data exhibits the worst performance, due to its small size which causes unavoidable overfitting. Directly combining the biased and unbiased ratings increases the MF performance slightly and insignificantly. As in (5), AutoDebias \begin{table} \begin{tabular}{c|c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Music} & \multicolumn{6}{c}{Coat} \\ \cline{2-13} & AUC & RI & NDCG@5 & RI & NDCG@10 & RI & AUC & RI & NDCG@5 & RI & NDCG@10 & RI \\ \hline CausE & 0.731 & - & 0.551 & - & 0.656 & - & 0.761 & - & 0.500 & - & 0.605 & - \\ \hline KD-Label & 0.740 & - & 0.580 & - & 0.680 & - & 0.750 & - & 0.504 & - & 0.610 & - \\ \hline MF (biased) & 0.727 & - & 0.550 & - & 0.655 & - & 0.747 & - & 0.500 & - & 0.606 & - \\ MF (uniform) & 0.573 & - & 0.449 & - & 0.591 & - & 0.579 & - & 0.358 & - & 0.482 & - \\ MF (combine) & 0.730 & - & 0.554 & - & 0.659 & - & 0.750 & - & 0.503 & - & 0.611 & - \\ Bal-MF & **0.739** & 1.23\% & **0.579** & 4.51\% & **0.679** & 3.03\% & **0.761** & 1.47\% & **0.511** & 1.59\% & **0.620** & 1.47\% \\ \hline IPS & 0.723 & - & 0.549 & - & 0.656 & - & 0.760 & - & 0.509 & - & 0.613 & - \\ Bal-IPS & **0.727** & 0.55\% & **0.564** & 2.73\% & **0.668** & 1.83\% & **0.771** & 1.45\% & **0.521** & 2.36\% & **0.628** & 2.45\% \\ \hline DR & 0.724 & - & 0.550 & - & 0.656 & - & 0.765 & - & 0.521 & - & 0.620 & - \\ Bal-DR & **0.731** & 0.97\% & **0.569** & 3.45\% & **0.669** & 1.98\% & **0.770** & 0.65\% & **0.523** & 0.38\% & **0.628** & 1.29\% \\ \hline AutoDebias & 0.741 & - & 0.645 & - & 0.725 & - & 0.766 & - & 0.522 & - & 0.621 & - \\ Bal-AutoDebias & **0.749** & 1.08\% & **0.670** & 3.88\% & **0.744** & 2.62\% & **0.772** & 0.78\% & **0.544** & 4.21\% & **0.640** & 3.06\% \\ \hline \hline \end{tabular} * Note: RI refers to the relative improvement of Bal-methods over the corresponding baseline. \end{table} Table 2. Performance comparison in terms of AUC, NDCG@5, and NDCG@10. The best results to each base method are bolded. has the most competitive performance among the existing methods, due to the use of unbiased ratings for the parameter selection of the propensity and imputation models. However, as discussed in previous sections, the previous methods were unable to combat the potential unobserved confounding in the biased data. 
The proposed Bal-methods address this issue by further utilizing unbiased ratings to balance the loss estimates from biased ratings. ### In-depth Analysis (RQ2) We further conduct an in-depth analysis by using the pre-trained prediction model parameters given by IPS, DR and AutoDebias as initialization in Alg. 2, respectively, to verify that the proposed Bal-methods can be effectively applied to any existing debiasing method. The results are presented in Table 3. We find that all Bal-methods show significant performance improvement in all metrics compared to the pre-trained prediction models. Notably, applying the Bal-methods to any initialized predictions can stably boost the performance compared with AutoDebias on Coat, which can be explained by the possible presence of unobserved confounding and model misspecification in the biased data, while our method can mitigate the potential bias in a model-agnostic manner. ### Ablation Study (RQ3) To explore the impact of the proposed balancing model on the debiasing performance, we conduct ablation experiments using varying values of the regularization hyperparameter \(\lambda\), which trades off the original loss estimation against the correction due to unobserved confounding. Note that when \(\lambda=0\), the globally optimal balancing weights equal \(1/|\mathcal{D}|\) with maximum entropy, degenerating to the standard IPS, DR and AutoDebias. We tune \(\lambda\) in {0, \(2^{-9}\), \(2^{-6}\), \(2^{-3}\), 1} on Bal-IPS, Bal-DR and Bal-AutoDebias, and the results are shown in Figure 2, where the black dashed line is used as the most competitive baseline for reference. We find that the AUC and NDCG@k of all methods first increase and then decrease with increasing constraint strength, with optimal performance around \(\lambda=2^{-6}\). This is interpreted as the best trade-off between estimated loss and unobserved confounding. All methods using \(\lambda>0\) stably outperform the standard AutoDebias and the case without considering unobserved confounding, i.e., \(\lambda=0\), so it can be concluded that the proposed balancing model plays an important role in the debiasing. \begin{table} \begin{tabular}{c|c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{Method} & \multicolumn{3}{c|}{Music} & \multicolumn{3}{c}{Coat} \\ \hline Model for \(w_{u,i,1}\) & Model for \(w_{u,i,2}\) & AUC & NDCG@5 & NDCG@10 & AUC & NDCG@5 & NDCG@10 \\ \hline MF & MF & 0.749 & 0.670 & 0.744 & 0.772 & 0.544 & 0.640 \\ MF & NCF & 0.745 & 0.667 & 0.742 & 0.769 & 0.539 & 0.635 \\ NCF & MF & **0.762** & **0.675** & **0.748** & **0.774** & **0.548** & **0.646** \\ NCF & NCF & 0.749 & 0.671 & 0.745 & 0.771 & 0.545 & 0.639 \\ \hline \hline \end{tabular} \end{table} Table 4. Effects of balancing models on Bal-AutoDebias. Figure 2. Effect of regularization strength \(\lambda\) on Music and Coat, degenerating to standard AutoDebias when \(\lambda=0\).
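To make the role of \(\lambda\) in this trade-off concrete, the following is a minimal PyTorch sketch of the balanced-IPS weight objective \(\mathcal{L}_{W\text{-}IPS}\) from Section 3.3.2, written for a toy batch. The toy tensors, the way the weights are produced and normalised, and the batch sizes are assumptions for illustration, not the authors' implementation.

```python
import torch

def balanced_ips_weight_loss(w, e_b, p_hat_b, e_u, lam):
    # First term of L_{W-IPS} as written above: -sum_B w * log(w).
    entropy_term = -torch.sum(w * torch.log(w + 1e-12))
    # Squared violation of the balancing constraint (biased IPS estimate vs. uniform average).
    gap = torch.sum(w * e_b / p_hat_b) - e_u.mean()
    return entropy_term + lam * gap ** 2

# Toy batch: |B| biased pairs, |U| uniform pairs, assumed |D| for the normalisation.
torch.manual_seed(0)
n_b, n_u, n_d = 256, 32, 1024
e_b = torch.rand(n_b)                   # prediction errors on biased pairs
p_hat_b = 0.1 + 0.8 * torch.rand(n_b)   # learned propensities
e_u = torch.rand(n_u)                   # prediction errors on uniform pairs

logits = torch.randn(n_b, requires_grad=True)  # stands in for the balancing model g(x; xi)
w = torch.sigmoid(logits)
w = w / w.sum() * (n_b / n_d)           # enforce (1/|B|) * sum(w) = 1/|D|

loss = balanced_ips_weight_loss(w, e_b, p_hat_b, e_u, lam=2 ** -6)
loss.backward()                         # gradients flow back to the balancing parameters
```

Setting lam to zero removes the balancing correction, which corresponds to the degenerate case with maximum-entropy weights discussed above.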
\begin{table} \begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline & Initial Method & \multicolumn{3}{c|}{Initial with IPS} & \multicolumn{3}{c|}{Initial with DR} & \multicolumn{3}{c}{Initial with AutoDebias} \\ \hline Dataset & Method & AUC & NDCG@5 & NDCG@10 & AUC & NDCG@5 & NDCG@10 & AUC & NDCG@5 & NDCG@10 \\ \hline \multirow{4}{*}{Music} & Baseline & 0.723 & 0.549 & 0.656 & 0.724 & 0.550 & 0.656 & 0.741 & 0.645 & 0.725 \\ & Bal-IPS & \(0.726_{0.48\uparrow}\) & \(0.561_{2.28\uparrow}\) & \(0.666_{1.58\uparrow}\) & \(0.726_{0.35\uparrow}\) & \(0.562_{2.28\uparrow}\) & \(0.666_{1.55\uparrow}\) & \(0.747_{0.88\uparrow}\) & \(0.656_{1.78\uparrow}\) & \(0.733_{1.19\uparrow}\) \\ & Bal-DR & \(0.725_{0.38\uparrow}\) & \(0.556_{1.38\uparrow}\) & \(0.665_{1.48\uparrow}\) & \(0.726_{0.33\uparrow}\) & \(0.559_{1.68\uparrow}\) & \(0.667_{1.78\uparrow}\) & \(0.748_{0.98\uparrow}\) & \(0.658_{2.08\uparrow}\) & \(0.734_{1.28\uparrow}\) \\ & Bal-AutoDebias & \(\textbf{0.739}_{2.28\uparrow}\) & \(\textbf{0.584}_{6.48\uparrow}\) & \(\textbf{0.683}_{1.15\uparrow}\) & \(\textbf{0.740}_{2.28\uparrow}\) & \(\textbf{0.586}_{6.59\uparrow}\) & \(\textbf{0.684}_{4.38\uparrow}\) & \(\textbf{0.749}_{1.11\uparrow}\) & \(\textbf{0.670}_{3.9\uparrow}\) & \(\textbf{0.744}_{2.63\uparrow}\) \\ \hline \multirow{4}{*}{Coat} & Baseline & 0.760 & 0.509 & 0.613 & 0.765 & 0.521 & 0.620 & 0.766 & 0.522 & 0.621 \\ & Bal-IPS & \(\textbf{0.771}_{1.45\uparrow}\) & \(0.521_{2.48\uparrow}\) & \(0.628_{2.48\uparrow}\) & \(0.770_{0.78\uparrow}\) & \(0.523_{0.48\uparrow}\) & \(0.627_{1.15\uparrow}\) & \(0.770_{0.58\uparrow}\) & \(0.523_{0.28\uparrow}\) & \(0.629_{1.38\uparrow}\) \\ \cline{1-1} & Bal-DR & \(0.770_{1.38\uparrow}\) & \(0.523_{2.88\uparrow}\) & \(0.628_{2.48\uparrow}\) & \(0.771_{0.88\uparrow}\) & \(0.522_{0.28\uparrow}\) & \(0.629_{1.55\uparrow}\) & \(0.770_{0.58\uparrow}\) & \(0.523_{0.28\uparrow}\) & \(0.629_{1.38\uparrow}\) \\ \cline{1-1} & Bal-AutoDebias & \(\textbf{0.771}_{1.45\uparrow}\) & \(\textbf{0.531}_{4.33\uparrow}\) & \(\textbf{0.632}_{1.15\uparrow}\) & \(\textbf{0.772}_{0.9\uparrow}\) & \(\textbf{0.539}_{3.55\uparrow}\) & \(\textbf{0.637}_{2.78\uparrow}\) & \(\textbf{0.772}_{0.8\uparrow}\) & \(\textbf{0.544}_{2.8\uparrow}\) & \(\textbf{0.640}_{3.15\uparrow}\) \\ \hline \hline \end{tabular} \end{table} Table 3. Performance of the Bal-methods under different prediction models as initializations on Music and Coat. ### Exploratory Analysis (RQ4) **Effect of balancing model selections.** We further explore the effect of model selections on the balanced weights to the debiasing performance. Specifically, we take different combinations of MF and NCF as balancing models for \(w_{u,i,1}\) on \(\mathcal{D}\) and \(w_{u,i,2}\) on \(\mathcal{B}\), and the results are shown in Table 4. The performance can be significantly improved when NCF and MF are used to model \(w_{u,i,1}\) and \(w_{u,i,2}\), respectively. We argue that the main reason is that \(|\mathcal{D}|\gg|\mathcal{B}|\), leading to a reasonable reparameterization of \(w_{u,i,1}\) using deep models (e.g., NCF), and \(w_{u,i,2}\) using simple models (e.g., MF). **Effect of uniform data size.** Figure 3 shows the sensitivity of the debiasing methods to the size of the uniform data ranging from 1% to 10%. We find that the proposed Bal-AutoDebias stably outperforms the existing methods for varying sizes of unbiased ratings. For the previous methods, AutoDebias has a more competitive performance compared with KD-label and CausE. 
When provided with only a small amount (e.g., 1%) of unbiased ratings, CausE performs even worse than the biased MF, while Bal-AutoDebias achieves the optimal performance. Compared with AutoDebias, our methods make significant improvements on both NDCG@5 and NDCG@10, validating the effectiveness of the proposed balancing learning. ## 5. Related Work **Debiased Recommendation.** Recommender algorithms are often trained on historical interactions. However, the historical data cannot fully represent the users' true preferences (Beng et al., 2017; Wang et al., 2018), because user behavior is affected by various factors, such as conformity (Kang et al., 2018) and item popularity (Wang et al., 2018). Many methods were developed for achieving unbiased learning, aiming to capture the true user preferences with biased data. For example, (Zhu et al., 2019) noticed the missing data problem in RS and recommended using the IPS strategy to remove the bias, and (Zhu et al., 2019) designed a doubly robust (DR) loss and suggested adopting the joint learning method for model training. Subsequently, several approaches enhanced the DR method by pursuing a better bias-variance trade-off (Beng et al., 2017; Wang et al., 2018), leveraging parameter sharing and multi-task learning techniques (Zhu et al., 2019; Wang et al., 2018; Wang et al., 2018), combining a small uniform dataset (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018), addressing the problem of small propensities and weakening the reliance on extrapolation (Zhu et al., 2019), and reducing bias and variance simultaneously when the imputed errors are less accurate (Zhu et al., 2019). In addition, (Zhu et al., 2019) proposed a multiple robust learning method that allows the use of multiple candidate propensity and imputation models and is unbiased when any of the propensity or imputation models is accurate. (Beng et al., 2017; Wang et al., 2018) reviewed the recent progress in debiased recommendation. To mitigate the effects of unobserved confounding, (Beng et al., 2017) proposed an adversarial learning method that uses only biased ratings. Unlike the existing methods, this paper combats the effect of unmeasured confounding with a small uniform dataset to achieve exact unbiasedness. **Causal Inference under Unmeasured Confounding.** Unmeasured confounding is a difficult problem in causal inference, and the main strategies for addressing it can be divided into two classes (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). One is sensitivity analysis (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018), which seeks bounds for the true causal effects with datasets suffering from unmeasured confounders. The other class of methods aims to obtain unbiased causal effect estimators by leveraging auxiliary information, such as instrumental variable methods (Beng et al., 2017; Wang et al., 2018), front door adjustment (Zhu et al., 2019), and negative control (Zhu et al., 2019). In general, finding a reliable instrumental variable or a mediator that satisfies the front door criterion (Wang et al., 2018; Wang et al., 2018) is a challenging task in practice. Different from these methods based on an observational dataset, this paper considers a more practical scenario in debiased recommendation, i.e., addressing unmeasured confounding by fully exploiting the unbiasedness property of a small uniform dataset. ## 6. Conclusion This paper develops a method for balancing unobserved confounding with few unbiased ratings.
We first show theoretically that previous methods, which simply use unbiased ratings to select the propensity and imputation model parameters, are not sufficient to combat the effects of unobserved confounding and model misspecification. We then propose a balancing optimization training objective, and further develop a model-agnostic training algorithm that achieves this objective using reparameterization techniques. The balancing model is alternately updated with the prediction model to combat the effect of unobserved confounding. We conduct extensive experiments on two real-world datasets to demonstrate the superiority of the proposed approach. To the best of our knowledge, this is the first paper using a few unbiased ratings to combat the effects of unobserved confounding in debiased recommendation. For future work, we will derive theoretical generalization error bounds for the balancing approaches, as well as explore more effective ways to leverage the unbiased ratings to enhance the debiasing performance of the prediction models. Figure 3. Effect of varying size of uniform data. ## 7. Acknowledgments This work was supported by the National Key R&D Program of China (No. 2018YFB1701500 and No. 2018YFB1701503).
2308.00412
Crystallization Dynamics of Amorphous Yttrium Iron Garnet Thin Films
Yttrium iron garnet (YIG) is a prototypical material in spintronics due to its exceptional magnetic properties. To exploit these properties high quality thin films need to be manufactured. Deposition techniques like sputter deposition or pulsed laser deposition at ambient temperature produce amorphous films, which need a post annealing step to induce crystallization. However, not much is known about the exact dynamics of the formation of crystalline YIG out of the amorphous phase. Here, we conduct extensive time and temperature series to study the crystallization behavior of YIG on various substrates and extract the crystallization velocities as well as the activation energies needed to promote crystallization. We find that the type of crystallization as well as the crystallization velocity depend on the lattice mismatch to the substrate. We compare the crystallization parameters found in literature with our results and find an excellent agreement with our model. Our results allow us to determine the time needed for the formation of a fully crystalline film of arbitrary thickness for any temperature.
Sebastian Sailler, Gregor Skobjin, Heike Schlörb, Benny Boehm, Olav Hellwig, Andy Thomas, Sebastian T. B. Goennenwein, Michaela Lammel
2023-08-01T09:45:58Z
http://arxiv.org/abs/2308.00412v2
# Crystallization Dynamics of Amorphous Yttrium Iron Garnet Thin Films ###### Abstract Yttrium iron garnet (YIG) is a prototypical material in spintronics due to its exceptional magnetic properties. To exploit these properties high quality thin films need to be manufactured. Deposition techniques like sputter deposition or pulsed laser deposition at ambient temperature produce amorphous films, which need a post annealing step to induce crystallization. However, not much is known about the exact dynamics of the formation of crystalline YIG out of the amorphous phase. Here, we conduct extensive time and temperature series to study the crystallization behavior of YIG on various substrates and extract the crystallization velocities as well as the activation energies needed to promote crystallization. We find that the type of crystallization as well as the crystallization velocity depend on the lattice mismatch to the substrate. We compare the crystallization parameters found in literature with our results and find an excellent agreement with our model. Our results allow us to determine the time needed for the formation of a fully crystalline film of arbitrary thickness for any temperature. ## I Introduction Yttrium iron garnet (Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\), YIG) is an electrically insulating ferrimagnet, crystallizing in a cubic crystal system with Ia3d symmetry.[1; 2] Its electric and magnetic properties include a long spin diffusion length, which makes YIG an ideal material for spin transport experiments with pure spin currents.[3; 4; 5] Additionally, YIG shows an exceptionally low Gilbert damping and a low coercive field, which allows investigations of magnon dynamics via e.g. ferromagnetic resonance experiments.[6; 7; 8; 9; 10] These exceptional properties caused YIG to be intensively studied and made it the prototypical material in the field of spintronics, which almost exclusively relies on devices in thin film geometry. Several deposition techniques are known to produce high quality YIG thin films, including pulsed laser deposition (PLD),[11; 12; 13; 14; 15; 16; 17] liquid phase epitaxy (LPE) [18; 19; 20; 21] and radio-frequency (RF) magnetron sputtering.[22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] Some deposition techniques like magnetron sputtering give the opportunity to deposit both, amorphous and crystalline thin films, depending on the process temperatures during deposition.[39; 23] Here, room temperature processes yield amorphous films.[22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] For the deposition of YIG onto gadolinium gallium garnet (Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\), GGG) substrates, which feature a lattice constant very similar to the one of YIG, direct epitaxial growth was observed for process temperatures of 700\({}^{\circ}\)C.[39; 23] On quartz a post annealing step is needed to enable the formation of polycrystalline YIG.[40] The annealing process is usually performed under air[33; 24] or reduced oxygen atmosphere[37; 26] to counteract potential oxygen vacancies in the YIG lattice. For amorphous PLD films it has also been shown, that inert argon atmosphere has no deteriorating influence.[14] The vast amount of publications highlight the interest in YIG. However, the influence of the post annealing step on the crystallization of the YIG thin film is only vaguely studied in the literature. 
Here, we present an extended picture of the crystallization dynamics of YIG at different temperatures and annealing times, which allows us to define different crystallization windows depending on the substrate. Our results provide a general crystallographic description of the crystallization process for YIG thin films and summarize the crystallization parameters found in the literature. ## II Methods Ahead of the deposition, all substrates were cleaned for five minutes in aceton and isopropanol, and one minute in deionized water in an ultrasonic bath. YIG thin films were then deposited onto different substrate materials using RF sputtering from a YIG sinter target at 2.7\(\cdot\)10\({}^{-3}\) mbar argon pressure and 80 W power, at a rate of 0.0135 nm/s. The nominal thickness upon deposition was 33 nm. The post-annealing steps were carried out in a tube zone furnace under air. As substrates yttrium aluminum garnet (Y\({}_{3}\)Al\({}_{5}\)O\({}_{12}\), YAG, _CrysTec_) and gadolinium gallium garnet (Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\), GGG, _SurfaceNet_) with a <111> crystal orientation along the surface normal were used. Additionally, silicon wafers cut along the <100> crystal direction with a 500 nm thick thermal oxide layer (Si/SiO\({}_{x}\), _MicroChemicals_) were used. Since GGG and YAG crystallize in the same space group Ia3d as YIG and their lattice parameters are 1.2376 nm [41] and 1.2009 nm [42], respectively, they are considered lattice matched in regards to the 1.2380 nm for YIG [43]. The lattice mismatch \(\epsilon\) can be calculated with Eq. (1) \[\epsilon=\frac{a_{YIG}-a_{substrate}}{a_{substrate}}\cdot 100\% \tag{1}\] and translates to 0.03 % for GGG and 3.09 % for YAG [44]. Due to the amorphous SiO\({}_{\mathrm{x}}\) layer the Si/SiO\({}_{\mathrm{x}}\) substrates do not provide any preferential direction for crystallization. But even considering the underlying Si layer, we do not expect it to influence the crystallization direction in any way, as it features a fundamentally different space group (Fd3m) and lattice constant [45]. Therefore, Si/SiO\({}_{\mathrm{x}}\) is considered non lattice matched and fulfills the function as an arbitrary substrate. For the structural characterization X-ray diffraction measurements (XRD) were performed using a Rigaku Smart Lab Diffractometer with Cu \(K_{\alpha}\) radiation. Scanning electron microscopy as well as electron backscatter diffraction (EBSD) measurements were conducted using a Zeiss Gemini Scanning Electron Microscope (SEM). The magnetic properties were characterized by determining the magnetooptical polarization rotation in a Kerr-Microscope from Evico Magnetics. ## III Results and Discussion The crystallization mechanism of a thin film crucially depends on the substrate: for substrates where the lattices of film and substrate are sufficiently similar, the thin film layer crystallizes epitaxially, whereas for a substrate where the two lattices do not match, nucleation is needed. Figure 1 shows the different growth mechanisms and the resulting YIG micro structure depending on the chosen substrate. As depicted in Figure 1(a), a lattice matched substrate acts as a seed on which the film can grow epitaxially. Therefore, a single crystalline front is expected to move from the substrate on towards the upper boundary of the film [46, 47], which is commonly referred to as solid phase epitaxy (SPE) in the literature. For a substrate with a sufficiently large lattice mismatch or no crystalline order, no such surface is given, see Fig. 
1(b). Here, a nucleus needs to be formed first, from which further crystallization takes place. The formation of the initial seeds by nucleation is expected to take place randomly. The polycrystalline seeds grow until reaching another grain or one of the sample's boundaries. For any of these processes, SPE or nucleation, to take place, the system needs to be at a temperature characteristic for this specific thin film/substrate system [48]. To distinguish between amorphous, partly and fully crystalline films we apply several characterization methods, probing the structural and magnetic properties of the YIG thin films. The typical fingerprints of amorphous versus crystalline YIG on different substrates as determined by X-ray diffraction (XRD), magneto-optical Kerr microscopy (MOKE) and electron backscatter diffraction (EBSD) are depicted in Figure 2. From top to bottom we gain an increased spatial resolution, probing increasingly smaller areas of the sample. With XRD, the structural properties of YIG on YAG and GGG can be evaluated. For the amorphous films, the XRD measurements in Fig. 2 (a-c) show a signal stemming only from the substrate (cp. gray dashed lines). Upon annealing, YIG is visible in the form of Laue oscillations on GGG (purple) and as a peak on YAG (red). In stark contrast, no signal which could be attributed to YIG can be found on SiO\({}_{\mathrm{x}}\), even when annealing at 800 \({}^{\circ}\)C for 48 h. The sharp peak in Fig. 2(c) at 32.96\({}^{\circ}\) can be attributed to a detour excitation of the substrate, as it is visible in the as-deposited state and fits the forbidden Si (200) peak [49]. In the literature, YIG on SiO\({}_{\mathrm{x}}\) has been reported to be polycrystalline at lower annealing temperatures than in the exemplary data shown in Fig. 2(c) [24, 26, 40]. These films show peaks in the XRD; however, they were at least one order of magnitude thicker. We therefore do not expect the YIG layer on Si/SiO\({}_{\mathrm{x}}\) to be amorphous, which will be confirmed in the following. By probing the magnetic properties of the thin films with MOKE (cp. Fig. 2(d-f)), a clear distinction between amorphous and crystalline YIG can be made. While the film shows a linear MOKE signal in the as-deposited state, it changes to a hysteresis for all three samples upon annealing. In general, the sharpest hysteresis is visible for YIG on GGG, and it becomes broader for an increasing structural misfit. Naively, polycrystalline samples are expected to consist of multiple domains pointing in different directions, which leads to an increase of the coercive field. This is consistent with our results and also with the magnetic properties found in the literature [13, 25, 32, 50]. These coercive fields are below 0.1 mT for YIG on GGG [13, 32] and between 2.2 and 3 mT for YIG on Si/SiO\({}_{\mathrm{x}}\)[26, 50]. The MOKE measurements therefore indicate the spontaneous formation of a phase with magnetic ordering on all three substrates. While MOKE correlates the magnetic properties with amorphous and crystalline films, it lacks the ability to quantify the amount of crystalline YIG. The hysteretic response for the annealed YIG on SiO\({}_{\mathrm{x}}\) strongly supports the formation of crystalline YIG; however, we cannot correlate this to a percentage of crystalline material. Figure 1: Expected crystallization of an amorphous, as-deposited (a.d.) YIG thin film on lattice matched substrates (a) and non lattice matched substrates (b). In the first case of solid phase epitaxy, a homogeneous crystal front forms at the substrate and propagates towards the upper thin film border. For the latter, nucleation is necessary and crystallites form in various orientations. This results in a single crystalline (sc) film for the epitaxy and a polycrystalline (pc) film when nucleation occurs.
Therefore, a structural characterization with higher spatial resolution than XRD is needed. To that end, electron backscatter diffraction (EBSD) measurements were performed. With this technique, Kikuchi patterns, which are correlated to the crystal structure, are detected and later evaluated. The results are shown for crystalline samples only, as the amorphous film showed no Kikuchi patterns. This confirms that the detected patterns stem from the YIG thin film itself and not from the crystallographically similar substrates of YAG or GGG. This is consistent with the EBSD signal depth of 10 to 40 nm given in the literature.[51] The extracted crystal orientations along the surface normal can be seen in Fig. 2 (g-i). On YAG and GGG a single color corresponding to the <111> direction is visible in the mapping, which is consistent with the XRD data and corroborates the epitaxial growth from the substrate in the <111> direction. On SiO\({}_{\mathrm{x}}\), however, various crystal directions are present, confirming the polycrystalline nature of the YIG. The crystallographic data from our EBSD measurements show random nucleation. The cross shape of the individual crystalline areas points towards an anisotropic crystallization with a preferential direction along <110> or higher indexed directions like <112>, which is consistent with earlier studies on YIG and other rare earth garnets [22, 53, 54, 55] as well as PLD grown bismuth iron garnet [11]. Figure 2: (a)-(c) XRD spectra of YIG thin films pre and post annealing on different substrates as given above in the respective column. The nominal positions of the substrate and the thin film are marked by the grey and black dashed lines, respectively. The additional peak marked with Si(200) in (c) is a detour reflex from the substrate. (d)-(f) Background corrected Kerr microscopy data for the same samples before and after the annealing procedure. The change in grey value corresponds to a change in the magnetization of the sample. (g)-(i) Crystal orientation of the post annealed YIG thin films along the surface normal as extracted from the Kikuchi patterns determined by EBSD. The as-deposited films showed no Kikuchi patterns and are therefore not shown here. Using EBSD allows for a quantification of the amount of crystalline material for a YIG thin film on SiO\({}_{\mathrm{x}}\) or any arbitrary substrate. Combining the magnetic and structural data from MOKE and EBSD, respectively, allows for an unambiguous identification of the formation of polycrystalline YIG on SiO\({}_{\mathrm{x}}\), which was not possible by XRD. We presume that the absence of any XRD peaks results from the small volume of the individual crystallites of YIG on SiO\({}_{\mathrm{x}}\). We approximate the volume of a single polycrystalline grain, i.e. one cross from the EBSD data (cp. Fig. 2(i)), to be 0.5 \(\mu\)m\({}^{3}\), stemming from an area of about 15 \(\mu\)m\({}^{2}\) and a film thickness of 32 nm. This is also the size of the individual grains contributing to the diffraction within the XRD.
Assuming a single crystalline thin film, where the whole irradiated area contributes additively, the contributing area amounts to \(7\cdot 10^{5}\mu\)m\({}^{3}\), which is six orders of magnitude larger than that of an individual grain. Therefore, the contributions of the individual grains of the YIG layer on SiO\({}_{\text{x}}\) to the XRD intensity are too small to result in a finite peak for a 30 nm thick film. These results provide the basis for the investigation of the crystallization behavior and reveal how different techniques enable us to distinguish between amorphous, partly and fully crystalline films. We utilize the structural information to analyze the crystallization dynamics on the different substrates. The percentage of crystalline YIG was quantified differently for the three different substrates. For YIG on YAG the amount of crystalline YIG correlates to the intensity of the Bragg peak. A certain film thickness corresponds to a maximum area under the peak, to which the intensity is normalized. For YIG on GGG, the percentage of crystalline YIG is extracted from the Laue oscillations (cp. Fig 2(a)). The frequency of the oscillation corresponds to the number of interfering lattice planes, enabling the calculation of the thickness of the crystalline layer. Using X-ray reflectivity, the absolute film thickness was measured for each film. Comparing the thickness of the crystalline layer with the film thickness then enables to monitor the growth of YIG on GGG. For the non lattice matched substrates, EBSD mappings were taken to extract the amount of crystalline YIG. For each of the YIG thin films, a percentage of crystalline YIG at a given time and temperature is extracted, which allows an evaluation of the crystallization process for this specific temperature. First, we find the onset temperature for the crystallization of YIG on each substrate. As crystallization is thermally activated, it depends exponentially on the annealing temperature [55], which leads to a very narrow temperature window of incomplete crystallization. To extract this window, multiple samples were annealed for four hours at different temperatures. Figure 3(a) exemplarily shows the results for YIG on YAG, where after 575 \({}^{\circ}\)C a steep increase in intensity up to a maximum at 600 \({}^{\circ}\)C can be seen. This intensity value stays the same for 700 \({}^{\circ}\)C, which suggests, that the YIG film is fully crystallized and no further increase in intensity is expected. A crystalline YIG film on YAG can therefore be obtained at a temperature range around 600 \({}^{\circ}\)C. For YIG on GGG this window is found to be slightly below 600 \({}^{\circ}\)C, whereas on SiO\({}_{\text{x}}\), we found the first indication of crystallization at 700 \({}^{\circ}\)C. For our samples, the heating up and cooling down is included in the annealing time. An in-situ study on a representative sample with \(d_{\text{YIG}}=100\) nm yielded data in a good agreement to the crystallization behavior in the one zone furnace. Please note that the use of different equipment led to a small variation in absolute temperature (see supplementary information). The lower intensities at 800 \({}^{\circ}\)C and above (cp. Fig. 3(a)) hint towards the occurrence of competing growth processes. 
We attribute the reduction in intensity at annealing temperatures above 800 \({}^{\circ}\)C to additional growth of polycrystalline grains enabled by the elevated temperatures, which competes with the epitaxial growth and thereby reduces the crystal quality of the thin film. Analyzing the rocking curves of these samples (see supplementary information) confirms an increased full width at half maximum value towards higher temperatures. This can be correlated with a lower crystal quality, which supports an additional polycrystalline growth. To study the crystallization dynamics, the time evolution of the intensity for a given temperature was evaluated, again representatively shown for YIG on YAG in Fig. 3(b). Here, one sample was subjected to the same temperature for multiple repeats until the extracted intensity of the YIG peak, and therewith the crystalline area, saturated. This saturation represents a fully crystallized thin film, where no further changes are expected. Figure 3: Intensity evolution of the (444) YIG Bragg peak as a function of the annealing temperature for a constant annealing time of 4 h (a) and for various times at a constant temperature of 600 \({}^{\circ}\)C (b). The dotted lines act as a guide to the eye. To describe the crystallization at an arbitrary temperature, we find a general crystallographic description for each of the substrates. A phase transition in a solid, like crystallization, can generally be described by the Avrami equation [55, 56, 57, 58]: \[\theta_{\text{c}}=1-e^{-kt^{n}} \tag{2}\] where \(\theta_{\text{c}}\) is the crystallinity normalized to one with respect to a complete crystallization, \(k\) the rate constant and \(t\) the annealing time. The exponent \(n\) is often referred to as the Avrami exponent and describes how the crystallization takes place.[57] It can take values between 1 and 4, where one contribution stems from the nucleation and takes values of 0 for controlled and 1 for random nucleation, while the other contributions originate from the type of crystallization in the three spatial directions. For the rate constant \(k\) we use an exponential Arrhenius dependency:[59; 46] \[k=k_{0}\cdot e^{\frac{-E_{A}}{k_{B}T}} \tag{3}\] where both the pre-factor \(k_{0}\) and the activation energy \(E_{A}\) are unique for each combination of film and substrate material. The Avrami equation (cp. Eq. (2)) lets us describe the crystallization on all three substrates. To that end we fit the normalized crystallinity values of YIG with the Avrami equation (cp. Eq. (2)), where we fix the Avrami exponent \(n\) between 1 and 4 (cp. Fig. 4(a)+(b)). The rate constants \(k\) then describe the growth velocities on the respective substrate in h\({}^{-1}\). The crystallization behavior of YIG on GGG and YAG at an annealing temperature of 600 \({}^{\circ}\)C is shown in Fig. 4(a). On GGG at 600 \({}^{\circ}\)C (cp. Fig. 4(a)), YIG immediately starts to crystallize with a rate constant of 1.68 h\({}^{-1}\) and an Avrami exponent of 1. This means that the crystallization takes place without nucleation and in one spatial direction, which is consistent with the monotonically moving crystallization front expected for SPE. The rate constant translates to a starting velocity of 0.84 nm/min for the 30 nm films. Towards longer annealing times, the curve flattens, meaning that the crystalline material reaches the sample's surface.
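As an illustration of how Eqs. (2) and (3) are applied, the short Python sketch below fits crystallinity-versus-time data with the Avrami form and converts the fitted rate constant into an initial growth velocity. The numerical values (k = 1.68 h\({}^{-1}\), n = 1 for YIG on GGG at 600 \({}^{\circ}\)C, 30 nm film thickness) are the ones quoted in the text; the synthetic data and the use of scipy's curve_fit are our own illustrative choices, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """Eq. (2): crystallized fraction theta_c after annealing time t (hours)."""
    return 1.0 - np.exp(-k * t**n)

# Values quoted in the text for YIG on GGG at 600 °C (30 nm film)
k_GGG, n_GGG, thickness_nm = 1.68, 1.0, 30.0

# Synthetic "measured" crystallinity curve generated from the quoted parameters plus small noise
rng = np.random.default_rng(0)
t = np.linspace(0.1, 3.0, 15)                                   # annealing time in hours
theta_c = avrami(t, k_GGG, n_GGG) + rng.normal(0.0, 0.005, t.size)

(k_fit, n_fit), _ = curve_fit(avrami, t, theta_c, p0=(1.0, 1.0))

# Growth velocity along the surface normal: n-th root of k times the film thickness
v = k_fit**(1.0 / n_fit) * thickness_nm / 60.0                   # nm/min
print(f"k = {k_fit:.2f} 1/h, n = {n_fit:.2f}, v = {v:.2f} nm/min")  # approx. 0.84 nm/min
```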
The crystallization of YIG on YAG shows an initial time delay, despite the comparably small lattice mismatch of 3.09 % (cp. Fig. 4(a)). The fitting of the data at 600 \({}^{\circ}\)C leads to a rate constant of 0.10 h\({}^{-1}\) with \(n=3.8\). This means that the crystallization does not follow a typical SPE behavior and nucleation processes in the thin film cannot be excluded. However, also for the crystallization on YAG, single crystalline YIG is obtained (cp. Fig. 2(b)+(h)). This deviation from YIG on GGG is most likely due to the larger lattice mismatch, which causes an energetically costly strain in the film.[60] The crystallization velocity along the surface normal direction is obtained by taking the \(n\)-th root of the rate constant and translates to 0.27 nm/min. The crystallization of YIG on SiO\({}_{\mathrm{x}}\) is fundamentally different (cp. Fig. 4(b)). Here, polycrystalline grains could be found at temperatures of 675 \({}^{\circ}\)C and above. The time evolution of the crystallinity is depicted in Figure 4(b), where fitting the data with the Avrami equation (Eq. (2)) yields \(n=4\) and a rate constant of 9.9\(\cdot\)10\({}^{-5}\) h\({}^{-1}\). This confirms our initial hypothesis of nucleation and subsequent three-dimensional growth. Higher temperatures compared to the garnet substrates are needed to provide enough energy for nucleation, which causes the crystallization process to be visible only at 675 \({}^{\circ}\)C and above. An approximation of the crystallization velocity can be extracted from the EBSD data. Here, we assume that the crystallization starts in the middle of a cross-shaped structure (cp. Fig. 2(i)) and stops when reaching a boundary given by neighboring crystallites. The distance covered depends on the number of nuclei formed and is highly dependent on the direction. To ensure comparability with the two lattice matched substrates, we consider grains growing in plane along the <111> direction. At 700 \({}^{\circ}\)C, the YIG crystallites on SiO\({}_{\mathrm{x}}\) measured up to 10 \(\mu\)m in length after at least 12 h of annealing. This translates into a propagation velocity of 16.7 nm/min at 700 \({}^{\circ}\)C on an arbitrary substrate along the <111> direction. To compare the three crystallization velocities, the temperature dependence of the rate constants \(k\) needs to be taken into consideration. Using the Arrhenius equation (Eq. (3)) we are able to extrapolate the crystallization rate at any temperature. To that end, the logarithm of each rate constant is plotted over the inverse temperature. The linear dependency of Eq. (3) in the logarithmic plot allows us to extract the activation energy and the pre-factor \(k_{0}\) for YIG on each substrate. The resulting values are listed in Tab. 1. Compared to the literature, both the activation energy \(E_{A}\) and the pre-factor \(v_{0}\) seem plausible. Figure 4: (a) and (b) show the time evolution of the YIG crystallization on the three substrates after normalizing the data with the maximum value to 1. The dots represent the crystallinity values from XRD (YAG/GGG) and EBSD (SiO\({}_{\mathrm{x}}\)), while the solid lines show the fit of the data using Eq. (2). Because of the inherently different crystallization processes, the time scales and the temperatures differ. Conducting these time evolutions at different temperatures for each substrate results in a rate constant \(k(T)\) for this temperature. A logarithmic representation of the \(k(T)\) values over the inverse temperature is given by the symbols in (c).
For each substrate a linear expression was fitted, where the slope represents the activation energy \(E_{A}\) and the \(y\)-axis intercept the pre-factor \(k_{0}\) for YIG on each substrate. While at first glance the crystallization velocity for YIG on SiO\({}_{\mathrm{x}}\) seems faster, the different annealing temperatures of 600 \({}^{\circ}\)C for the garnet substrates and 700 \({}^{\circ}\)C for SiO\({}_{\mathrm{x}}\) need to be taken into account (cp. Fig. 4). Extrapolating the growth velocity for YIG on GGG to 700 \({}^{\circ}\)C reveals that here YIG would crystallize approximately 30 times faster than on SiO\({}_{\mathrm{x}}\). Our activation energy of 3.9 eV for YIG on GGG is in good agreement with the literature. For the formation of bulk YIG from oxide powders, a value of 5.08 eV was reported.[61] Further, for the crystallization of bulk polycrystalline YAG, which is expected to behave similarly as it has the same crystal structure, an activation energy of 4.5 eV was found.[62] The lower value of 3.9 eV for YIG on GGG highlights the reduced energy needed due to the growth from the lattice matched GGG. The activation energies for YIG on YAG as well as on SiO\({}_{\mathrm{x}}\) are much higher than the value on GGG. As the general crystallization windows and times needed for a fully crystalline film stay the same, we ascribe this behavior to a kinetic blocking originating from the lattice mismatch and the nucleation. Understanding the exact mechanism, however, would need further study. These results allow us to establish a diagram that shows which annealing parameters will lead to a fully crystalline YIG thin film on the three substrates (cp. Fig. 5(a)). For a mathematical description, we combine the Avrami equation Eq. (2) with the Arrhenius equation Eq. (3) to express the crystallinity in terms of annealing time and temperature: \[t=\left(\left[-\frac{\ln(1-\theta_{\rm c})}{k_{0}}\right]\cdot e^{\frac{E_{A}}{k_{B}T}}\right)^{\frac{1}{n}} \tag{4}\] We use a crystallinity \(\theta_{\rm c}\) of 0.999 to avoid the divergence of the logarithm and the respective \(n\), \(k_{0}\) and \(E_{A}\) found in Tab. 1. Figure 5(a) outlines the temperature and time combinations where crystalline YIG (shaded areas) can be obtained. Regions where the YIG thin film remains amorphous are left in white. The boundary between non crystalline and crystalline for each substrate is given by Eq. (4). Each of the circles seen in Fig. 5(a) represents one fully crystalline sample obtained as described for Fig. 3(b). The filled circles represent fully crystalline samples, where no time dependence of the crystallinity was measured.
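To make the use of Eq. (4) concrete, the sketch below evaluates the annealing time required to reach \(\theta_{\rm c}=0.999\) as a function of temperature for YIG on GGG. The activation energy of 3.9 eV and the 600 \({}^{\circ}\)C rate constant are the values quoted in the text; since the tabulated pre-factor is not reproduced here, we back \(k_{0}\) out of the 600 \({}^{\circ}\)C fit as an illustrative assumption, so the printed times are order-of-magnitude estimates rather than the Tab. 1 result.

```python
import numpy as np

k_B = 8.617e-5                            # Boltzmann constant in eV/K
E_A, n = 3.9, 1.0                         # activation energy (eV) and Avrami exponent for YIG on GGG
k_600C, T_600C = 1.68, 600.0 + 273.15     # rate constant (1/h) quoted at 600 °C

# Illustrative pre-factor backed out of the 600 °C fit via Eq. (3); the Tab. 1 value may differ
k0 = k_600C * np.exp(E_A / (k_B * T_600C))

def annealing_time(T_celsius, theta_c=0.999):
    """Eq. (4): time (hours) to reach crystallinity theta_c at temperature T."""
    T = T_celsius + 273.15
    return (-np.log(1.0 - theta_c) / k0 * np.exp(E_A / (k_B * T)))**(1.0 / n)

for T in (537.0, 600.0, 660.0):
    # a few hours at 600 °C and of order 10^2 h near 537 °C, compatible with the ~110 h quoted
    # in the conclusion given the rounded E_A and k used here
    print(f"{T:5.0f} °C -> {annealing_time(T):8.1f} h")
```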
Fig. 5: (a) Annealing parameters to obtain a fully crystalline YIG film on the respective substrates. We expect every point in the colored area to yield a fully crystalline sample. We use Eq. (4) with the values obtained in Fig. 4(c) to determine the boundary separating crystalline YIG (shaded areas, sc = single crystalline, pc = poly crystalline) from amorphous YIG (white areas). The open circles represent the samples from Fig. 4(c) which are used for the fit. Further studied, fully crystalline samples are marked by the full circles. There are different regions where the YIG is fully crystalline depending on the substrate. Panel (b) gives a comparison of our crystallization diagram with other studies.[22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38] Note that, while we here consider only the crystallization of sputtered thin films by post annealing, the crystallization diagram also fits for comparable samples obtained by PLD.[11, 12, 13, 14, 15, 16] As already anticipated, YIG exhibits different crystallization behavior depending on the substrate. Note that polycrystalline growth on SiO\({}_{\mathrm{x}}\) or any arbitrary substrate needs notably higher temperatures than epitaxial growth; here, an annealing at 660 \({}^{\circ}\)C for 100 h would be necessary to result in a fully crystalline film. The different temperatures and times necessary to induce crystallization stem from the different types of substrates. For YIG on GGG and YAG the seed for the crystallization is given by the lattice of the substrate. Therefore, we ascribe the discrepancy between YAG and GGG to the different lattice mismatch compared to YIG. In the YIG thin films on YAG a higher strain is expected to exist in the film, which leads to the formation of energetically costly dislocations. This in turn results in the slightly higher temperature needed for YIG to crystallize on YAG. On SiO\({}_{\mathrm{x}}\), however, a significantly higher temperature than for the lattice matched substrates is needed for crystalline YIG to form. Here, as no initial seed is given by the substrate, nucleation is required, which is a thermally activated process that needs additional energy, i.e. higher temperatures. This random formation of seeds leads to a polycrystalline YIG thin film on SiO\({}_{\mathrm{x}}\). A comparison with the literature shows that parameters which have been previously reported to result in a fully crystalline YIG layer fit into our extracted area (cp. Fig. 5(b)).[22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] In addition to the sputtered films, amorphous films obtained from PLD with subsequent annealing also fit in the observed regions.[11; 12; 13; 14; 15; 16] The extracted diagram in Fig. 5 therefore acts as a general description for the crystallization of YIG thin films out of the amorphous phase. ## IV Conclusion An extensive time and temperature series was used to analyze the crystallization kinetics of sputtered amorphous YIG thin films on different substrates. We find the formation of single crystalline YIG thin films on garnet substrates, where the growth on gadolinium gallium garnet can be coherently described in a solid-phase epitaxy picture, whereas a more complicated growth scheme is found on yttrium aluminum garnet. On SiO\({}_{\mathrm{x}}\) a polycrystalline YIG thin film develops, with slower crystallization dynamics than for the garnet substrates. A fully crystalline YIG film on GGG was found for temperatures as low as 537 \({}^{\circ}\)C and annealing times of 110 h. On silicon oxide (representing any type of amorphous or non lattice matched substrate), the nucleation of the YIG crystals is not expected for reasonable time scales below 660 \({}^{\circ}\)C. The results summarized in Tab. 1 allow for the determination of the crystallization velocity of YIG on those substrates for any temperature.
Thus, we provide a complete description of the crystallization process from the amorphous phase for YIG on GGG, YAG and arbitrary substrates such as SiO\({}_{\text{x}}\), which allows us to define the range in which crystalline YIG thin films can be obtained. ## V Acknowledgments This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 446571927. We cordially thank F. Michaud and J. Ben Youssef from the Universite de Bretagne Occidentale in Brest (France) for fruitful discussions and for letting us use their in-situ X-ray diffractometer. We also gratefully acknowledge technical support and advice by the nano.lab facility of the University Konstanz.
2303.05618
Tropical Geometry, Quantum Affine Algebras, and Scattering Amplitudes
The goal of this paper is to make a connection between tropical geometry, representations of quantum affine algebras, and scattering amplitudes in physics. The connection allows us to study important and difficult questions in these areas: (1) We give a systematic construction of prime modules (including prime non-real modules) of quantum affine algebras using tropical geometry. We also introduce new objects which generalize positive tropical Grassmannians. (2) We propose a generalization of Grassmannian string integrals in physics, in which the integrand is no longer a finite, but rather an infinite product indexed by prime modules of a quantum affine algebra. We give a general formula of $u$-variables using prime tableaux (corresponding to prime modules of quantum affine algebras of type $A$) and Auslander-Reiten quivers of Grassmannian cluster categories. (3) We study limit $g$-vectors of cluster algebras. This is another way to obtain prime non-real modules of quantum affine algebras systematically. Using limit $g$-vectors, we construct new examples of non-real modules of quantum affine algebras.
Nick Early, Jian-Rong Li
2023-03-09T23:15:09Z
http://arxiv.org/abs/2303.05618v4
# Tropical Geometry, Quantum Affine Algebras, and Scattering Amplitudes ###### Abstract. The goal of this paper is to make a connection between tropical geometry, representations of quantum affine algebras, and scattering amplitudes in physics. The connection allows us to study important and difficult questions in these areas: 1. We give a systematic construction of prime modules (including prime non-real modules) of quantum affine algebras using tropical geometry. We also introduce new objects which generalize positive tropical Grassmannians. 2. We propose a generalization of Grassmannian string integrals in physics, in which the integrand is a product indexed by prime modules of a quantum affine algebra. We give a general formula of \(u\)-variables using prime tableaux (corresponding to prime modules of quantum affine algebras of type \(A\)) and Auslander-Reiten quivers of Grassmannian cluster categories. 3. We study limit \(g\)-vectors of cluster algebras. This is another way to obtain prime non-real modules of quantum affine algebras systematically. Using limit \(g\)-vectors, we construct new examples of non-real modules of quantum affine algebras. ###### Contents * 1 Introduction * 2 Quantum Affine Algebras and Hernandez-Leclerc's Category \(\mathcal{C}_{\ell}\) * 3 Grassmannian Cluster Algebras and Tropical Grassmannians * 4 Newton Polytopes and Tropical Fans for Grassmannian Cluster Algebras * 5 Semistandard Young Tableaux and Generalized Root Polytopes * 6 From Facets to Prime Modules * 7 Explicit Description of 2-column Prime Tableaux * 8 More evidence of Conjecture 6.1 * 9 Newton Polytopes and Tropical Fans for Quantum Affine Algebras * 10 Physical Motivation: Stringy Integrals and CEGM Scattering Amplitudes * 11 Limit \(g\)-vectors, Limit Facets, and Prime Non-real Modules * 12 Limit \(g\)-vectors for Type \(D_{n}\) Quantum Affine Algebras * 13 Discussion ## 1. Introduction Quantum groups were introduced independently by Drinfeld [32] and Jimbo [69] around 1985. A quantum affine algebra is a Hopf algebra that is a \(q\)-deformation of the universal enveloping algebra of an affine Lie algebra [28]. Quantum affine algebras have many applications to physics, for example, to the theory of solvable lattice models in quantum statistical mechanics [11, 51], integrable systems [36]. Quantum affine algebras also have many connections to different areas of mathematics, for example, cluster algebras [65], KLR algebras [72], geometric representation theory [88], representations of affine Hecke algebras and \(p\)-adic groups [31, 59, 79]. Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and \(U_{q}(\widehat{\mathfrak{g}})\) the corresponding quantum affine algebra [28]. Chari and Pressley have classified simple finite dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-modules. They proved that every simple finite dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-module corresponds to an \(I\)-tuple of Drinfeld polynomials, where \(I\) is the set of vertices of the Dynkin diagram of \(\mathfrak{g}\), and equivalently, corresponds to a dominant monomial \(M\) in certain formal variables \(Y_{i,a}\), \(i\in I\), \(a\in\mathbb{C}^{*}\). The simple \(U_{q}(\widehat{\mathfrak{g}})\)-module corresponding to \(M\) is denoted by \(L(M)\). A simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\) is called prime if it is not isomorphic to \(L(M^{\prime})\otimes L(M^{\prime\prime})\) for any non-trivial modules \(L(M^{\prime})\), \(L(M^{\prime\prime})\), see [31]. 
When \(\mathfrak{g}=\mathfrak{sl}_{2}\), all prime modules of \(U_{q}(\widehat{\mathfrak{sl}_{2}})\) are Kirillov-Reshetikhin modules [31]. Kirillov-Reshetikhin modules are simple \(U_{q}(\widehat{\mathfrak{g}})\)-modules which correspond to dominant monomials of the form \(Y_{i,s}Y_{i,s+2d_{i}}\cdots Y_{i,s+2rd_{i}}\), where \(i\in I\), and \(d_{i}\)'s are diagonal entries of a diagonal matrix \(D\) such that \(DC\) is symmetric, \(C\) is the Cartan matrix of \(\mathfrak{g}\) (we choose \(D\) such that \(d_{i}\)'s are as small as possible). In general, to classify prime modules of \(U_{q}(\widehat{\mathfrak{g}})\) is an important and difficult problem in representation theory, see for example, [16, 26, 31, 65, 86]. Hernandez and Leclerc [65] made a breakthrough to the problem of constructing prime modules of \(U_{q}(\widehat{\mathfrak{g}})\) using the theory of cluster algebras [53]. For every simple Lie algebra \(\mathfrak{g}\) over \(\mathbb{C}\), they constructed a cluster algebra with initial cluster variables given by certain Kirillov-Reshetikhin modules. Using cluster algebras, prime modules can be generated using a procedure called mutation. The prime modules generated in this way are cluster variables. They conjectured that all cluster variables (resp. cluster monomials) are real prime modules (resp. real modules), and all real prime modules (resp. real modules) are cluster variables (resp. cluster monomials). Here a simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\) is called real if \(L(M)\otimes L(M)\) is still simple [78]. One direction of their conjecture "all cluster variables (resp. cluster monomials) are real prime modules (resp. real modules)" is proved by Qin in [89] and by Kang, Kashiwara, Kim, Oh, and Park in [72, 73, 74, 75]. The other direction "all real prime modules (resp. real modules) are cluster variables (resp. cluster monomials)" of the conjectural is widely open [68]. It is shown in [37] that all prime snake modules of types \(A,B\) are cluster variables and in [39] that all snake modules of types \(A,B\) are cluster monomials. Recently, Lapid and Minguez [79] classified all real simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules (in the language of representations of \(p\)-adic groups) satisfying a certain condition called regular. This classification is surprisingly related to the classification of rationally smooth Schubert varieties in type \(A\) flag varieties. They also gave more conjectures and results in a more recent work [80]. By the results in [72, 73, 74, 75, 89], using the procedure of mutations, one can generate a large family of prime modules (these prime modules are cluster variables). On the other hand, there are many prime modules which are not real (and thus not cluster variables). These non-real modules are also important in applications. For example, it is shown in [7, 34, 63] that non-real prime tableaux (corresponding to non-real prime modules by [23]) determine the so-called square roots and are used to construct algebraic letters in the computations of Feynman integrals in the study of scattering amplitudes in \(\mathcal{N}=4\) super Yang-Mills theory in physics. The goal of this paper is to make a connection between tropical geometry, representations of quantum affine algebras, and scattering amplitudes in physics. 
The connection allows us to study important and difficult questions in these areas: use tropical geometry to construct prime modules (conjecturally we can obtain all prime modules including prime non-real modules) of quantum affine algebras, to propose a generalization of Grassmannian string integrals in physics, and study limit \(g\)-vectors of cluster algebras. First consider the case where \(\mathfrak{g}\) is of type \(A\), i.e. \(\mathfrak{g}=\mathfrak{sl}_{k}\) for some positive integer \(k\). Simple modules of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\) correspond to dual canonical basis elements of a quotient \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) of the Grassmannian cluster algebra \(\mathbb{C}[\operatorname{Gr}(k,n)]\)[65, 23]. We define Newton polytopes \(\mathbf{N}_{k,n}^{(d)}\) by using the formula for dual canonical basis elements of \(\mathbb{C}[\operatorname{Gr}(k,n)]\), [23] as follows. Denote by \(\mathcal{T}_{k,n}^{(0)}\) the set of all one-column tableaux which are cyclic shifts of the one-column tableau with entries \(\{1,2,\ldots,k-2,k\}\). For \(d\geq 0\), we define (see Definition 4.1) \[\mathbf{N}_{k,n}^{(d)}=\operatorname{Newt}\left(\prod_{T\in\mathcal{T}_{k,n}^{ (d)}}\operatorname{ch}_{T}(x_{i,j})\right),\] where \(\mathcal{T}_{k,n}^{(d)}\) (\(d\geq 1\)) is the set of tableaux which correspond to facets of \(\mathbf{N}_{k,n}^{(d-1)}\), \(\operatorname{ch}_{T}(x_{ij})\) is the evaluation of \(\operatorname{ch}_{T}=\operatorname{ch}(T)\) on the web matrix [99] (see Section 3.3) and \(\operatorname{ch}(T)\) is given in Theorem 5.8 and Definition 5.9 in [23], see Section 3.1. The facets of the Newton polytope \(\mathbf{N}_{k,n}^{(0)}\) have been classified in recent work of the first author [45]; there are exactly of them \(\binom{n}{k}-n\), one for each of \(\binom{n}{k}-n\) generalized positive roots. We give a procedure to construct the highest \(l\)-weights (equivalently, the tableaux corresponding to the highest \(l\)-weight monomials, see [23]) of simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules from facets of \(\mathbf{N}_{k,n}^{(d)}\) explicitly, see Section 6. We conjecture that for every \(k\leq n\) and \(d\geq 0\), every facet of \(\mathbf{N}_{k,n}^{(d)}\) gives a prime module of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\) and every prime \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module corresponds to a facet of the Newton polytope \(\mathbf{N}_{k,n}^{(d)}\) for some \(d\), see Conjecture 6.1). The procedure gives a systematical way to construct prime modules of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\). We show that a module corresponding to a 2-column tableau is prime if and only if the 2-column tableau is the union of two one-column tableaux which are not weakly separated and which are noncrossing, see Theorem 7.3. We check that all the facets of \(\mathbf{N}_{3,9}^{(1)}\) correspond to prime modules of \(U_{q}(\widehat{\mathfrak{sl}_{3}})\) in Section 8. This gives more evidence of Conjecture 6.1. The study of Newton polytopes of Laurent expansions of cluster variables was initiated by Sherman and Zelevinsky in their study of rank 2 cluster algebras [102]. It was further developed in [15, 48, 71, 81, 82, 85]. Our definition of Newton polytopes in this paper involve not only cluster variables but also other prime elements in the dual canonical basis of cluster algebras. 
We also count the number of prime modules corresponding to 2-column tableaux: for \(k\leq n/2\), the number of 2-column prime tableaux is \(a_{k,n,2}-b_{k,n}\), where \(a_{k,n,m}=\prod_{i=1}^{k}\prod_{j=1}^{m}\frac{n-i+j}{k+m-i-j+1}\) and \(b_{k,n}=\binom{n}{k}+\sum_{j=1}^{k}j\binom{n}{k-j,2j,n-k-j}\), where \(\binom{n}{a,b,c}=\frac{n!}{a!b!c!}\), see Proposition 7.4. We give an explicit conjectural description of a very large family of prime \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules: for any \(k\)-element subsets \(J_{1},\ldots,J_{r}\) of \([n]\) such that each pair of them is noncrossing and not weakly separated. Then \(T=\cup_{i=1}^{r}T_{J_{i}}\) is a prime tableau, see Conjecture 8.1. This conjecture is proved in the case of \(r=2\), see Theorem 7.3. We also introduce another version (non-recursive) of Newton polytopes \(\mathbf{N}_{k,n}^{\prime(d)}\) for Grassmannian cluster algebras, see Definition 4.3. The normal fans \(\mathcal{N}(\mathbf{N}_{k,n}^{(d)})\) are generalizations of tropical Grassmannians \(\operatorname{Trop}^{+}\!G(k,n)\) (see [99]), see Section 4.2. It would be aninteresting and important problem to find a combinatorial model which describes the facets of \(\mathcal{N}(\mathbf{N}_{k,n}^{(d)})\), and to relate them to scattering amplitudes. We generalize the construction in Section 1.1 to general quantum affine algebras \(U_{q}(\widehat{\mathfrak{g}})\). For any simple Lie algebra over \(\mathbb{C}\) and \(\ell\geq 1\), we define a sequence of Newton polytopes \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) (\(d\in\mathbb{Z}_{\geq 0}\)) recursively. Let \(\mathcal{M}\) be the set of all equivalence classes of Kirillov-Reshetikhin modules of \(U_{q}(\widehat{\mathfrak{g}})\) in Hernandez and Leclerc's category \(\mathcal{C}_{\ell}\)[65, 67]. Denote \(\mathcal{M}^{(0)}=\mathcal{M}\) and \(\mathcal{M}^{(d+1)}\) (\(d\geq 0\)) the collection of equivalence classes of \(U_{q}(\widehat{\mathfrak{g}})\)-modules which correspond to facets of \[\mathbf{N}_{\mathfrak{g},\ell}^{(d)}:=\operatorname{Newt}\left(\prod_{[L(M)] \in\mathcal{M}^{(d)}}\widetilde{\chi}_{q}(L(M))/M\right),\] where \(\widetilde{\chi}_{q}(L(M))\) is the truncated \(q\)-characters [50] of the \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\), see Section 9. The definition of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) is recursive and at each step, we give an explicit construction of simple \(U_{q}(\widehat{\mathfrak{g}})\)-modules from facets of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\). We conjecture that (1) for any \(d\geq 0\), every facet of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) correspond to a prime \(U_{q}(\widehat{\mathfrak{g}})\)-module and (2) every prime \(U_{q}(\widehat{\mathfrak{g}})\)-module corresponds to a facet of the Newton polytope \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) for some \(d\), see Conjecture 9.5. We also introduce another version (non-recursive) of Newton polytopes \(\mathbf{N^{\prime}}_{\mathfrak{g},\ell}^{(d)}\) for representations of quantum affine algebras, see Definition 9.3. In 1969, Z. Koba and K. Nielsen [76] introduced an integral representation for the Veneziano-type \(n\)-point function, parametrized so that a quantum field theoretic phenomenon known as crossing-symmetry, which asserts that particles are indistinguishable from anti-particles traveling back in time, is manifest. 
The integrand is expressed as a product of certain cross-ratios, with exponents the kinematic Mandelstam parameters; the cross-ratios are solutions to a particularly combinatorially nice set of binomial algebraic equations, of the form \[u_{j_{1},j_{3}}+\prod_{\{j_{2},j_{4}\}}u_{j_{2},j_{4}}=1,\] where the product is over all pairs \(\{j_{2},j_{4}\}\) such that \(j_{1}<j_{2}<j_{3}<j_{4}\) up to cyclic rotation. These equations, later rediscovered by Brown [19] in the context of multiple zeta values and moduli spaces, characterize a certain partial compactification of the moduli space \(\mathfrak{M}_{0,n}\) of \(n\) distinct points on the Riemann sphere, which is closely related to the tropical Grassmannian \(\operatorname{Trop}G(2,n)\), and in our context a certain subset of it, the _positive_ tropical Grassmannian \(\operatorname{Trop}^{+}\!G(2,n)\). For more recent work which is important in our context, see [4, 6]. A generalization of the Koba-Nielsen string integral was announced by Arkani-Hamed, Lam and Spradlin [7] using finite-type (Grassmannian) cluster algebras, where type \(A\) corresponds to usual string amplitudes, and developed in detail in [6]. However, Grassmannian cluster algebras for \(\operatorname{Gr}(k,n)\) with \((k-2)(n-k-2)>3\) not only have infinitely-many cluster variables, but it turns out that not all physically relevant elements of Lusztig's dual canonical basis can be constructed using a finite sequence of cluster mutations (there are prime non-real elements in the dual canonical basis). Finding a systematic description of prime modules is a deep and important problem in theoretical physics, since it is exactly the cases \(k=4\) and \(n\geq 8\) which are of interest to amplitudes. In the theory of quantum affine algebras, prime modules do not have an analog in the representation theory of simple Lie algebras: a module is prime if it cannot be decomposed nontrivially as the tensor product of two other modules. The fact that representations of quantum affine algebras possess both additive and multiplicative structures appears to be very important in our context. The main physical contribution of this work is to propose the definition of a Grassmannian string integral (as in [4]) with an integrand involving a product which is now in general infinite; the key point is that the integrand should be indexed by prime tableaux, in which case our results in the rest of the paper will be directly applicable to study physical aspects of the integral. Our proposal is valid for any Grassmannian \(\operatorname{Gr}(k,n)\), where \(k=2\) corresponds to the usual Koba-Nielsen string integral. So for \(2\leq k\leq n-2\) and every \(d\geq 1\), we define \[\mathbf{I}_{k,n}^{(d)} = (\alpha^{\prime})^{a}\int_{\left(\mathbb{R}_{>0}^{n-k-1}\right)^ {\operatorname{x}(k-1)}}\left(\prod_{(i,j)}\frac{dx_{i,j}}{x_{i,j}}\right) \left(\prod_{T}\operatorname{ch}_{T}^{-\alpha^{\prime}c_{T}}(x_{i,j})\right),\] where the second product is over all tableaux \(T\) such that the face \(\mathbf{F}_{T}\) corresponding to \(T\) (see Section 6.4) is a (codimension one) facet of \(\mathbf{N}_{k,n}^{(d-1)}\), \(a\), \(\alpha^{\prime}\), \(c_{T}\) are some parameters, see Section 10.1, Formulas (10.2), (10.3), (10.4), for more details. We point out that the character polynomials \(\operatorname{ch}_{T}\) are manifestly positive in the interior of the totally nonnegative Grassmannian, see [23, Section 5.3]. 
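As a small combinatorial illustration of the binomial relations quoted above for the \(k=2\) (Koba-Nielsen) case, the following Python sketch enumerates, for each variable \(u_{j_1,j_3}\), the pairs \(\{j_2,j_4\}\) entering the product, i.e. the diagonals of the \(n\)-gon crossing \(\{j_1,j_3\}\); the encoding of the index combinatorics is our own, and the sketch does not evaluate the cross-ratios themselves.

```python
from itertools import combinations

def chords(n):
    """Diagonals of the n-gon, i.e. cyclically non-adjacent index pairs {i, j}."""
    return [(a, b) for a, b in combinations(range(1, n + 1), 2)
            if (b - a) % n not in (1, n - 1)]

def crosses(c1, c2):
    """True if the two diagonals cross inside the polygon (cyclically interleaved indices)."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return (a < c < b < d) or (c < a < d < b)

def u_equations(n):
    """For each u_{j1,j3}: the pairs {j2,j4} entering the product in the relation above."""
    all_chords = chords(n)
    return {c: [d for d in all_chords if crosses(c, d)] for c in all_chords}

for chord, crossing in u_equations(5).items():
    prod = " ".join(f"u_{{{a},{b}}}" for a, b in crossing)
    print(f"u_{{{chord[0]},{chord[1]}}} + {prod} = 1")
# e.g. u_{1,3} + u_{2,4} u_{2,5} = 1, reproducing the pentagon (n = 5) relations
```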
We give a general formula of \(u\)-variables using prime tableaux (corresponding to prime modules of quantum affine algebras of type \(A\)) and Auslander-Reiten quivers of Grassmannian cluster categories \(\operatorname{CM}(B_{k,n})\)[70]. For every mesh in the Auslander-Reiten quiver of \(\operatorname{CM}(B_{k,n})\), where we label the vertices by tableaux, we define the corresponding \(u\)-variable as \[u_{S}=\frac{\prod_{i=1}^{r}\operatorname{ch}_{T_{i}}}{\operatorname{ch}_{S} \operatorname{ch}_{S^{\prime}}}, \tag{1.1}\] see Definition 10.5. We conjecture that there are unique integers \(a_{T,T^{\prime}}\), where \(T\), \(T^{\prime}\) are prime tableaux, such that \(u\)-variables (1.1) are solutions of the system of equations \[u_{T}+\prod_{T^{\prime}\epsilon\operatorname{PSSYT}_{k,n}}u_{T^{\prime}}^{a_{ T,T^{\prime}}}=1, \tag{1.2}\] see Conjecture 10.6. The equations (1.2) are called \(u\)-equations. General \(u\)-equations have been introduced in [2, 3] in the setting of representations of quiver with relations and cluster categories of finite type. In our paper, we work in the setting of the Grassmannian cluster category \(\operatorname{CM}(B_{k,n})\)[70]. We also give a definition of stringy integral for Grassmannian cluster algebras using \(u\)-variables. Denote by \(\operatorname{PSSYT}_{k,n}\) the set of all prime tableaux of rectangular shapes and with \(k\) rows and with entries in \([n]\). For \(k\leq n\), we define \[\mathbf{I}_{k,n}^{(\infty)} = (\alpha^{\prime})^{a}\int_{\left(\mathbb{R}_{>0}^{n-k-1}\right)^ {\times(k-1)}}\prod_{i,j}\frac{dx_{i,j}}{x_{i,j}}\prod_{T\in\operatorname{ PSSYT}_{k,n}}(u_{T})^{\alpha^{\prime}U_{T}},\] where \(\alpha^{\prime}\), \(U_{T}\) are some parameters, and \(u_{T}\) is the \(u\)-variable corresponding to a prime tableau \(T\), See Definition 10.4. This new integrand involves an infinite product of \(u\)-variables, indexed by prime tableaux. We also generalize the stringy integral to the setting of general quantum affine algebras and define stringy integrals using prime modules of quantum affine algebras, see Section 10.3. Recently, Arkani-Hamed, Lam, Spradlin [7], and Drummond, Foster, Gurdogan, Kalousios [34], and Henke, Papathanasiou [63], have constructed limit \(g\)-vectors for the Grassmannian cluster algebra \(\mathbb{C}[\operatorname{Gr}(k,n)]\) using infinite sequence of mutations. These limit \(g\)-vectors do not correspond to cluster variables in \(\mathbb{C}[\operatorname{Gr}(k,n)]\) but correspond to prime elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\). Motivated by these works, we define limit \(g\)-vectors for any cluster algebra of infinite type, see Definition 11.1. We say that a facet of a Newton polytope for a quantum affine algebra is a limit facet if the facet corresponds to a \(U_{q}(\widehat{\mathfrak{g}})\)-module whose \(g\)-vector is a limit \(g\)-vector of the cluster algebra for \(U_{q}(\widehat{\mathfrak{g}})\). We say that a \(U_{q}(\widehat{\mathfrak{g}})\)-module corresponds to a limit \(g\)-vector if the \(g\)-vector of the module is a limit \(g\)-vector of the cluster algebra for \(U_{q}(\widehat{\mathfrak{g}})\). We conjecture that every module corresponding to a limit \(g\)-vector is prime and non-real. This is another way to obtain prime non-real \(U_{q}(\widehat{\mathfrak{g}})\)-modules systematically. Using limit \(g\)-vectors, we construct new examples of non-real modules of quantum affine algebras. 
As an example, we prove that the module \(L(Y_{2,-4}Y_{2,0})\) in type \(D_{4}\) is non-real, see Section 12.3. The paper is organized as follows. In Section 2, we recall results of quantum affine algebras and Hernandez-Leclerc's category \(\mathcal{C}_{\ell}\). In Section 3, we recall results of Grassmannian cluster algebras and tropical Grassmannians. In Section 4, we define a sequence of Newton polytopes and tropical fans for Grassmannian cluster algebras. In Section 5, we study relations between semistandard Young tableaux of rectangular shapes and generalized root polytopes. In Section 6, we construct prime modules from facets. In Section 7, we explicitly describe \(2\)-column prime tableaux. In Section 8, we give more evidence of Conjecture 6.1 that facets of Newton polytopes correspond to prime modules. In Section 9, we define Newton polytopes and tropical fans for general quantum affine algebras. In Section 10, we generalize Grassmannian string integrals and study \(u\)-equations and \(u\)-variables. In Section 11, we study limit \(g\)-vectors for general cluster algebras. In Section 12, we give an example of prime non-real module of \(U_{q}(\widehat{\mathfrak{g}})\) when \(\mathfrak{g}\) is of type \(D_{4}\) using limit \(g\)-vectors. In Section 13, we discuss some future directions of the paper. ### Acknowledgements The authors would like to thank Fedor Petrov for his help of proving Proposition 7.4. The authors would like to thank Nima Arkani-Hamed, Freddy Cachazo, James Drummond, Omer Gurdogan, Min Huang, Jiarui Fei, Lecheng Ren, Marcus Spradlin, and Anastasia Volovich for helpful discussions, and Georgios Papathanasiou for helpful comments. This research was supported in part by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy-EXC-2094-390783311. JRL is supported by the Austrian Science Fund (FWF): Einzelprojekte P34602. This research received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 725110), Novel structures in scattering amplitudes. ## 2. Quantum Affine Algebras and Hernandez-Leclerc's Category \(\mathcal{C}_{\ell}\) In this section, we recall results of quantum affine algebras [28, 50], Hernandez-Leclerc's category \(\mathcal{C}_{\ell}\) and cluster algebra structure on the Grothendieck ring of \(\mathcal{C}_{\ell}\)[65]. ### Quantum Affine Algebras Let \(\mathfrak{g}\) be a simple finite-dimensional Lie algebra and \(I\) the set of vertices of the Dynkin diagram of \(\mathfrak{g}\). Denote by \(\{\omega_{i}:i\in I\}\), \(\{\alpha_{i}:i\in I\}\), \(\{\alpha_{i}^{\vee}:i\in I\}\) the set of fundamental weights, the set of simple roots, the set of simple coroots, respectively. Denote by \(P\) the integral weight lattice and \(P^{+}\) the set of dominant weights. The Cartan matrix is \(C=(\alpha_{j}(\alpha_{i}^{\vee}))_{i,j\in I}\). Let \(D=\operatorname{diag}(d_{i}:i\in I)\), where \(d_{i}\)'s are minimal positive integers such that \(DC\) is symmetric. Let \(z\) be an indeterminate. The quantum Cartan matrix \(C(z)\) is defined as follows: for \(i\in I\), \(C_{i,i}(z)=z^{d_{i}}+z^{-d_{i}}\), and for \(i\neq j\in I\), \(C_{ij}(z)=[C_{ij}]_{z}\), where \([m]_{z}=\frac{z^{m}-z^{-m}}{z-z^{-1}}\), see [50]. 
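To make these conventions concrete, here is a small bookkeeping sketch of our own (Laurent polynomials in \(z\) are stored as exponent-coefficient dictionaries) computing \([m]_{z}\) and the quantum Cartan matrix \(C(z)\); for type \(B_{2}\) we use the Cartan matrix \([[2,-1],[-2,2]]\), for which \(D=\operatorname{diag}(2,1)\).

```python
def bracket(m):
    # [m]_z = (z^m - z^{-m}) / (z - z^{-1}), stored as {exponent: coefficient}
    if m == 0:
        return {}
    if m < 0:
        return {e: -c for e, c in bracket(-m).items()}
    return {m - 1 - 2 * t: 1 for t in range(m)}

def quantum_cartan(C, d):
    # C_ii(z) = z^{d_i} + z^{-d_i}, and C_ij(z) = [C_ij]_z for i != j
    n = len(C)
    return [[({d[i]: 1, -d[i]: 1} if i == j else bracket(C[i][j])) for j in range(n)]
            for i in range(n)]

A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # type A_3: C_ii(z) = z + z^{-1}, off-diagonal -1 or 0
print(quantum_cartan(A3, [1, 1, 1]))
B2 = [[2, -1], [-2, 2]]                      # type B_2: C_11(z) = z^2 + z^{-2}, C_21(z) = -(z + z^{-1})
print(quantum_cartan(B2, [2, 1]))
```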
The quantum affine algebra \(U_{q}(\widehat{\mathfrak{g}})\) is a Hopf algebra that is a \(q\)-deformation of the universal enveloping algebra of \(\widehat{\mathfrak{g}}\)[33, 69]. In this paper, we take \(q\) to be a non-zero complex number which is not a root of unity. Denote by \(\mathcal{P}\) the free abelian group generated by formal variables \(Y_{i,a}^{\pm 1}\), \(i\in I\), \(a\in\mathbb{C}^{*}\), denote by \(\mathcal{P}^{+}\) the submonoid of \(\mathcal{P}\) generated by \(Y_{i,a}\), \(i\in I\), \(a\in\mathbb{C}^{*}\). Let \(\mathcal{C}\) denote the monoidal category of finite-dimensional representations of the quantum affine algebra \(U_{q}(\widehat{\mathfrak{g}})\). Any finite dimensional simple object in \(\mathcal{C}\) is a highest \(l\)-weight module with a highest \(l\)-weight \(m\in\mathcal{P}^{+}\), denoted by \(L(m)\) (see [29]). The elements in \(\mathcal{P}^{+}\) are called dominant monomials. Frenkel and Reshetikhin [50] introduced the \(q\)-character map which is an injective ring morphism \(\chi_{q}\) from the Grothendieck ring of \(\mathcal{C}\) to \(\mathbb{Z}\mathcal{P}=\mathbb{Z}[Y^{\pm 1}_{i,a}]_{i\in I,a\in\mathbb{C}^{\times}}\). For a \(U_{q}(\widehat{\mathfrak{g}})\)-module \(V\), \(\chi_{q}(V)\) encodes the decomposition of \(V\) into common generalized eigenspaces for the action of a large commutative subalgebra of \(U_{q}(\widehat{\mathfrak{g}})\) (the loop-Cartan subalgebra). These generalized eigenspaces are called \(l\)-weight spaces and generalized eigenvalues are called \(l\)-weights. One can identify \(l\)-weights with monomials in \(\mathcal{P}\)[50]. Then the \(q\)-character of a \(U_{q}(\widehat{\mathfrak{g}})\)-module \(V\) is given by (see [50]) \[\chi_{q}(V)=\sum_{m\in\mathcal{P}}\dim(V_{m})m\in\mathbb{Z}\mathcal{P},\] where \(V_{m}\) is the \(l\)-weight space with \(l\)-weight \(m\). Let \(\mathcal{Q}^{+}\) be the monoid generated by \[A_{i,a}=Y_{i,aq_{i}}Y_{i,aq_{i}^{-1}}\left(\prod_{j:C_{ji}=-1}Y_{j,a}^{-1} \right)\left(\prod_{j:C_{ji}=-2}Y_{j,aq}^{-1}Y_{j,aq^{-1}}^{-1}\right)\left( \prod_{j:C_{ji}=-3}Y_{j,aq^{2}}^{-1}Y_{j,a}^{-1}Y_{j,aq^{-2}}^{-1}\right), \tag{2.1}\] where \(q_{i}=q^{d_{i}}\), \(i\in I\). There is a partial order \(\preccurlyeq\) on \(\mathcal{P}\) (cf. [50]) defined by \(m^{\prime}\preccurlyeq m\) if and only if \(mm^{\prime-1}\in\mathcal{Q}^{+}\). For \(i\in I\), \(a\in\mathbb{C}^{\times}\), \(k\in\mathbb{Z}_{\geq 1}\), the modules \[X^{(k)}_{i,a}\coloneqq L(Y_{i,a}Y_{i,aq^{2}\cdots}Y_{i,aq^{2k-2}})\] are called Kirillov-Reshetikhin modules. The modules \(X^{(1)}_{i,a}=L(Y_{i,a})\) are called fundamental modules. ### Hernandez-Leclerc's Category \(\mathcal{C}_{\ell}\) and Truncated \(q\)-characters We recall the definition of Hernandez-Leclerc's category \(\mathcal{C}_{\ell}\)[65]. For integers \(a\leq b\), we denote \([a,b]=\{i:a\leq i\leq b\}\) and \([a]=\{i:1\leq i\leq a\}\). Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and let \(\mathcal{C}\) be the category of finite-dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-modules. In [65], [67], Hernandez and Leclerc introduced a full subcategory \(\mathcal{C}_{\ell}=\mathcal{C}_{\ell}^{\mathfrak{g}}\) of \(\mathcal{C}\) for every \(\ell\in\mathbb{Z}_{\geq 0}\). Let \(I\) be the set of vertices of the Dynkin diagram of \(\mathfrak{g}\). We fix \(a\in\mathbb{C}^{*}\) and denote \(Y_{i,s}=Y_{i,aq^{*}}\), \(i\in I\), \(s\in\mathbb{Z}\). 
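In the convention \(Y_{i,s}=Y_{i,aq^{s}}\) just fixed, the monomial bookkeeping of (2.1) can be automated. The following minimal Python sketch (our own illustration) encodes monomials in the \(Y_{i,s}\) as dictionaries of exponents and, for \(\mathfrak{g}=\mathfrak{sl}_{3}\), reproduces the three \(l\)-weights \(Y_{1,0}\), \(Y_{1,2}^{-1}Y_{2,1}\), \(Y_{2,3}^{-1}\) of the fundamental module \(L(Y_{1,0})\) by successively multiplying by \(A_{1,1}^{-1}\) and \(A_{2,2}^{-1}\).

```python
def mul(m1, m2):
    # multiply two Laurent monomials in the Y's, stored as {(i, s): exponent}
    out = dict(m1)
    for key, e in m2.items():
        out[key] = out.get(key, 0) + e
        if out[key] == 0:
            del out[key]
    return out

rank = 2                                   # g = sl_3, Dynkin diagram 1 -- 2

def A_inv(i, s):
    # inverse of A_{i,s} = Y_{i,s+1} Y_{i,s-1} * prod_{j ~ i} Y_{j,s}^{-1}  (simply-laced case of (2.1))
    m = {(i, s + 1): -1, (i, s - 1): -1}
    for j in (i - 1, i + 1):
        if 1 <= j <= rank:
            m[(j, s)] = m.get((j, s), 0) + 1
    return m

m0 = {(1, 0): 1}                           # highest l-weight Y_{1,0}
m1 = mul(m0, A_inv(1, 1))                  # Y_{1,2}^{-1} Y_{2,1}
m2 = mul(m1, A_inv(2, 2))                  # Y_{2,3}^{-1}
print(m0, m1, m2)
assert m1 == {(1, 2): -1, (2, 1): 1} and m2 == {(2, 3): -1}
```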
For \(\ell\in\mathbb{Z}_{\geq 0}\), denote by \(\mathcal{P}_{\ell}\) the subgroup of \(\mathcal{P}\) generated by \(Y_{i,\xi(i)-2r}^{\pm 1}\), \(i\in I\), \(r\in[0,d\ell]\), where \(d\) is the maximal diagonal element in the diagonal matrix \(D\), and \(\xi:I\to\mathbb{Z}\) is a certain function called height function defined in Section 2.2 in [66] and Definition 4.1 in [55]. Denote by \(\mathcal{P}_{\ell}^{+}\) the submonoid of \(\mathcal{P}^{+}\) generated by \(Y_{i,\xi(i)-2r}\), \(i\in I\), \(r\in[0,d\ell]\). An object \(V\) in \(\mathcal{C}_{\ell}\) is a finite-dimensional \(U_{q}(\widehat{\mathfrak{g}})\)-module which satisfies the condition: for every composition factor \(S\) of \(V\), the highest \(l\)-weight of \(S\) is a monomial \(\mathcal{P}_{\ell}^{+}\), [65]. Simple modules in \(\mathcal{C}_{\ell}\) are of the form \(L(M)\) (see [28], [65]), where \(M\in\mathcal{P}_{\ell}^{+}\). **Remark 2.1**.: In [65], fundamental modules in \(\mathcal{C}_{\ell}\) are \(L(Y_{i,-\xi(i)-2r})\), \(i\in I\), \(r\in[0,\ell]\), see Definition 3.1 in [65]. In this paper, we slightly modify \(\mathcal{C}_{\ell}\) such that the fundamental modules in \(\mathcal{C}_{\ell}\) are \(L(Y_{i,\xi(i)\text{-}2r})\), \(i\in I\), \(r\in[0,d\ell]\), where \(d\) is the maximal diagonal element in the diagonal matrix \(D\). For a module \(V\) in \(\mathcal{C}_{\ell}\), the truncated \(q\)-character \(\widetilde{\chi}_{q}(V)\) is the Laurent polynomial obtained from the \(q\)-character \(\chi_{q}(V)\) defined in Section 2.1 by removing all monomials which have a factor \(Y_{i,s}\) or \(Y_{i,s}^{-1}\), where \(Y_{i,s}\) is not in \(\mathcal{P}_{\ell}^{+}\), see [66, 67]. Hernandez and Leclerc constructed a cluster algebra for every category \(\mathcal{C}_{\ell}\) of every quantum affine algebra \(U_{q}(\widehat{\mathfrak{g}})\)[65, 67]. The cluster algebra for \(\mathcal{C}_{\ell}\) of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\) is isomorphic to the cluster algebra for a certain quotient \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) (see Section 3.1) of the Grassmannian cluster algebra \(\mathbb{C}[\operatorname{Gr}(k,n)]\)[23, 65, 96], \(n=k+\ell+1\). ## 3. Grassmannian Cluster Algebras and Tropical Grassmannians In this section, we recall results of Grassmannian cluster algebras, [96, 23] and tropical Grassmannians [97, 99]. ### Grassmannian Cluster Algebras and Semistandard Young Tableaux For \(k\leq n\), the Grassmannian \(\operatorname{Gr}(k,n)\) is the set of \(k\)-dimensional subspaces in an \(n\)-dimensional vector space. In this paper, we denote by \(\operatorname{Gr}(k,n)\) (the affine cone over) the Grassmannian of \(k\)-dimensional subspaces in \(\mathbb{C}^{n}\), and denote by \(\mathbb{C}[\operatorname{Gr}(k,n)]\) its coordinate ring. This algebra is generated by Plucker coordinates \[p_{i_{1},\ldots,i_{k}},\quad 1\leq i_{1}<\cdots<i_{k}\leq n.\] In this paper, we use \(p_{J}\) for Plucker coordinates and \(P_{J}\) for its tropical version, where \(J\) is a \(k\)-element subset of \([n]\). It was shown by Scott [96] that the ring \(\mathbb{C}[\operatorname{Gr}(k,n)]\) has a cluster algebra structure. Define \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) to be the quotient of \(\mathbb{C}[\operatorname{Gr}(k,n)]\) by the ideal generated by \(P_{i,\ldots,i+k-1}-1\), \(i\in[n-k+1]\). In [23], it is shown that the elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) are in bijection with semistandard Young tableaux in \(\operatorname{SSYT}(k,[n],\sim)\). 
A semistandard Young tableau is a Young tableau with weakly increasing rows and strictly increasing columns. For \(k,n\in\mathbb{Z}_{\geq 1}\), we denote by \(\operatorname{SSYT}(k,[n])\) the set of rectangular semistandard Young tableaux with \(k\) rows and with entries in \([n]\) (with arbitrarly many columns). The empty tableau is denoted by \(\mathds{1}\). For \(S,T\in\operatorname{SSYT}(k,[n])\), let \(S\cup T\) be the row-increasing tableau whose \(i\)th row is the union of the \(i\)th rows of \(S\) and \(T\) (as multisets), [23]. It is shown in Section 3 in [23] that \(S\cup T\) is semistandard for any pair of semistandard tableaux \(S,T\). We call \(S\) a factor of \(T\), and write \(S\subset T\), if the \(i\)th row of \(S\) is contained in that of \(T\) (as multisets), for \(i\in[k]\). In this case, we define \(\frac{T}{S}=S^{-1}T=TS^{-1}\) to be the row-increasing tableau whose \(i\)th row is obtained by removing that of of \(S\) from that of \(T\) (as multisets), for \(i\in[k]\). A tableau \(T\in\operatorname{SSYT}(k,[n])\) is trivial if each entry of \(T\) is one less than the entry below it. For any \(T\in\operatorname{SSYT}(k,[n])\), we denote by \(T_{\operatorname{red}}\subset T\) the semistandard tableau obtained by removing a maximal trivial factor from \(T\). For a trivial \(T\), one has \(T_{\operatorname{red}}=\mathds{1}\). Let "\(\sim\)" be the equivalence relation on \(S,T\in\operatorname{SSYT}(k,[n])\) defined by: \(S\sim T\) if and only if \(S_{\operatorname{red}}=T_{\operatorname{red}}\). We denote by \(\operatorname{SSYT}(k,[n],\sim)\) the set of \(\sim\)-equivalence classes. The elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) are in bijection with simple modules in the category \(\mathcal{C}_{\ell}\) of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\) in Section 3.1, see [65, 23]. A one-column tableau is called a fundamental tableau if its content is \([i,i+k]\setminus\{r\}\) for \(r\in\{i+1,\ldots,i+k-1\}\). Any tableau in \(\operatorname{SSYT}(k,[n])\) is \(\sim\)-equivalent to a unique semistandard tableau whose columns are fundamental tableaux, see Lemma 3.13 in [23]. By [23, Theorem 5.8], for every \(T\in\operatorname{SSYT}(k,[n])\), the corresponding element \(\widehat{\operatorname{ch}}(T)\) in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n,\sim)]\) is given by \[\widetilde{\operatorname{ch}}(T)=\sum_{u\in S_{m}}(-1)^{\ell(uw_{T})}\mathbf{ p}_{uw_{0},w_{T}w_{0}}(1)p_{u;T^{\prime}}\in\mathbb{C}[\operatorname{Gr}(k,n, \sim)], \tag{3.1}\] where \(m\) is the number of columns of \(T^{\prime}\), \(T^{\prime}\) is the tableau whose columns are fundamental tableaux and such that \(T\sim T^{\prime}\), \(p_{u;T^{\prime}}\) is a certain monomial of Plucker coordinates, \(w_{T}\) is a certain permutation in \(S_{m}\), \(\mathbf{p}_{u,v}(q)\) is a Kazhdan-Lusztig polynomial, see [23]. Let \(T^{\prime\prime}=T^{\prime}T^{-1}\) and define \(\operatorname{ch}(T)=\frac{1}{p_{T^{\prime\prime}}}\widetilde{\operatorname{ ch}}(T)\), where \(p_{T^{\prime\prime}}=p_{T^{\prime\prime}}\cdots p_{T^{\prime\prime}{}_{r}}\), \(T^{\prime\prime}{}_{i}\)'s are columns of \(T^{\prime\prime}\). We also denote \(\operatorname{ch}_{T}=\operatorname{ch}(T)\). ### Relation between Dominant Monomials and Tableaux In Section 2.2, we recalled Hernandez and Leclerc's category \(\mathcal{C}_{\ell}\). 
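Before stating the correspondence, we note that the tableau operations of Section 3.1 are easy to experiment with. The following minimal Python sketch (our own illustration, with a tableau stored as its list of rows) checks on an example that the union \(S\cup T\) of two semistandard tableaux is again semistandard, as recalled above from [23].

```python
def union(S, T):
    # S cup T: row-wise multiset union of two tableaux with k rows, stored as lists of rows
    return [sorted(s + t) for s, t in zip(S, T)]

def is_semistandard(T):
    rows = all(r[c] <= r[c + 1] for r in T for c in range(len(r) - 1))
    cols = all(T[i][c] < T[i + 1][c] for i in range(len(T) - 1) for c in range(len(T[0])))
    return rows and cols

S = [[1, 2], [3, 5], [4, 6]]     # columns {1,3,4} and {2,5,6}
T = [[2], [3], [5]]              # the one-column tableau {2,3,5}
U = union(S, T)
print(U, is_semistandard(S), is_semistandard(T), is_semistandard(U))
# [[1, 2, 2], [3, 3, 5], [4, 5, 6]] True True True  -- the union is again semistandard
```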
It is shown in Theorem 3.17 in [23] that in the case of \(\mathfrak{g}=\mathfrak{sl}_{k}\), the monoid \(\mathcal{P}_{\ell}^{+}\) (we take the height function to be \(\xi(i)=i-2\), \(i\in[k-1]\)) of dominant monomials is isomorphic to the monoid of semistandard Young tableaux \(\operatorname{SSYT}(k,[n],\sim)\), \(n=k+\ell+1\). The correspondence of dominant monomials and tableaux is induced by the following map sending fundamental monomials to fundamental tableaux: \[Y_{i,s}\mapsto T_{i,s}, \tag{3.2}\] where \(T_{i,s}\) is a one-column tableau consisting of entries \(\frac{i-s}{2},\frac{i-s}{2}+1,\ldots,\frac{i-s}{2}+k-i-1,\frac{i-s}{2}+k-i+1, \ldots,\frac{i-s}{2}+k\). We denote the monomial corresponding to a tableau \(T\) by \(M_{T}\) and denote the tableau corresponding to a monomial \(M\) by \(T_{M}\). Note that by the definition of \(\mathcal{C}_{\ell}\) and the choice of the height function \(\xi(i)=i-2\), \(i\in[k-1]\), the indices of \(Y_{i,s}\) in the highest \(l\)-weight monomials of simple modules in \(\mathcal{C}_{\ell}\) satisfy \(i-s\pmod{2}=0\). When computing the monomial corresponding to a given tableau, we first decompose the tableau into a union of fundamental tableaux. Then we send each fundamental tableau to the corresponding fundamental monomial. For example, the tableaux \([[[1,2,4,6],[3,5,7,8]]\) (each list is a column of the tableau), \([[1,3,5,7],[2,4,6,8]]\) correspond to the modules \[L(Y_{2,-6}Y_{1,-3}Y_{3,-3}Y_{2,0}),\quad L(Y_{1,-7}Y_{2,-4}Y_{1,-5}Y_{3,-1}Y_{2,-2}Y_{3,1}),\] respectively. Recall that a simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\) is called prime if it is not isomorphic to \(L(M^{\prime})\otimes L(M^{\prime\prime})\) for any non-trivial modules \(L(M^{\prime})\), \(L(M^{\prime\prime})\)[31]. A simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\) is called real if \(L(M)\otimes L(M)\) is still simple [78]. We say that a tableau \(T\) is prime (resp. real) if the corresponding \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module \(L(M_{T})\) is prime (resp. real). The problem of classification of prime \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules in the category \(\mathcal{C}_{\ell}\) (\(\ell\geq 0\)) is equivalent to the problem of classification of prime tableaux in \(\mathrm{SSYT}(k,[n])\), \(n\geq k+1\), [23]. ### Matroid Subdivisions and Tropical Grassmannians We recall results of tropical Grassmannians [97, 99] and matroid subdivisions [22, 45]. For integers \(1\leq k\leq n-1\), the hypersimplex \(\Delta_{k,n}\)[56, 103] is the \(k\)th cross-section of a cube, \[\Delta_{k,n}=\left\{x\in[0,1]^{n}:\sum_{j=1}^{n}x_{j}=k\right\}.\] It is the convex hull of \(\binom{n}{k}\) vectors of the form \(e_{J}=\sum_{j\in J}e_{j}\) for \(J\in\binom{[n]}{k}\), where \(\binom{[n]}{k}\) is the set of \(k\)-element subsets of \([n]\). The following constructions are standard in convex and combinatorial geometry, [40, 58, 62, 77, 98]. **Definition 3.1** ([40, 58, 77]).: A matroid polytope \(P\) is a subpolytope of a hypersimplex \(\Delta_{k,n}\), all of whose edges are edges of \(\Delta_{k,n}\). A matroid subdivision (also called matroidal subdivision) of \(\Delta_{k,n}\) is a polytopal subdivision \((P_{1},\ldots,P_{t})\) of \(\Delta_{k,n}\) such that each \(P_{1},\ldots,P_{t}\) is a matroid polytope. A matroid subdivision \((P_{1},\ldots,P_{t})\) is called _regular_ if there exists a piecewise linear function on \(\Delta_{k,n}\) whose regions of linearity are exactly the polytopes \(P_{i}\). 
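Returning to the correspondence (3.2) of Section 3.2, it is completely algorithmic. The following Python sketch (our own illustration; it inlines the row-wise union and the removal of trivial factors) recovers the first tableau of the example in Section 3.2 from its highest \(l\)-weight monomial.

```python
def tableau_of_monomial(pairs, k):
    # formula (3.2): Y_{i,s} -> one-column tableau with entries [j, j+k] \ {j+k-i}, j = (i-s)/2
    cols = [[j + t for t in range(k + 1) if t != k - i]
            for i, s in pairs for j in [(i - s) // 2]]
    rows = [sorted(c[r] for c in cols) for r in range(k)]        # row-wise union of the columns
    stripped = True
    while stripped:                                              # reduce modulo ~ : remove trivial
        stripped = False                                         # factors a, a+1, ..., a+k-1
        for a in sorted(set(rows[0])):
            if all(a + r in rows[r] for r in range(k)):
                for r in range(k):
                    rows[r].remove(a + r)
                stripped = True
    return rows

# L(Y_{2,-6} Y_{1,-3} Y_{3,-3} Y_{2,0}) for g = sl_4, i.e. k = 4:
rows = tableau_of_monomial([(2, -6), (1, -3), (3, -3), (2, 0)], k=4)
print(rows)
assert rows == [[1, 3], [2, 5], [4, 7], [6, 8]]    # the rows of the tableau [[1,2,4,6],[3,5,7,8]]
```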
A matroid polytope such that the defining inequalities are of the form \(x_{i}+x_{i+1}+\cdots+x_{j}\geq r_{ij}\) for some integers \(r_{ij}\) with \(\{i,i+1,\ldots,j\}\) a cyclic interval, is called a positroid polytope, see Section 2 in [83], Proposition 5.6 in [9], and Section 2.2 in [43], where the indices are assumed to be cyclic modulo \(n\). A positroid subdivision of \(\Delta_{k,n}\) is a matroid subdivision \((P_{1},\ldots,P_{t})\) of \(\Delta_{k,n}\) such that each \(P_{i}\) is a positroid polytope, see Section 2 in [83], Section 5 in [9], and Section 2.2 in [43]. Let \(\mathcal{P}=(P_{1},\ldots,P_{t})\) and \(\mathcal{P}^{\prime}=(P^{\prime}_{1},\ldots,P^{\prime}_{t})\) be two two matroid subdivisions of \(\Delta_{k,n}\). It is said that \(\mathcal{P}\) refines \(\mathcal{P}^{\prime}\) if every maximal cell \(P^{\prime}_{i}\) of \(\mathcal{P}^{\prime}\) is a union of maximal cells of \(\mathcal{P}\), and it is said that \(\mathcal{P}\) coarsens \(\mathcal{P}^{\prime}\) if every maximal cell \(P_{i}\) of \(\mathcal{P}\) is a union of maximal cells of \(\mathcal{P}^{\prime}\), see Definition 2.3.8 in [40]. The tropical Grassmannian \(\operatorname{Trop}G(k,n)\), introduced in [97], parametrizes realizable tropical linear spaces; it is the tropical variety of the Plucker ideal of the Grassmannian \(\operatorname{Gr}(k,n)\). For general \((k,n)\) the Plucker ideal contains higher degree generators and to calculate \(\operatorname{Trop}G(k,n)\) quickly becomes a completely intractable problem, but for \(k=2\), \(\operatorname{Trop}G(2,n)\) is completely characterized by the tropicalization of the \(3\)-term tropical Plucker relations. We present the full definition and then immediately specialize to the so-called positive tropical Grassmannians. **Definition 3.2** ([97]).: Given \(e=(e_{1},\ldots,e_{N})\in\mathbb{Z}_{\geq 0}^{N}\), denote \(\mathbf{x}^{e}=x_{1}^{e_{1}}\ldots x_{N}^{e_{N}}\). Let \(E\subset\mathbb{Z}_{\geq 0}^{N}\). If \(f=\sum_{ee\in E}f_{e}\mathbf{x}^{e}\) is nonzero, denote by \(\operatorname{Trop}\left(f\right)\) the set of all points \((X_{1},\ldots,X_{N})\) such that for the collection of numbers \(\sum_{i=1}^{N}e_{i}X_{i}\) for \(e\) ranging over \(E\), the minimum of the collection is achieved at least twice. We say that \(\operatorname{Trop}\left(f\right)\) is the tropical hypersurface associated to \(f\). The _tropical Grassmannian_\(\operatorname{Trop}G(k,n)\) is the intersection of all tropical hypersurfaces \(\operatorname{Trop}\left(f\right)\) where \(f\) ranges over all elements in the Plucker ideal. On the other hand, in [99], Speyer-Williams introduced the _positive_ tropical Grassmannian \(\operatorname{Trop}^{+}G(k,n)\), which was later shown independently in [8, 100] to be equal to the positive Dressian, which is characterized by the \(3\)-term tropical Plucker relations, \[\pi_{Lac}+\pi_{Lbd}=\min\{\pi_{Lab}+\pi_{Lcd},\pi_{Lad}+\pi_{Lbc}\},\] for each pair \(\left(L,\{a,b,c,d\}\right)\in\binom{[n]}{k-2}\times\binom{[n]\setminus L}{4}\) with \(a<b<c<d\). Generalized positive roots were defined in [22] and developed in depth in [45] in the context of root polytopes and CEGM scattering amplitudes [21]. We now recall the definition of generalized positive roots1. 
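For \(k=2\) (so that \(L=\emptyset\)) the positive relations are easy to test numerically: on the web-matrix parametrization recalled in Section 3.3 the nonfrozen Plucker coordinates are consecutive sums, \(p_{i,j}=x_{1,i-1}+\cdots+x_{1,j-2}\) and \(p_{1,j}=1\) (cf. the \(\operatorname{Gr}(2,5)\) evaluations in Section 4.2), so their tropicalizations satisfy the \(3\)-term relations exactly. A minimal Python sketch of this check (our own illustration):

```python
import random
from itertools import combinations

n = 9
y = [random.uniform(-5.0, 5.0) for _ in range(n - 2)]   # tropical web coordinates y_{1,1},...,y_{1,n-2}

def P(i, j):
    # Trop(p_{i,j}) on the k = 2 web matrix: p_{1,j} = 1 and p_{i,j} = x_{1,i-1} + ... + x_{1,j-2}
    return 0.0 if i == 1 else min(y[i - 2:j - 2])

for a, b, c, d in combinations(range(1, n + 1), 4):
    lhs = P(a, c) + P(b, d)
    rhs = min(P(a, b) + P(c, d), P(a, d) + P(b, c))
    assert abs(lhs - rhs) < 1e-12
print("positive tropical Pluecker relations verified for all quadruples, n =", n)
```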
Footnote 1: The name originates from [45, Theorem 4.3], according to which the their convex hull, the generalized root polytope \(\mathcal{R}_{n-k}^{(k)}\), admits a flag-unimodular triangulation which specializes to that the triangulation of the type \(A\) root polytope of Gelfand-Graev-Postnikov in the context of hypergeometric systems [57]. We use \(\binom{[n]}{k}^{nf}\) to denote the set of \(k\)-element subsets of \([n]\) which are nonfrozen, i.e., not of the form \([i,i+k-1]\) up to cyclic shifts. **Definition 3.3** ([22]).: Given any \(J=\{j_{1}<\cdots<j_{k}\}\in\binom{[n]}{k}^{nf}\), the _generalized positive root_\(\gamma_{J}\) is the linear function on the space \(\mathbb{T}^{k-1,n-k}\): \[\gamma_{J}=\sum_{t=j_{1}}^{j_{2}-2}\alpha_{1,t}+\sum_{t=j_{2}-1}^{j_{3}-3} \alpha_{2,t}+\cdots+\sum_{t=j_{k-1}-(k-2)}^{j_{k}-k}\alpha_{k-1,t}. \tag{3.3}\] When there is no risk of confusion we also call the vector \(v_{J}\) dual to \(\gamma\) a generalized positive root: \[v_{J}=\sum_{t=j_{1}}^{j_{2}-2}e_{1,t}+\sum_{t=j_{2}-1}^{j_{3}-3}e_{2,t}+ \cdots+\sum_{t=j_{k-1}-(k-2)}^{j_{k}-k}e_{k-1,t}.\] Denote \[X(k,n)=\left\{g\in\operatorname{Gr}(k,n):\prod_{J}p_{J}(g)\neq 0\right\}/( \mathbb{C}^{*})^{n}.\] We construct an embedding \[\left(\mathbb{CP}^{n-k-1}\right)^{\times(k-1)}\hookrightarrow X(k,n)\] of a Cartesian product of projective spaces into \(X(k,n)\). Define a \((k-1)\times(n-k-1)\) polynomial-valued matrix \(M_{k,n}\) with entries \(m_{i,j}(x)\), with \((i,j)\in[1,k-1]\times[1,n-k]\), defined by \[m_{i,j}(\{x_{a,b}:(a,b)\in[i,k-1]\times[1,j]\}) = (-1)^{k+i}\sum_{1\leq b_{i}\leq b_{i+1}\leq\cdots\leq b_{k-1}\leq j }x_{i,b_{i}}x_{i+1,b_{i+1}}\cdots x_{k-1,b_{k-1}}.\] For the embedding \((\mathbb{CP}^{n-k-1})^{\times k-1}\hookrightarrow X(k,n)\), we construct a \(k\times(n-k)\) matrix \(M\) (called web matrix [99]) with \(M_{k,n}\) as its upper right block: \[M = \begin{bmatrix}1&&0&m_{1,1}&\cdots&m_{1,n-k}\\ &\ddots&&\vdots&\ddots&\vdots\\ &&1&&m_{k-1,1}&&m_{k-1,n-k}\\ 0&&1&1&\cdots&1\end{bmatrix}\!. \tag{3.4}\] For instance, for rank \(k=3\) we have \[M_{3,6} = \begin{bmatrix}x_{1,1}x_{2,1}&x_{1,1}x_{2,12}+x_{1,2}x_{2,2}&x_ {1,1}x_{2,123}+x_{1,2}x_{2,23}+x_{1,3}x_{2,3}\\ -x_{2,1}&-x_{2,12}&&-x_{2,123}\end{bmatrix}\] and for the embedding we have \[M=\begin{bmatrix}1&0&0&x_{1,1}x_{2,1}&x_{1,1}x_{2,12}+x_{1,2}x_ {2,2}&x_{1,1}x_{2,123}+x_{1,2}x_{2,23}+x_{1,3}x_{2,3}\\ 0&1&0&-x_{2,1}&&-x_{2,12}&&-x_{2,123}\\ 0&0&1&1&1&1\end{bmatrix}\!.\] Here we abbreviate for example \(x_{i,23}=x_{i,2}+x_{i,3}\). For any \(k\)-subset \(J\) of \([n]\), define \(e_{J}=\sum_{j\in J}e_{j}\). Further denote by \(\{e^{J}:J\in\binom{[n]}{k}\}\) the standard basis of \(\mathbb{R}^{\binom{n}{k}}\). **Definition 3.4** ([84]).: A pair of \(k\)-element subsets \(I,J\) is said to be _weakly separated_ if the difference of indicator vectors \(e_{I}-e_{J}\) alternates sign at most twice, that is one does not have the pattern \(e_{a}-e_{b}+e_{c}-e_{d}\) for \(a<b<c<d\), up to cyclic rotation. **Definition 3.5** ([91]).: A pair \(I=\{i_{1}<\ldots<i_{k}\}\), \(J=\{j_{1}<\ldots<j_{k}\}\) of \(k\)-subsets of \([n]\) is said to be noncrossing if for each \(1\leq a<b\leq k\), either the pair \(\{i_{a},i_{a+1},\ldots,i_{b}\}\), \(\{j_{a},j_{a+1},\ldots,j_{b}\}\) is weakly separated, or \(\{i_{a+1},\ldots,i_{b-1}\}\neq\{j_{a+1},\ldots,j_{b-1}\}\). **Remark 3.6**.: Here it is easy to see that Definition 3.4 is a slight reformulation of the original construction given in [84]. 
Similarly, Definition 3.5 is clearly equivalent to the one given in [91]. Denote by \(\mathbf{NC}_{k,n}\) the poset of all collections of pairwise noncrossing nonfrozen \(k\)-element subsets of \([n]\), ordered by inclusion. Denote \(\mathbb{T}^{k-1,n-k}=(\mathbb{T}^{n-k})^{\times(k-1)}\), where \(\mathbb{T}^{n-k}=\mathbb{R}^{n-k}/\mathbb{R}(1,\ldots,1)\). ## 4. Newton Polytopes and Tropical Fans for Grassmannian Cluster Algebras In this section, we define Newton polytopes and tropical fans for Grassmannian cluster algebras. ### Newton polytopes for Grassmannian cluster algebras In what follows, we give a recursive construction of a collection of Newton polytopes \(\mathbf{N}_{k,n}^{(d)}\) (\(d\in\mathbb{Z}_{\geq 0}\)), starting from the Planar Kinematics (PK) polytope \(\Pi_{k,n}\)[22], see also [45], which is equal to \(\mathbf{N}_{k,n}^{(0)}\) in the present notation. For any tableau \(T\in\operatorname{SSYT}(k,[n])\), we evaluate \(\operatorname{ch}_{T}\) on the web matrix \(M\) in (3.4) and obtain a polynomial in \(x_{i,j}\)-coordinates. Note that there is a monomial transformation relating the \(x_{i,j}\) coordinates to the so-called \(X\)-coordinates [52] in cluster algebras: \(X_{ij}=\frac{x_{k-i,j}}{x_{k-i,j+1}}\). For any tableaux \(T\) with columns \(T_{1},\ldots,T_{r}\), define \[\gamma_{T}=\gamma_{T_{1}}+\cdots+\gamma_{T_{r}},\quad v_{T}=v_{T_{1}}+\cdots+ v_{T_{r}}, \tag{4.1}\] where \(\gamma_{T_{i}}=\gamma_{J_{i}}\) and \(v_{T_{i}}=v_{J_{i}}\) are defined in Section 3.3 and \(J_{i}\) is the sorted content of the one-column tableau \(T_{i}\). **Definition 4.1**.: Let \(\mathcal{T}_{k,n}^{(0)}\) be the set of all one-column tableaux which are obtained by cyclic shifts of the one-column tableau with entries \(1,2,\ldots,k-1,k+1\). For \(d\geq 0\), we define recursively \[\mathbf{N}_{k,n}^{(d)}=\operatorname{Newt}\left(\prod_{T\in\mathcal{T}_{k,n}^ {(d)}}\operatorname{ch}_{T}(x_{i,j})\right),\] where \(\mathcal{T}_{k,n}^{(d+1)}\) is the set of all tableaux which correspond to facets of \(\mathbf{N}_{k,n}^{(d)}\). More precisely, \[\mathcal{T}_{k,n}^{(d+1)}=\left\{T:\gamma_{T}\text{ is minimized on a facet of }\mathbf{N}_{k,n}^{(d)}\right\},\] see Section 6 for details on computing \(\mathcal{T}_{k,n}^{(d)}\). In particular, the so-called _Planar Kinematics_ (PK) polytope [22], denoted there \[\Pi_{k,n}=\mathrm{Newt}\left(\prod_{j=1}^{n}\frac{p_{j,j+1,\ldots,j+k-2,k}}{p_{j, j+1,\ldots,j+k-2,k-1}}\right),\] is the same as \(\mathbf{N}_{k,n}^{(0)}\) noting that when we evaluated on the matrix \(M\) all denominators are monomials in the \(x_{i,j}\) coordinates. For example, evaluating on the matrix \(M\) we have \[\mathbf{N}_{2,6}^{(0)}=\frac{x_{1,12}x_{1,23}x_{1,34}x_{1,1234}}{x_{1,1}x_{1, 2}x_{1,3}x_{1,4}}\] and \[\mathbf{N}_{3,6}^{(0)} = \frac{\left(x_{1,1}x_{2,1}+x_{1,1}x_{2,2}+x_{1,2}x_{2,2}\right) \left(x_{1,2}x_{2,2}+x_{1,2}x_{2,3}+x_{1,3}x_{2,3}\right)\left(x_{1,123} \right)\left(x_{2,123}\right)}{x_{1,1}x_{1,2}x_{1,3}x_{2,1}x_{2,2}x_{2,3}},\] where we abbreviate for example \(x_{1,123}=x_{1,1}+x_{1,2}+x_{1,3}\). On the other hand, the polytope \(\mathbf{N}_{k,n}^{(1)}\), which is closely related2 to the positive tropical Grassmannian, is given by Footnote 2: In particular, the normal fan of \(\mathbf{N}_{k,n}^{(1)}\) has the following property: its cones are in bijection with the cones in the positive tropical Grassmannian \(\mathrm{Trop}^{+}G(k,n)\). 
\[\mathbf{N}_{k,n}^{(1)}=\mathrm{Newt}\left(\prod_{J\in\binom{[n]}{k}}p_{J}\right).\] **Remark 4.2**.: We are concerned with the facets of polytopes \(\mathbf{N}_{k,n}^{(0)},\mathbf{N}_{k,n}^{(1)},\ldots,\mathbf{N}_{k,n}^{(d)}\). Motivated in part by work of Arkani-Hamed, Frost, Plamondon, Salvatori, and Thomas, [2] on polyhedra modeled on punctured surfaces which they call surfacehedra, having infinitely many Minkowski summands, ultimately (\(d\to\infty\)) we are interested in a new object (denoted by \(\mathbf{N}_{k,n}^{(\infty)}\)) which again has infinitely many Minkowski summands (and infinitely many facets). In our proposal, these Minkowski summands are by construction in bijection with prime tableaux in \(\mathrm{SSYT}(k,[n])\) (equivalently prime modules of the quantum affine algebra in the category \(\mathcal{C}_{\ell}\)). We define another version of Newton polytopes non-recursively. For \(k\leq n\) and \(r\in\mathbb{Z}_{\geq 1}\), denote by \(\mathrm{SSYT}_{k,n}^{r}\) the set of all tableaux in \(\mathrm{SSYT}(k,[n])\) with \(r\) or less columns. **Definition 4.3**.: For \(k\leq n\) and \(d\in\mathbb{Z}_{\geq 1}\), we define \[\mathbf{N^{\prime}}_{k,n}^{(d)}=\mathrm{Newt}\left(\prod_{T\in\mathrm{SSYT}_{ k,n}^{d}}\mathrm{ch}_{T}(x_{i,j})\right).\] ### Tropical Fans for Grassmannian Cluster Algebras Recall that given a polytope \(P\) in a real vector space \(V\), its normal fan \(\mathcal{N}(P)\) is the polyhedral complex on the dual space \(V^{*}\), (closed) faces consist of all linear functionals minimized on a given face of \(P\). In the following, we describe the normal fan of the Newton polytope \(\mathbf{N}_{k,n}^{(d)}\) defined in Section 4.1. The evaluation of \(\operatorname{ch}_{T}=\operatorname{ch}(T)\) on the web matrix \(M\)[99] (see Section 3.3), we obtain a subtraction free polynomial in the \(x_{i,j}\) coordinates. For example, for \(\operatorname{Gr}(2,5)\), we have that \[p_{1,2}(M) =p_{1,3}=p_{1,4}=p_{1,5}=1,\ p_{2,3}(M)=x_{1,1},\ p_{3,4}(M)=x_{1, 2},\ p_{4,5}(M)=x_{1,3},\] \[p_{2,4}(M) =x_{1,1}+x_{1,2},\ p_{2,5}(M)=x_{1,1}+x_{1,2}+x_{1,3},\ p_{3,5}(M)= x_{1,2}+x_{1,3}.\] Recall that we denote by \(\mathcal{T}_{k,n}^{(0)}\) the set of all tableaux obtained by cyclic shifts of the one-column tableau with entries \(1,2,\ldots,k-1,k+1\). By tropicalizing all \(\operatorname{ch}_{T}(M)\), \(T\in\mathcal{T}_{k,n}^{(0)}\), we obtain piecewise linear functions in the space of dimension \((k-1)(n-k-1)\) parametrized by \(y_{i,j}\) (\(y_{i,j}\) is the tropical version of \(x_{i,j}\)). Such a function is linear on a collection of cones; these cones assemble to define a polyhedral fan. The common refinement of these fans is the normal fan \(\mathcal{N}(\mathbf{N}_{k,n}^{(0)})\) of the Newton polytope \(\mathbf{N}_{k,n}^{(0)}\). By [45, Corollary 10.5], the set of rays of \(\mathcal{N}(\mathbf{N}_{k,n}^{(0)})\) is given by \[\left\{\operatorname{Ray}(v_{J}):J\in\binom{[n]}{k}^{nf}\right\},\] where \(\operatorname{Ray}(v)=\{cv:c\geq 0\}\) is the ray in the direction of a vector \(v\). For \(d\geq 1\), the normal fan \(\mathcal{N}(\mathbf{N}_{k,n}^{(d)})\) can be constructed from \(\mathcal{N}(\mathbf{N}_{k,n}^{(d-1)})\) as follows. 
Let \(\mathcal{T}_{k,n}^{(d)}\) be the set of all tableaux corresponding to rays of \(\mathcal{N}(\mathbf{N}_{k,n}^{(d-1)})\), that is \[\mathcal{T}_{k,n}^{(d)}=\left\{T:\operatorname{Ray}(v_{T})\text{ is a ray of }\mathcal{N}(\mathbf{N}_{k,n}^{(d-1)})\right\}.\] Here \(v_{T}\) is defined in Equation (4.1) and the construction of tableaux from rays is given in Section 6. Indeed, in Section 6 we construct tableaux from facets of Newton polytopes. The construction of tableaux from rays of normal fans is the same. **Remark 4.4**.: Tropical fans for Grassmannian cluster algebras have been defined in [34, 13], by tropical evaluations of finite sets of cluster variables. The tropical fans \(\mathcal{N}(\mathbf{N}_{k,n}^{(d)})\) defined here use not only cluster variables but also other prime elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\). ### Relation with positive tropical Grassmannians Recall that we use \(\binom{[n]}{k}^{nf}\) to denote the set of \(k\)-element subsets of \([n]\) which are nonfrozen, i.e., not of the form \([i,i+k-1]\) up to cyclic shifts, and recall that \(\{e^{J}:J\in\binom{[n]}{k}\}\) is the standard basis of \(\mathbb{R}^{\binom{n}{k}}\). For each \(J\in\binom{[n]}{k}^{nf}\), recall that \(\mathfrak{h}_{J}\in\mathbb{R}^{\binom{[n]}{k}}\)[45, 46] is defined by \[\mathfrak{h}_{J}=-\frac{1}{n}\sum_{I\in\binom{[n]}{k}}\min\left\{L_{1}(e_{J}-e _{I}),L_{2}(e_{J}-e_{I}),\ldots,L_{n}(e_{J}-e_{I})\right\}e^{I}, \tag{4.2}\] where \[L_{j}(x)=x_{j+1}+2x_{j+2}+\cdots+(n-1)x_{j-1}.\] The _lineality subspace_\(\mathrm{Lin}_{k,n}\) of \(\mathbb{R}^{\binom{n}{k}}\) is defined by \[\mathrm{Lin}_{k,n}=\mathrm{span}\left\{\sum_{J\in\binom{[n]}{k},J_{3}j}e^{J}, \ j=1,\ldots,n\right\},\] see [46, Definition 2.1]. Clearly, \(\dim(\mathrm{Lin}_{k,n})=n\). For each \(J=\{j_{1},\ldots,j_{k}\}\in\binom{[n]}{k}^{nf}\), define a cubical collection of \(k\)-element subsets of \(\{1,\ldots,n\}\) by \[\mathcal{U}(J)=\left\{\{j_{1}+t_{1},\ldots,j_{k}+t_{k}\}:t_{i}\in\{0,1\},\ \text{and}\ t_{i}=0\ \text{whenever}\ j_{i}+1\in J\right\},\] where addition is modulo \(n\), see [45]. Denote by \(\omega_{J}(y)\)[45] the tropical planar cross-ratio \[\omega_{J}=\sum_{J^{\prime}\in\mathcal{U}(J)}(-1)^{k-\#(J^{\prime}\cap J)+1}P _{J^{\prime}}(y),\] where \(P_{J^{\prime}}(y)=\mathrm{Trop}(p_{J^{\prime}})(y)\) is the tropicalization of the Plucker coordinate \(p_{J^{\prime}}(x)\), evaluated on the web matrix \(M=(x_{i,j})_{k\times n}\) in Section 3.3. Denote by \(\mathcal{F}_{n}^{(k)}:\mathbb{R}^{(k-1)\times(n-k)}\to\mathbb{R}^{\binom{n}{k }}_{k}/\mathrm{Lin}_{k,n}\) the map \[\mathcal{F}_{n}^{(k)}(y) = \sum_{J\in\binom{[n]}{k}^{nf}}\omega_{J}(y)\mathfrak{h}_{J}. \tag{4.3}\] The normal fan \(\mathcal{N}(\mathbf{N}^{(1)}_{k,n})\) defined in Section 4.2 has been shown [8, Proposition 11.5] to satisfy the following property: its cones are in bijection with the cones in the positive tropical Grassmannian \(\mathrm{Trop}^{+}G(k,n)\), as defined by Speyer and Williams [99]. In particular, this bijection is achieved via the piecewise-linear map \(\mathcal{F}_{n}^{(k)}\), which is equal (modulo a change in parameterization) to the map \(\mathrm{Trop}(\Phi_{2})\) defined in [99, Section 4], see also [100]. ## 5. Semistandard Young Tableaux and Generalized Root Polytopes In this section, we study relation between semistandard tableaux and generalized root polytopes. 
### Isomorphism of Monoids

Recall that the set \(\operatorname{SSYT}(k,[n],\sim)\) of all \(\sim\)-equivalence classes of semistandard Young tableaux of rectangular shape with \(k\) rows and with entries in \([n]\) forms a monoid under the multiplication "\(\cup\)" [23], see Section 3.1. This monoid is isomorphic to the monoid \(\mathcal{P}_{\ell}^{+}\) of dominant monomials in \(\mathcal{C}_{\ell}^{\mathfrak{sl}_{k}}\), \(n=k+\ell+1\). The set \(\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)}\) forms a monoid \((\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)},+)\) generated by \(e_{i,j}\), \(i\in[k-1]\), \(j\in[n-k]\), where the \(e_{i,j}\)'s are the standard basis vectors of \(\mathbb{R}^{(k-1)\times(n-k)}\).

**Lemma 5.1**.: _We have an isomorphism of monoids_
\[(\operatorname{SSYT}(k,[n],\sim),\cup)\to(\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)},+).\]

Proof.: For \(i\in[k-1]\), \(j\in[n-k]\), denote by \(T_{i,j}\) the fundamental tableau with entries \([j,j+k]\smallsetminus\{i+j\}\). By Lemma 3.13 in [23], every tableau in \(\operatorname{SSYT}(k,[n],\sim)\) is \(\sim\)-equivalent to the union of a set of fundamental tableaux. The isomorphism \((\operatorname{SSYT}(k,[n],\sim),\cup)\to(\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)},+)\) is induced by \(T_{i,j}\mapsto e_{i,j}\). The inverse isomorphism is given as follows. Every element \(v\) in \((\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)},+)\) can be written as \(v=\sum_{i,j}c_{i,j}e_{i,j}\) for some non-negative integers \(c_{i,j}\). Let \(T_{v}=\cup_{i,j}T_{i,j}^{\cup c_{i,j}}\). The inverse isomorphism is given by \(v\mapsto T_{v}\).

We denote by \(T_{v}\) the tableau in \(\operatorname{SSYT}(k,[n],\sim)\) corresponding to \(v\in\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)}\) and denote by \(v_{T}\) the element in \(\mathbb{Z}_{\geq 0}^{(k-1)\times(n-k)}\) corresponding to \(T\in\operatorname{SSYT}(k,[n],\sim)\).

### Generalized Root Polytopes

For any collection \(\mathcal{J}=\{J_{1},\ldots,J_{m}\}\in\mathbf{NC}_{k,n}\) of nonfrozen subsets \(J_{i}\), define
\[[\mathcal{J}] = \text{Convex hull}(\{0,v_{J_{1}},\ldots,v_{J_{m}}\}).\]
The generalized root polytope \(\mathcal{R}_{n-k}^{(k)}\) is the convex hull of all generalized positive roots \(v_{J}\),
\[\mathcal{R}_{n-k}^{(k)}=\text{Convex hull}\left(\left\{v_{J}\in\mathbb{T}^{k-1,n-k}:J\in\binom{[n]}{k}^{nf}\right\}\right),\]
where we remind that \(\mathbb{T}^{k-1,n-k}=(\mathbb{T}^{n-k})^{\times(k-1)}\) and \(\mathbb{T}^{n-k}=\mathbb{R}^{n-k}/\mathbb{R}(1,\ldots,1)\).

**Theorem 5.2** ([45, Theorem 1.2]).: _The set of simplices \(\{[\mathcal{J}]:\mathcal{J}\in\mathbf{NC}_{k,n}\}\) defines a flag, unimodular triangulation of \(\mathcal{R}_{n-k}^{(k)}\): simplices in the triangulation have equal volume and are in bijection with pairwise noncrossing collections of nonfrozen \(k\)-element subsets. In particular, the set of cones_
\[\mathcal{C}_{\mathcal{J}}=\left\{\sum_{J\in\mathcal{J}}c_{J}v_{J}:c_{J}>0\right\}\]
_assembles to define a complete simplicial fan in \(\mathbb{T}^{k-1,n-k}\), and any point in \(\mathbb{T}^{k-1,n-k}\) lies in the relative interior of a unique cone in the fan._

Now under the isomorphism in Lemma 5.1, one-column tableaux correspond to generalized positive roots; thus, Theorem 5.2 says that any linear combination of generalized positive roots with real coefficients decomposes uniquely as a linear combination, with positive coefficients, indexed by a pairwise noncrossing collection.
This means that if we restrict to integer coefficients then the triangulation has a beautiful representation-theoretic interpretation in terms of tableaux! In the next subsection we give a proof of this result for integer coefficients \(c_{J}\) using the representation theory of quantum affine algebras, and study the relation between semistandard Young tableaux and noncrossing tuples.

**Example 5.3**.: Let
\[v=-v_{1,5,9}+2v_{2,6,10}+3v_{3,7,11}+4v_{4,8,12}.\]
Then \(v\) has the following noncrossing decomposition using Theorem 5.2:
\[v = v_{1,2,6}+v_{2,8,10}+v_{2,9,10}+3v_{3,8,10}+2v_{4,6,10}+3v_{4,7,10}+v_{7,8,10}.\]

### Semistandard Young Tableaux and Noncrossing Tuples

First we consider the case of \(k=2\).

**Lemma 5.4**.: _For every tableau \(T\in\operatorname{SSYT}(2,[n])\), there is a unique unordered \(m\)-tuple \((S_{1},\ldots,S_{m})\) of one-column tableaux which are pairwise noncrossing such that \(T=S_{1}\cup\cdots\cup S_{m}\)._

Proof.: First note that two one-column tableaux with \(2\) rows and entries \(\{a,b\}\), \(\{c,d\}\) are noncrossing if and only if they are weakly separated. If \(b=a+1\), then the one-column tableau with entries \(\{a,b\}\) is frozen and it is weakly separated from any \(2\)-row one-column tableau. Let \(T\in\operatorname{SSYT}(2,[n])\) and let \(T^{\prime}\) be the tableau obtained from \(T\) by removing all frozen factors, that is, all one-column factors with entries of the form \(\{a,a+1\}\). Denote these frozen factors by \(T^{\prime\prime}_{1},\ldots,T^{\prime\prime}_{t}\). By Theorem 1.1 in [23], \(T^{\prime}\) corresponds to a simple \(U_{q}(\widehat{\mathfrak{sl}_{2}})\)-module \(L(M_{T})=L(M_{T^{\prime}})\). By Sections 4.8, 4.9, 4.11 in [27], every prime \(U_{q}(\widehat{\mathfrak{sl}_{2}})\)-module is a Kirillov-Reshetikhin module and every simple \(U_{q}(\widehat{\mathfrak{sl}_{2}})\)-module decomposes as a tensor product of Kirillov-Reshetikhin modules (note that evaluation modules of \(U_{q}(\widehat{\mathfrak{sl}_{2}})\) are Kirillov-Reshetikhin modules). Therefore
\[\chi_{q}(L(M_{T}))=\chi_{q}(L(M_{1}))\cdots\chi_{q}(L(M_{r})) \tag{5.1}\]
for some Kirillov-Reshetikhin modules \(L(M_{1}),\ldots,L(M_{r})\). Every Kirillov-Reshetikhin module corresponds to a one-column tableau, see Section 3.3 in [23]. Let \(T_{M_{1}},\ldots,T_{M_{r}}\) be the one-column tableaux corresponding to \(L(M_{1}),\ldots,L(M_{r})\) respectively. By Equation (5.1), we have that for any \(i,j\), \(L(M_{i})\otimes L(M_{j})\) is simple. Hence by Theorem 1.1 in [84], \(T_{M_{i}}\) and \(T_{M_{j}}\) are weakly separated. Therefore \(T=T_{M_{1}}\cup\cdots\cup T_{M_{r}}\cup T^{\prime\prime}_{1}\cup\cdots\cup T^{\prime\prime}_{t}\) and any two one-column tableaux in the \(\cup\)-product are weakly separated.

**Lemma 5.6**.: _For every \(2\)-column tableau \(T\in\operatorname{SSYT}(k,[n])\), there is a unique unordered pair \((T_{1},T_{2})\) of one-column tableaux which are noncrossing such that \(T=T_{1}\cup T_{2}\)._

**Example 5.7**.: Let \(T\) be the \(2\)-column tableau with rows \(\{1,2\}\), \(\{3,4\}\), \(\{5,6\}\), \(\{7,8\}\). The unique noncrossing pair of one-column tableaux corresponding to \(T\) consists of the columns with entries \(\{1,4,5,8\}\) and \(\{2,3,6,7\}\).

## 6. From Facets to Prime Modules

In this section, we describe a procedure to produce a simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module from every facet of the Newton polytope defined in Section 4 and we conjecture that the obtained simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module is prime.
### A Procedure to Produce a Simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module from a Given Facet Adapting the results of [23] (see Section 3.2), it suffices to give a procedure to produce a semistandard Young tableau from a given facet. The Newton polytope \(\mathbf{N}_{k,n}^{(d)}\) defined in Section 4 is described using certain equations and inequalities in its H-representation (represent the polytope by an intersection half-spaces and hyperplanes). Let \(F\) be a facet of the Newton polytope \(\mathbf{N}_{k,n}^{(d)}\). The normal vector \(v_{F}\) of \(F\) is the coefficient vector in one of the inequalities in the H-representation of \(\mathbf{N}_{k,n}^{(d)}\). If there is an entry of the vector \(v_{F}\) which is negative, then we add some vectors which are coefficients of the equations in the H-representation of \(\mathbf{N}_{k,n}^{(d)}\) to \(v_{F}\) such that the resulting vector \(v_{F}^{\prime}\) all have non-negative entries. The vector \(v_{F}^{\prime}\) can be written as \(v_{F}^{\prime}=\sum_{i,j}c_{i,j}e_{i,j}\) for some positive integers \(c_{i,j}\), where \(e_{i,j}\) is the standard basis of \(\mathbb{R}^{(k-1)\times(n-k)}\). By Lemma 5.1, each \(e_{i,j}\) corresponds to a fundamental tableau \(T_{i,j}\) which is defined to be the one-column tableau with entries \([j,j+k]\setminus\{i+j\}\). The tableau \(T_{F}\) corresponding to \(F\) is obtained from \(\cup_{i,j}T_{i,j}^{\cup c_{i,j}}\) by removing all frozen factors (if any). **Conjecture 6.1**.: 1. _For_ \(d\in\mathbb{Z}_{\geq 0}\)_,_ \(k\leq n\)_, and every facet_ \(F\) _of the Newton polytope_ \(\mathbf{N}_{k,n}^{(d)}\)_, we have that the corresponding tableau_ \(T_{F}\) _is prime._ 2. _For_ \(k\leq n\) _and every nonfrozen prime tableau_ \(T\) _in_ \(\mathrm{SSYT}(k,[n])\)_, there is_ \(d\geq 1\) _and a facet_ \(F\) _of_ \(\mathbf{N}_{k,n}^{(d)}\) _such that_ \(T=T_{F}\)_._ Conjecture 6.1 gives systematic procedure to construct all prime \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules. We will give an explicit description of all \(2\)-column prime tableaux in Section 7. Namely, we will prove that a \(2\)-column tableau in \(\mathrm{SSYT}(k,[n])\) is prime if and only if it is the union of two one-column tableaux which are noncrossing and not weakly separated, in Section 7. ### Example: \(\mathrm{Gr}(3,6)\) In the case of \(\mathbb{C}[\mathrm{Gr}(3,6)]\), we use the web matrix (see Section 3.3) \[M=\begin{bmatrix}1&0&0&x_{1,1}x_{2,1}&x_{1,1}x_{2,12}+x_{1,2}x_{2,2}&x_{1,1}x_{2, 123}+x_{1,2}x_{2,23}+x_{1,3}x_{2,3}\\ 0&1&0&-x_{2,1}&-x_{2,12}&-x_{2,123}\\ 0&0&1&1&1&1\end{bmatrix},\] where we abbreviate for example \(x_{2,23}=x_{2,2}+x_{2,3}\). Evaluating all Plucker coordinates on \(M\) and take their product, we obtain a polynomial \(p\). The Newton polytope \(\mathbf{N}_{3,6}^{(1)}\) is the Newton polytope defined by the vertices given by the exponents of monomials of \(p\). 
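The evaluation of the Plucker coordinates on \(M\) can be automated. The following small sympy sketch (our own illustration) builds the web matrix above, expands all \(\binom{6}{3}=20\) maximal minors, and checks that each of them is a polynomial in the \(x_{i,j}\) with nonnegative coefficients, so the product \(p\) is subtraction-free.

```python
from itertools import combinations
import sympy as sp

x11, x12, x13, x21, x22, x23 = sp.symbols('x11 x12 x13 x21 x22 x23', positive=True)
M = sp.Matrix([
    [1, 0, 0, x11*x21, x11*(x21 + x22) + x12*x22, x11*(x21 + x22 + x23) + x12*(x22 + x23) + x13*x23],
    [0, 1, 0, -x21,    -(x21 + x22),              -(x21 + x22 + x23)],
    [0, 0, 1, 1,       1,                         1],
])

for cols in combinations(range(6), 3):
    minor = sp.expand(M[:, list(cols)].det())          # the Pluecker coordinate p_{cols}(M)
    coeffs = sp.Poly(minor, x11, x12, x13, x21, x22, x23).coeffs()
    assert all(c > 0 for c in coeffs)                  # subtraction-free on the web matrix
print("all 20 Pluecker coordinates are subtraction-free on M")
```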
The H-representation of \(\mathbf{N}_{3,6}^{(1)}\) is given by
\[\begin{split}&(0,0,0,1,1,1)\cdot x-20=0,\ (1,1,1,0,0,0)\cdot x-10=0,\ (0,1,1,0,0,0)\cdot x-4\geq 0,\\ &(0,0,1,0,0,0)\cdot x-1\geq 0,\ (0,0,0,0,1,1)\cdot x-11\geq 0,\ (0,0,0,0,0,1)\cdot x-4\geq 0,\\ &(0,0,1,1,0,0)\cdot x-6\geq 0,\ (0,0,0,0,1,0)\cdot x-4\geq 0,\ (0,0,0,1,0,0)\cdot x-4\geq 0,\\ &(1,0,0,0,0,0)\cdot x-1\geq 0,\ (1,0,0,0,1,0)\cdot x-6\geq 0,\ (1,1,0,0,1,1)\cdot x-16\geq 0,\\ &(1,1,0,0,0,0)\cdot x-4\geq 0,\ (0,0,0,1,1,0)\cdot x-11\geq 0,\ (0,1,0,0,0,0)\cdot x-1\geq 0,\\ &(1,0,0,0,1,1)\cdot x-14\geq 0,\ (0,1,0,0,0,1)\cdot x-6\geq 0,\ (1,1,0,0,0,1)\cdot x-11\geq 0,\end{split} \tag{6.1}\]
where \((0,0,0,1,1,1)\cdot x\) is the inner product of the vectors \((0,0,0,1,1,1)\) and \(x\). Now we compute the tableau corresponding to each facet. For example, for the facet \(F\) with the normal vector \(v_{F}=(0,1,1,0,0,0)\) in the first line of (6.1), we have that \(v_{F}=e_{1,2}+e_{1,3}\). The generalized roots \(e_{1,2}\), \(e_{1,3}\) correspond to the fundamental tableaux with entries \(\{2,4,5\}\) and \(\{3,5,6\}\), respectively. Their union has rows \(\{2,3\}\), \(\{4,5\}\), \(\{5,6\}\); removing the frozen factor \(\{3,4,5\}\) gives the one-column tableau \(T_{F}\) with entries \(\{2,5,6\}\), that is, the Plucker coordinate \(p_{256}\). Proceeding in the same way for the remaining normal vectors in (6.1), the \(16\) facets of \(\mathbf{N}_{3,6}^{(1)}\) correspond to the \(14\) nonfrozen Plucker coordinates of \(\operatorname{Gr}(3,6)\) together with the two \(2\)-column prime tableaux described in Section 7, in agreement with Conjecture 6.1.

### Examples: \(\operatorname{Gr}(3,8)\), \(\operatorname{Gr}(4,8)\)

In the case of \(\mathbb{C}[\operatorname{Gr}(3,8)]\), the PK polytope \(\mathbf{N}^{(0)}_{3,8}\) has \(\binom{8}{3}-8=48\) facets, \(\mathbf{N}^{(1)}_{3,8}\) has \(120\) facets (see also Proposition 6.2 in [13]), and \(\mathbf{N}^{(2)}_{3,8}\) has \(128\) facets but is not a simple polytope; \(\mathbf{N}^{(d)}_{3,8}\) has \(128\) facets and is a simple polytope for any \(d\geq 3\). This agrees with the fact that the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(3,8)]\) has \(128\) prime elements (not including frozen variables). In the case of \(\mathbb{C}[\operatorname{Gr}(4,8)]\), \(\mathbf{N}^{(0)}_{4,8}\) has \(\binom{8}{4}-8=62\) facets and \(\mathbf{N}^{(1)}_{4,8}\) has \(360\) facets. Four facets of \(\mathbf{N}^{(1)}_{4,8}\) correspond to prime non-real tableaux.

Let \(T\in\mathrm{SSYT}(k,[n])\). By Lemma 3.13 in [23], there is a unique semistandard tableau \(T^{\prime}\) in \(\mathrm{SSYT}(k,[n])\) whose columns \(T^{\prime}_{1},\ldots,T^{\prime}_{m}\) are fundamental tableaux and \(T\sim T^{\prime}\). For each fundamental tableau \(T_{i,j}\) with entries \([j,j+k]\setminus\{i+j\}\), we define \(\alpha(T_{i,j})=\alpha_{i,j}\). Let \(\gamma_{T}=\sum_{j=1}^{m}\alpha(T^{\prime}_{j})\) and let \(\mathbf{F}_{T}=\{x\in\mathbf{N}_{k,n}^{(d)}:\gamma_{T}(y)\geq\gamma_{T}(x),\ \forall y\in\mathbf{N}_{k,n}^{(d)}\}\). Then \(\mathbf{F}_{T}\) is the face of \(\mathbf{N}_{k,n}^{(d)}\) such that \(\gamma_{T}\) is minimized on \(\mathbf{F}_{T}\).
We say that the tableau \(T\) corresponds to a facet if \(\mathbf{F}_{T}\) is a facet of \(\mathbf{N}_{k,n}^{(d)}\). To compute \(\mathbf{F}_{T}\), we compute the monomials in \(p\) which are minimized by \(\gamma_{T}\), where \(p\) is the polynomial which defines \(\mathbf{N}_{k,n}^{(d)}\). The face \(\mathbf{F}_{T}\) is the face of \(\mathbf{N}_{k,n}^{(d)}\) which is the convex hull of the exponent vectors of these monomials. Conjecture 6.1 (2) is equivalent to the following conjecture.

**Conjecture 6.2**.: _For \(k\leq n\) and any nonfrozen prime tableau \(T\in\mathrm{SSYT}(k,[n])\), there exists \(d\geq 0\) such that the face \(\mathbf{F}_{T}\) of \(\mathbf{N}_{k,n}^{(d)}\) has codimension \(1\)._

We say that two tableaux \(T\), \(T^{\prime}\) (resp., two simple modules \(L(M)\), \(L(M^{\prime})\)) are compatible if \(\mathrm{ch}(T)\mathrm{ch}(T^{\prime})=\mathrm{ch}(T\cup T^{\prime})\) (resp., \(\chi_{q}(L(M))\chi_{q}(L(M^{\prime}))=\chi_{q}(L(MM^{\prime}))\)). We give a conjecture about the compatibility of two prime tableaux (equivalently, two prime modules).

**Conjecture 6.3**.: _Let \(k\leq n\) and let \(T\), \(T^{\prime}\) be two distinct prime tableaux in \(\mathrm{SSYT}(k,[n])\). Then \(T\), \(T^{\prime}\) are compatible if and only if there exists \(d\geq 0\) such that the faces \(\mathbf{F}_{T}\), \(\mathbf{F}_{T^{\prime}}\) corresponding to \(T\), \(T^{\prime}\) are facets and the intersection of \(\mathbf{F}_{T}\), \(\mathbf{F}_{T^{\prime}}\) is nonempty._

**Example 6.4**.: Consider the tableau \(T=[[1,2,4],[1,3,5]]\) (each list is a column of the tableau) and the Newton polytope \(\mathbf{N}_{3,6}^{(1)}\). We now check that \(\mathbf{F}_{T}\) is not a facet of \(\mathbf{N}_{3,6}^{(1)}\). The tableau \(T^{\prime}\) whose columns are fundamental tableaux and such that \(T^{\prime}\sim T\) is \([[1,2,4],[1,3,4],[2,3,5]]\). We have that \(\gamma_{T}=\alpha_{2,1}+\alpha_{1,1}+\alpha_{2,2}\). We compute the exponents of the monomials in \(p\) which take the minimal value when applying \(\gamma_{T}\), where \(p\) is the polynomial which defines \(\mathbf{N}_{3,6}^{(1)}\). These exponents define the integer lattice points in \(\mathbf{F}_{T}\). The affine span of these points is the face \(\mathbf{F}_{T}\) and it is of codimension \(2\). Therefore \(\mathbf{F}_{T}\) is not a (codimension \(1\)) facet of \(\mathbf{N}_{3,6}^{(1)}\). Similarly, \(\mathbf{F}_{T}\) is not a facet of \(\mathbf{N}_{3,6}^{(d)}\) for any \(d\geq 1\). On the other hand, \(T\) is non-prime. Indeed, we have that \(\mathrm{ch}(T)=p_{134}p_{125}\). This agrees with Conjecture 6.2. We give an example showing that the facets of two compatible prime tableaux have a nonempty intersection.

**Example 6.5**.: In Example 6.4, we saw that \(\mathrm{ch}([[1,2,4],[1,3,5]])=p_{134}p_{125}\). So the two one-column tableaux \(T=[[1,3,4]]\) and \(T^{\prime}=[[1,2,5]]\) are compatible. Both of the faces \(\mathbf{F}_{T}\) and \(\mathbf{F}_{T^{\prime}}\) in \(\mathbf{N}_{3,6}^{(1)}\) have codimension 1. The intersection of the facets \(\mathbf{F}_{T}\) and \(\mathbf{F}_{T^{\prime}}\) is nonempty and has codimension 2. This example verifies Conjecture 6.3.

## 7. Explicit Description of 2-column Prime Tableaux

In this section, we prove that a 2-column tableau is prime if and only if it is the union of two one-column tableaux which are noncrossing and not weakly separated.
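Both conditions in this characterization can be tested mechanically. The following minimal Python sketch (our own illustration) implements Definitions 3.4 and 3.5 and recovers the unique noncrossing column pair of the \(2\)-column tableau with rows \(\{1,2\},\{3,4\},\{5,6\},\{7,8\}\) from Example 5.7.

```python
from itertools import product

def weakly_separated(I, J, n):
    # Definition 3.4: the cyclic sign sequence of e_I - e_J changes sign at most twice
    signs = [s for s in ((x in I) - (x in J) for x in range(1, n + 1)) if s != 0]
    return sum(a != b for a, b in zip(signs, signs[1:] + signs[:1])) <= 2

def noncrossing(I, J, n):
    # Definition 3.5
    I, J = sorted(I), sorted(J)
    for a in range(len(I)):
        for b in range(a + 1, len(I)):
            if not weakly_separated(I[a:b+1], J[a:b+1], n) and I[a+1:b] == J[a+1:b]:
                return False
    return True

rows = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]            # the 2-column tableau of Example 5.7
pairs = []
for choice in product(*[sorted(r) for r in rows]):
    S1 = set(choice)
    S2 = {x for r in rows for x in r} - S1
    if {frozenset(S1), frozenset(S2)} not in pairs:
        pairs.append({frozenset(S1), frozenset(S2)})
good = [p for p in pairs if noncrossing(*map(sorted, p), 8)]
print(good)                                        # only {1,4,5,8} and {2,3,6,7} survive
assert len(good) == 1
```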
We also compute the number of 2-column prime tableaux in \(\mathrm{SSYT}(k,[n])\).

**Lemma 7.1**.: _Suppose that \(T_{1},T_{2}\) are 1-column tableaux and they are noncrossing and not weakly separated. Then for any pair of 1-column tableaux \(S_{1},S_{2}\) such that \(S_{1}\cup S_{2}=T_{1}\cup T_{2}\), we have that \(S_{1},S_{2}\) are not weakly separated._

Proof.: Suppose that \(T_{1},T_{2}\) are 1-column tableaux and they are noncrossing and not weakly separated. By Lemma 5.6, for every pair of 1-column tableaux \(S_{1},S_{2}\) such that \(S_{1}\cup S_{2}=T_{1}\cup T_{2}\), either \(\{S_{1},S_{2}\}=\{T_{1},T_{2}\}\) or \(S_{1},S_{2}\) are crossing. If \(\{S_{1},S_{2}\}=\{T_{1},T_{2}\}\), then \(S_{1},S_{2}\) are not weakly separated. If \(\{S_{1},S_{2}\}\neq\{T_{1},T_{2}\}\), then \(S_{1},S_{2}\) are crossing. If there are \(1\leq a<b\leq k\) such that the sub-tableau of \(S_{1}\) consisting of the \(a\)th to \(b\)th rows of \(S_{1}\) and the sub-tableau of \(S_{2}\) consisting of the \(a\)th to \(b\)th rows of \(S_{2}\) are not weakly separated, then \(S_{1}\), \(S_{2}\) are not weakly separated. Now suppose that for any \(1\leq a<b\leq k\), the sub-tableau of \(S_{1}\) consisting of the \(a\)th to \(b\)th rows of \(S_{1}\) and the sub-tableau of \(S_{2}\) consisting of the \(a\)th to \(b\)th rows of \(S_{2}\) are weakly separated. Then by Definition 3.5 the pair \(S_{1}\), \(S_{2}\) is noncrossing, which contradicts the fact that \(S_{1}\), \(S_{2}\) are crossing.

**Example 7.2**.: Let \(T_{1}\), \(T_{2}\) be the one-column tableaux with entries \(\{1,4,5,8\}\) and \(\{2,3,6,7\}\), respectively. We have that \(T_{1},T_{2}\) are noncrossing and not weakly separated. The pairs of 1-column tableaux \(S_{1},S_{2}\) such that \(S_{1}\cup S_{2}=T_{1}\cup T_{2}\) are the pairs of columns
\[\{1,3,5,7\},\{2,4,6,8\};\quad \{1,3,5,8\},\{2,4,6,7\};\quad \{1,3,6,7\},\{2,4,5,8\};\quad \{1,3,6,8\},\{2,4,5,7\};\]
\[\{1,4,5,7\},\{2,3,6,8\};\quad \{1,4,5,8\},\{2,3,6,7\};\quad \{1,4,6,7\},\{2,3,5,8\};\quad \{1,4,6,8\},\{2,3,5,7\}.\]
None of these pairs is weakly separated.

**Theorem 7.3**.: _Let \(L(M)\) be a simple \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module such that \(T_{M}\) is a 2-column tableau. Then \(L(M)\) is prime if and only if there are one-column tableaux \(T_{1},T_{2}\) such that \(T_{M}=T_{1}\cup T_{2}\), and \(T_{1},T_{2}\) are noncrossing and not weakly separated._

Proof.: By [84, Theorem 1.1] and [95, Proposition 3], two quantum Plucker coordinates in the quantum Grassmannian cluster algebra quasi-commute if and only if the corresponding \(k\)-subsets are weakly separated. This implies that for any \(k\)-element subsets \(J,J^{\prime}\) of \([n]\), the \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-module \(L(M_{J})\otimes L(M_{J^{\prime}})\) is simple if and only if \(J,J^{\prime}\) are weakly separated. By Lemma 5.6, there is a unique pair \(T_{1},T_{2}\) of one-column tableaux such that \(T_{1},T_{2}\) are noncrossing and \(T_{M}=T_{1}\cup T_{2}\). Suppose that \(T_{1},T_{2}\) are weakly separated. Then \(L(M_{T_{1}})\otimes L(M_{T_{2}})\) is simple. It follows that \(\chi_{q}(L(M))=\chi_{q}(L(M_{T_{1}}))\chi_{q}(L(M_{T_{2}}))\). Therefore \(L(M)\) is not prime. Now suppose that \(T_{1},T_{2}\) are not weakly separated. By Lemma 7.1, for any pair \(T_{1}^{\prime},T_{2}^{\prime}\) of 1-column tableaux such that \(T_{1}\cup T_{2}=T_{1}^{\prime}\cup T_{2}^{\prime}\), we have that \(T_{1}^{\prime},T_{2}^{\prime}\) are not weakly separated.
Therefore \(\mathrm{ch}(T_{M})\neq\mathrm{ch}(T_{1}^{\prime})\mathrm{ch}(T_{2}^{\prime})\) and \(\chi_{q}(L(M))\neq\chi_{q}(L(M_{T_{1}^{\prime}}))\chi_{q}(L(M_{T_{2}^{\prime}}))\) for any pair of 1-column tableaux \(T_{1}^{\prime},T_{2}^{\prime}\) such that \(T_{M}=T_{1}^{\prime}\cup T_{2}^{\prime}\). Therefore \(L(M)\) and \(T_{M}\) are prime. 

Denote \(\binom{n}{a,b,c}=\frac{n!}{a!b!c!}\) and \(I\triangle J=(I\smallsetminus J)\cup(J\smallsetminus I)\) for two sets \(I,J\).

**Proposition 7.4**.: _For \(k\leq n/2\), the number of 2-column prime tableaux is \(a_{k,n,2}-b_{k,n}\), where \(a_{k,n,m}=\prod_{i=1}^{k}\prod_{j=1}^{m}\frac{n-i+j}{k+m-i-j+1}\) and \(b_{k,n}=\binom{n}{k}+\sum_{j=1}^{k}j\binom{n}{k-j,2j,n-k-j}\)._

Proof.: The number of semistandard Young tableaux of rectangular shape with \(k\) rows, with entries in \(\{1,\ldots,n\}\), and with \(m\) columns is \(a_{k,n,m}\), see [101]. Assume that \(k\leq n/2\). If \(I=J\), then \(I,J\) are weakly separated and there are \(\binom{n}{k}\) choices of \(I=J\). Now assume that \(I\neq J\). Denote \(|I\smallsetminus J|=|J\smallsetminus I|=j\). Since \(|I\cap J|=k-j\) and \(|I\triangle J|=2j\), there are \(\binom{n}{k-j,2j,n-k-j}\) ways to fix the sets \(I\cap J\) and \(I\triangle J\). Since either \(I\smallsetminus J\) or \(J\smallsetminus I\) must be a segment of \(j\) consecutive elements of the \(2j\) elements in \(I\triangle J\), there are \(2j\) choices of \(I\smallsetminus J\). Since the pair \((I,J)\) is unordered, there are \(\frac{1}{2}\sum_{j=1}^{k}2j\binom{n}{k-j,2j,n-k-j}\) choices of weakly separated pairs \((I,J)\) (unordered) in the case of \(I\neq J\). It follows that the number of unordered weakly separated pairs among all Plucker coordinates is \(b_{k,n}\). Therefore the number of 2-column prime tableaux is \(a_{k,n,2}-b_{k,n}\). 

**Remark 7.5**.: It is conjectured in [12] that for \(k\leq n/2\), there are

\[\sum_{r=3}^{k}\left(\frac{2r}{3}\cdot p_{1}(r)+2r\cdot p_{2}(r)+4r\cdot p_{3}(r)\right)\cdot\binom{n}{2r}\binom{n-2r}{k-r}\]

2-column cluster variables in \(\mathbb{C}[\mathrm{Gr}(k,n)]\), where \(p_{i}(r)\) is the number of partitions \(r=r_{1}+r_{2}+r_{3}\) such that \(r_{1},r_{2},r_{3}\in\mathbb{Z}_{\geq 1}\) and \(|\{r_{1},r_{2},r_{3}\}|=i\). The number \(a_{k,n,2}-b_{k,n}\) in Proposition 7.4 includes prime tableaux which are not cluster variables.

## 8. More evidence of Conjecture 6.1

In this section, we verify that the facets of \(\mathbf{N}_{3,9}^{(1)}\) correspond to prime \(U_{q}(\widehat{\mathfrak{sl}_{3}})\)-modules. This gives more evidence for Conjecture 6.1. We also give an explicit conjectural description of the highest \(l\)-weights of a very large family of prime \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)-modules.

### Facets of \(\mathbf{N}_{3,9}^{(1)}\) correspond to prime modules

There are 471 facets of \(\mathbf{N}_{3,9}^{(1)}\) (see also [60, 90]). These facets give 471 tableaux in \(\mathrm{SSYT}(3,[9])\). We now verify that these tableaux are prime. Among these tableaux, there are 75 one-column tableaux. These correspond to all non-frozen Plucker coordinates in \(\mathbb{C}[\mathrm{Gr}(3,9)]\). There are 168 two-column tableaux among the 471 tableaux. These tableaux can be obtained from the two prime 2-column tableaux in \(\mathrm{SSYT}(3,[6])\) by replacing \(1<2<\cdots<6\) by \(a_{1}<a_{2}<\cdots<a_{6}\) (\(a_{i}\in[9]\)). There are 156 tableaux with 3 columns among the 471 tableaux.
Totally there are 228 prime tableaux with 3 columns in \(\mathrm{SSYT}(3,[9])\) (this can be seen by translating the results about the number of indecomposable modules in Grassmannian cluster category \(\mathrm{CM}(B_{3,9})\) in [12]). The 156 tableaux are part of them. Up to promotion [92, 93, 94], these 156 tableaux are \[\begin{array}{|c| There are 3 tableaux which have 5 columns in these 471 tableaux. These three tableaux are promotions of the following tableau \begin{tabular}{|c|c|c|c|} \hline 1 & 1 & 2 & 4 & 5 \\ \hline 2 & 3 & 4 & 7 & 8 \\ \hline 5 & 6 & 7 & 8 & 9 \\ \hline \end{tabular} These 3 tableaux are in the list of cluster variables obtained in [24]. Therefore they are prime. ### Coarsest Matroid Subdivisions and Prime Modules It is conjectured in [46] that all pairwise noncrossing but not weakly separated collections in \(\mathbf{NC}_{k,n}\) induce coarsest positroidal subdivisions of \(\Delta_{k,n}\). We conjecture that all pairwise noncrossing but not weakly separated collections in \(\mathbf{NC}_{k,n}\) give prime tableaux. **Conjecture 8.1**.: _Let \(J_{1},\ldots,J_{r}\) be \(k\)-element subsets of \([n]\) such that each pair of them is noncrossing and not weakly separated. Then \(T=\cup_{i=1}^{r}T_{J_{i}}\) is a prime tableau._ Conjecture 8.1 gives an explicit description of the highest \(l\)-weights of a very large family of prime \(U_{q}(\widehat{\mathbf{sl}_{k}})\)-modules. Note, however, there are tableaux which do not correspond to coarsest positroidal subdivisions of \(\Delta_{k,n}\) but which are still prime. For example, in the case of \(\operatorname{Gr}(3,8)\), the eight tableaux in (8.1) map via \(v_{T}\mapsto\mathcal{F}_{n}^{(3)}(v_{T})\) (see the formula (4.3)) to positive tropical Plucker vectors, where \(v_{T}\) is defined in Section 5.1. The positive tropical Plucker vectors induce positroidal subdivisions of \(\Delta_{3,8}\) which are not coarsest (see Theorem 6.4 in [13]) and do not generate rays of \(\operatorname{Trop}^{+}\!G(3,8)\). On the other hand, all of the eight tableaux in (8.1) are indeed prime, see [24, 90]. \[\begin{array}{ They correspond to 3 prime tableaux in \(\mathrm{SSYT}(3,[9])\): \[\begin{array}{|c We will explain how to compute these Newton polytopes in the following subsections and give a construction of highest \(l\)-weights of simple \(U_{q}(\widehat{\mathfrak{g}})\)-modules which correspond to facets of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) in Section 9.4. **Remark 9.2**.: In type \(A\), the definition of the Newton polytopes \(\mathbf{N}_{\mathfrak{s}\mathfrak{l}_{k},\ell}^{(d)}\) is slightly different from the definition of the Newton polytopes \(\mathbf{N}_{k,n}^{(d)}\) (\(n=k+\ell+1\)) in Section 4 for Grassmannian cluster algebras. Here \(\mathbf{N}_{\mathfrak{s}\mathfrak{l}_{k},\ell}^{(0)}\) is defined using all Kirillov-Reshetikhin modules of \(U_{q}(\widehat{\mathfrak{s}\mathfrak{l}_{k}})\). Finite dimensional simple \(U_{q}(\widehat{\mathfrak{s}\mathfrak{l}_{k}})\)-modules in \(\mathcal{C}_{\ell}\), \(n=k+\ell+1\), correspond to tableaux in \(\mathrm{SSYT}(k,[n],\sim)\)[23]. In Section 4, \(\mathbf{N}_{k,n}^{(0)}\) is defined using all cyclic shifts of the one-column tableau with entries \(1,2,\ldots,k-1,k+1\). These one-column tableaux correspond to a set of minimal affinizations of \(U_{q}(\widehat{\mathfrak{s}\mathfrak{l}_{k}})\)[20, 23, 30]. Figure 1. 
Given a tableau \(T\) with \(39\) columns corresponding to a (conjectural) prime module of \(U_q(\widehat{\mathfrak{s}\mathfrak{l}_{3}})\), we use the map \(\mathcal{F}_{9}^{(3)}(v_{T})\) to define a positive tropical Plücker vector (8.2), expanded in the \(\mathfrak{h}_{abc}\) basis of \(\mathbb{R}^{([9])}\), see also [44] on matroidal weighted blade arrangements. The induced matroid subdivision is dual to the (unweighted) graph in Figure 2. Prime tableaux with more and more columns may appear with the same diagram but with different (integer) coefficients; however the matroid subdivision will not change.

We now define another version of Newton polytopes for quantum affine algebras non-recursively. For \(d\in\mathbb{Z}_{\geq 1}\), denote by \(\mathcal{P}_{\ell}^{+,d}\) the set of all dominant monomials in \(\mathcal{P}_{\ell}^{+}\) with degree less than or equal to \(d\).

**Definition 9.3**.: For a simple Lie algebra \(\mathfrak{g}\) over \(\mathbb{C}\), \(\ell\geq 1\), and \(d\in\mathbb{Z}_{\geq 1}\), we define

\[\mathbf{N^{\prime}}_{\mathfrak{g},\ell}^{(d)}=\mathrm{Newt}\left(\prod_{M\in\mathcal{P}_{\ell}^{+,d}}\widetilde{\chi}_{q}(L(M))/M\right).\]

### Truncated \(q\)-characters and F-polynomials

In [67], for every \(\ell\geq 0\), Hernandez and Leclerc constructed an algebra \(A_{\ell}\) defined by a quiver with potential using their initial seed for the cluster algebra \(K_{0}(\mathcal{C}_{\ell})\). They introduced certain distinguished \(A_{\ell}\)-modules \(K(m)\) for every simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(m)\). Recall that [78] a simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(m)\) is real if \(L(m)\otimes L(m)\) is simple. Hernandez and Leclerc (Conjecture 5.3 in [67]) conjectured that for every real simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(m)\), the truncated \(q\)-character \(\widetilde{\chi}_{q}(L(m))\) of \(L(m)\) is equal to \(mF_{K(m)}\), where \(F_{K(m)}\) is the F-polynomial of \(K(m)\), [41, 42]. By Theorem 4.1 in [49] (Conjecture 1 in [50]), we have that for every simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(m)\) (not necessarily real), \(\widetilde{\chi}_{q}(L(m))\) is equal to \(m\) times a polynomial in the variables \(A_{i,a}^{-1}\) (\(i\in I\), \(a\in\mathbb{C}^{*}\)) with constant term \(1\), where \(A_{i,a}^{-1}\) is defined in (2.1).

Figure 2. Dual graph to the matroid subdivision of \(\Delta_{3,9}\) that is induced by the positive tropical Plücker vector (8.2): place a node at the center of each maximal cell in the subdivision; two nodes are connected by an edge if the corresponding cells share an internal codimension \(1\) face. This graph is also sometimes called the tight span of the matroid subdivision. Here the subdivision is finest; it has \(\binom{9-2}{3-1}=21\) maximal cells and cannot be further decomposed into a collection of matroid polytopes.

Denote \(v_{i,s}=A_{i,s}^{-1}\), \(i\in I\), \(s\in\mathbb{Z}\), where we fix \(a\in\mathbb{C}^{*}\) and write \(A_{i,s}=A_{i,aq^{s}}\). Given a simple module \(L(m)\), after factoring out \(m\) and replacing \(A_{i,s}^{-1}\) by \(v_{i,s}\) in \(\widetilde{\chi}_{q}(L(m))\), we obtain a polynomial in the \(v_{i,s}\). In Section 9.4, we will use the polynomials in the \(v_{i,s}\) corresponding to simple modules to compute the Newton polytopes in Definitions 9.1 and 9.3.

### \(g\)-vectors and highest \(l\)-weights

By results in [67, Section 5.2.2], see also [39, Section 2.6], [23, Section 7], we have that for any simple \(U_{q}(\widehat{\mathfrak{g}})\)-module \(L(M)\), its \(g\)-vector is obtained as follows.
The dominant monomial \(M\) can be written as \(M=\prod_{i,s}Y_{i,s}^{a_{i,s}}\) for some non-negative integers \(a_{i,s}\), where the product runs over all fundamental modules \(L(Y_{i,s})\) in \(\mathcal{C}_{\ell}\). On the other hand, \(M\) can also be written as \(M=\prod_{i}M_{i}^{g_{i}}\) for some integers \(g_{i}\), where the product runs over all initial cluster variables and frozen variables \(L(M_{i})\) in \(\mathcal{C}_{\ell}\). The \(g_{i}\)'s are the unique solution of \(\prod_{i,s}Y_{i,s}^{a_{i,s}}=\prod_{i}M_{i}^{g_{i}}\). With a chosen order, \(g_{i}\)'s form the \(g\)-vector of \(L(M)\). A simple module \(L(M)\) is determined by its \(g\)-vector uniquely. **Remark 9.4**.: In this paper, every element in the dual canonical basis of \(K_{0}(\mathcal{C}_{\ell})\) and \(\mathbb{C}[\operatorname{Gr}(k,n)]\) has a \(g\)-vector in the above sense even if it is not a cluster monomial. Given a simple module \(L(M)\), let \(g_{M}\) be the vector obtained from the \(g\)-vector of \(L(M)\) by forgetting the entries corresponding to the frozens. We also call \(g_{M}\) the \(g\)-vector \(L(M)\). Fix an order of the initial cluster variables (not including frozens), say \(z_{1},\ldots,z_{m}\). Given any vector \(g\in\mathbb{Z}^{m}\), the monomial \(M^{\prime}=z_{1}^{g_{1}}\cdots z_{m}^{g_{m}}\) can be written as \(M^{\prime}=AB^{-1}\) for two dominant monomials \(A,B\). The monomial \(B\) is of the form \(B=\prod_{i\in I}Y_{i,s_{1}}^{u_{i,1}}\cdots Y_{i,s_{r_{i}}}^{u_{i,r_{i}}}\) for some positive integers \(r_{i}\), \(u_{i,j}\). Let \(B^{\prime}=\prod_{i\in I}(Y_{i,\xi(i)}\cdots Y_{i,\xi(i)+2\ell})^{\max(u_{i,j} ;j=1,\ldots,r_{i})}\). Then \(M^{\prime}B^{\prime}\) is a dominant monomial and \(M^{\prime}B^{\prime}\) cannot be written as a product of a dominant monomial and a frozen variable. We denote \(M_{g}=M^{\prime}B^{\prime}\) and say that \(L(M_{g})\) corresponds to the vector \(g\). ### From facets to prime modules Fix an order of the initial cluster variables. Every \(v_{i,s}\) is the X-variable at a vertex of the initial quiver of \(K_{0}(\mathcal{C}_{\ell})\) (see Lemma 4.15 in [67]). We order the variables \(v_{i,s}\) according to the order of initial cluster variables. Given a facet \(\mathbf{F}\) of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) with the inward normal vector \(v_{\mathbf{F}}\) of \(\mathbf{F}\), let \(L(M_{\mathbf{F}})\) be the simple \(U_{q}(\widehat{\mathfrak{g}})\)-module corresponding to \(v_{\mathbf{F}}\), see Section 9.3. Recall that two simple \(U_{q}(\widehat{\mathfrak{g}})\)-modules \(L(M)\), \(L(M^{\prime})\) are called compatible if the identity \(\chi_{q}(L(M))\chi_{q}(L(M^{\prime}))=\chi_{q}(L(MM^{\prime}))\) holds. **Conjecture 9.5**.: _Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and \(\ell\geq 1\). We have the following._ 1. _For any_ \(d\geq 0\)_, every facet of_ \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) _corresponds to a prime_ \(U_{q}(\widehat{\mathfrak{g}})\)_-module in_ \(\mathcal{C}_{\ell}\)_._ 2. _For every prime_ \(U_{q}(\widehat{\mathfrak{g}})\)_-module (nonfrozen)_ \(L(M)\) _in_ \(\mathcal{C}_{\ell}\)_, there exists_ \(d\geq 0\) _such that_ \(L(M)\) _corresponds to a facet of the polytope_ \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) _._ 3. 
_For any two distinct prime modules in_ \(\mathcal{C}_{\ell}\)_, they are compatible if and only if there is some_ \(d\geq 0\) _such that there are two facets of_ \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) _corresponding to them and the intersection of these two facets is nonempty._ In the following subsections, we compute some examples of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) and \(\mathbf{N^{\prime}}_{\mathfrak{g},\ell}^{(d)}\). ### Example: \(\mathfrak{g}\) is of type \(A_{1}\) and \(\ell=2\) Consider the case of type \(A_{1}\) and \(\ell=2\). The category \(\mathcal{C}_{\ell}\) has \(5\) prime modules (not including frozen variables). We choose a height function \(\xi(1)=-1\), see Section 2.2. The truncated \(q\)-characters of Kirillov-Reshetikhin modules (not including frozen variables) in \(\mathcal{C}_{\ell}\) are \[\widetilde{\chi}_{q}(L(Y_{1,-1}))=Y_{1,-1},\quad\widetilde{\chi}_ {q}(L(Y_{1,-3}))=Y_{1,-3}+Y_{1,-1}^{-1}=Y_{1,-3}(1+v_{1,-2}),\] \[\widetilde{\chi}_{q}(L(Y_{1,-5}))=Y_{1,-5}+Y_{1,-3}^{-1}=Y_{1,-3 }Y_{1,-5}(1+v_{1,-4}),\quad\widetilde{\chi}_{q}(L(Y_{1,-1}Y_{1,-3}))=Y_{1,-1}Y _{1,-3},\] \[\widetilde{\chi}_{q}(L(Y_{1,-3}Y_{1,-5}))=Y_{1,-3}Y_{1,-5}+\frac {Y_{1,-5}}{Y_{1,-1}}+\frac{1}{Y_{1,-1}Y_{1,-3}}=Y_{1,-3}Y_{1,-5}(1+v_{1,-2}+v_ {1,-4}v_{1,-2}).\] We take the order of the initial cluster variables as \(L(Y_{1,-1})\), \(L(Y_{1,-3}Y_{1,-1})\). The corresponding order of variables \(v_{i,s}\) is \(v_{1,-2}\), \(v_{1,-4}\). The Newton polytope \(\mathbf{N}_{\mathfrak{sl}_{2},2}^{(0)}\) is given by the following half-spaces: \[(-1,0)\cdot x+2\geq 0,\ (1,-1)\cdot x+1\geq 0,\ (1,0)\cdot x+0\geq 0,\ (0,1)\cdot x+0\geq 0,\ (0,-1)\cdot x+2\geq 0. \tag{9.1}\] The inward normal vectors \[(1,0),\ (-1,1),\ (-1,0),\ (0,-1),\ (0,1)\] of these facets are exactly the \(g\)-vectors of prime modules in \(\mathcal{C}_{2}^{\mathfrak{sl}_{2}}\). These facets correspond to the following prime modules respectively: \[L(Y_{1,-5}),\ L(Y_{1,-1}Y_{1,-3}),\ L(Y_{1,-1}),\ L(Y_{1,-3}),\ L(Y_{1,-3}Y_{1, -5}).\] The Newton polytope \(\mathbf{N}_{\mathfrak{sl}_{2},2}^{(d)}\) also has \(5\) facets for any \(d\geq 1\). ### Example: \(\mathfrak{g}\) is of type \(A_{2}\), \(\ell=2\) In the case of type \(A_{2}\), \(\ell=2\), we choose the height function \(\xi(1)=-1\), \(\xi(2)=0\). There are \(16\) prime modules (not including the two frozens) in the category \(\mathcal{C}_{\ell}\). We have the following truncated \(q\)-characters of Kirillov-Reshetikhin modules (not including initial cluster variables and frozen variables) in \(\mathcal{C}_{\ell}\): \[\widetilde{\chi}_{q}(L(Y_{1,-3}))=\frac{1}{Y_{2,0}}+\frac{Y_{2,-2}}{Y_{1,-1}}+ Y_{1,-3}=Y_{1,-3}(1+v_{1,-2}+v_{1,-2}v_{2,-1}),\] \[\widetilde{\chi}_{q}(L(Y_{1,-5}))=\frac{1}{Y_{2,-2}}+\frac{Y_{2,-4}}{Y_{1,-3}}+ Y_{1,-5}=Y_{1,-5}(1+v_{1,-4}+v_{1,-4}v_{2,-3}),\] \[\widetilde{\chi}_{q}(L(Y_{2,-2}))=Y_{1,-3}Y_{1,-5}+\frac{Y_{1,-5}}{Y_{2,0}}+ \frac{1}{Y_{2,0}Y_{2,-2}}+\frac{Y_{2,-4}}{Y_{2,0}Y_{1,-3}}+\frac{Y_{2,-2}Y_{1,-5 }}{Y_{1,-1}}+\frac{Y_{2,-2}Y_{2,-4}}{Y_{1,-1}Y_{1,-3}}\] \[=Y_{1,-3}Y_{1,-5}(1+v_{1,-2}+v_{1,-2}v_{1,-4}+v_{1,-2}v_{2,-1}+v_{ 1,-2}v_{2,-1}v_{1,-4}+v_{1,-2}v_{2,-1}v_{1,-4}+v_{1,-2}v_{2,-1}v_{1,-4}v_{2,-3}),\] \[\widetilde{\chi}_{q}(L(Y_{2,-2}Y_{2,-4}))=Y_{2,-2}Y_{2,-4}+\frac{Y_ {1,-1}Y_{2,-4}}{Y_{2,0}}+\frac{Y_{1,-1}Y_{1,-3}}{Y_{2,0}Y_{2,-2}}=Y_{2,-2}Y_{2,- 4}(1+v_{2,-1}v_{2,-3}+v_{2,-1}),\] We take the order of the initial cluster variables as \(L(Y_{1,-1})\), \(L(Y_{1,-3}Y_{1,-1})\), \(L(Y_{2,0})\), \(L(Y_{2,-2}Y_{2,0})\). 
The corresponding order of the variables \(v_{i,s}\) is \(v_{1,-2}\), \(v_{1,-4}\), \(v_{2,-1}\), \(v_{2,-3}\). The Newton polytope \(\mathbf{N}_{\text{sl}_{3},2}^{(0)}\) is given by the following half-spaces: \[(-1,0,0,0)\cdot x+3\geq 0,\ (0,-1,0,0)\cdot x+2\geq 0,\ (0,0,-1,0) \cdot x+4\geq 0,\] \[(0,1,0,-1)\cdot x+2\geq 0,\ (0,0,1,-1)\cdot x+2\geq 0,\ (0,0,1,0) \cdot x+0\geq 0,\] \[(0,0,0,1)\cdot x+0\geq 0,\ (1,-1,0,0)\cdot x+1\geq 0,\ (1,0,0,0) \cdot x+0\geq 0,\] \[(1,0,-1,0)\cdot x+2\geq 0,\ (0,1,0,0)\cdot x+0\geq 0,\ (-1,0,0,1) \cdot x+2\geq 0,\ (0,1,1,-1)\cdot x+1\geq 0.\] The inward normal vectors of these facets correspond to the following prime modules respectively: \[L(Y_{1,-1}),\ L(Y_{1,-1}Y_{1,-3}),\ L(Y_{2,0}),\ L(Y_{1,-5}Y_{2,- 2}Y_{2,0}),\ L(Y_{2,-2}),\ L(Y_{2,-4}Y_{2,-2}),\ L(Y_{2,-4}),\] \[L(Y_{1,-3}),\ L(Y_{1,-5}Y_{1,-3}),\ L(Y_{1,-5}Y_{1,-3}Y_{2,0}),\ L (Y_{1,-5}),\ L(Y_{2,-4}Y_{1,-1}),\ L(Y_{1,-5}Y_{2,-2}).\] We have the following truncated \(q\)-characters: \[\widetilde{\chi}_{q}(L(Y_{2,-2}Y_{1,-5}))=Y_{2,-2}Y_{1,-5}+\frac{ Y_{1,-1}}{Y_{2,0}Y_{2,-2}}+\frac{Y_{1,-1}Y_{1,-5}}{Y_{2,0}}+\frac{Y_{2,-2}Y_{2,-4} }{Y_{1,-3}}+\frac{Y_{1,-1}Y_{2,-4}}{Y_{2,0}Y_{1,-3}}\] \[=Y_{2,-2}Y_{1,-5}(v_{1,-4}v_{2,-3}v_{2,-1}+v_{1,-4}v_{2,-1}+v_{1, -4}+v_{2,-1}+1).\] \[\widetilde{\chi}_{q}(L(Y_{1,-1}Y_{2,-4}))=Y_{1,-1}Y_{2,-4}+\frac{ Y_{1,-1}Y_{1,-3}}{Y_{2,-2}}=Y_{1,-1}Y_{2,-4}(1+v_{2,-3}),\] \[\widetilde{\chi}_{q}(L(Y_{2,0}Y_{2,-2}Y_{1,-5}))=Y_{2,0}Y_{2,-2}Y_ {1,-5}+\frac{Y_{2,0}Y_{2,-2}Y_{2,-4}}{Y_{1,-3}}=Y_{2,0}Y_{2,-2}Y_{1,-5}(v_{1,- 4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,0}Y_{1,-3}Y_{1,-5}))=Y_{2,0}Y_{1,-3}Y_{1,-5 }+\frac{Y_{2,0}Y_{2,-2}Y_{1,-5}}{Y_{1,-1}}+\frac{Y_{2,0}Y_{2,-2}Y_{2,-4}}{Y_{1,- 1}Y_{1,-3}}\] \[=Y_{2,0}Y_{1,-3}Y_{1,-5}(v_{1,-2}v_{1,-4}+v_{1,-2}+1).\] The Newton polytope \(\mathbf{N}_{\mathfrak{sl}_{3},2}^{(1)}\) is given by the following half-spaces: \[(-1,0,0,0)\cdot x+4\geq 0,(0,-1,0,0)\cdot x+5\geq 0,(0,0,-1,0) \cdot x+5\geq 0,(0,0,0,-1)\cdot x+6\geq 0,\] \[(0,0,1,0)\cdot x+0\geq 0,(0,0,1,-1)\cdot x+3\geq 0,(0,1,0,0) \cdot x+0\geq 0,(1,-1,0,0)\cdot x+3\geq 0,\] \[(1,0,-1,0)\cdot x+3\geq 0,(1,0,0,0)\cdot x+0\geq 0,(1,0,0,-1) \cdot x+5\geq 0,(0,0,0,1)\cdot x+0\geq 0,\] \[(0,1,0,-1)\cdot x+3\geq 0,(0,1,1,-1)\cdot x+2\geq 0,(1,-1,-1,0) \cdot x+7\geq 0,(-1,0,0,1)\cdot x+3\geq 0.\] The inward normal vectors of these facets are exactly the \(g\)-vectors of prime modules (not including frozens) in \(\mathcal{C}_{2}^{\mathfrak{sl}_{3}}\). We have the following truncated \(q\)-character: \[\widetilde{\chi}_{q}(L(Y_{1,-3}Y_{2,0}))=Y_{2,0}Y_{1,-3}+\frac{Y_{2,0}Y_{2,-2} }{Y_{1,-1}}=Y_{2,0}Y_{1,-3}(v_{1,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,0}Y_{1,-3}Y_{2,-2}Y_{1,-5}))=Y_{2,-2}Y_{1,-5}+ \frac{Y_{2,-2}Y_{2,-4}}{Y_{1,-3}}+Y_{2,0}Y_{1,-3}Y_{2,-2}Y_{1,-5}+\frac{Y_{2,0 }Y_{2,-2}{}^{2}Y_{1,-5}}{Y_{1,-1}}+\frac{Y_{2,0}Y_{2,-2}{}^{2}Y_{2,-4}}{Y_{1,- 1}Y_{1,-3}}\] \[=Y_{2,0}Y_{1,-3}Y_{2,-2}Y_{1,-5}(v_{1,-2}v_{2,-1}+v_{1,-2}v_{1,-4}+v_{1,-2}+v_{ 1,-2}v_{2,-1}v_{1,-4}+1).\] The Newton polytope \(\mathbf{N}_{\mathfrak{sl}_{3},2}^{(d)}\) has 16 facets for any \(d\geq 1\). On the other hand, there are 16 prime modules (not including the two frozens) in \(\mathcal{C}_{2}^{\mathfrak{sl}_{3}}\). Therefore there is a one to one correspondence between facets of \(\mathbf{N}_{k,n}^{(d)}\) (\(d\geq 2\)) and prime modules in \(\mathcal{C}_{2}^{\mathfrak{sl}_{3}}\). ### Example: \(\mathfrak{g}\) is of type \(B_{n}\) and \(\ell=1\) Consider the case of type \(B_{2}\) and \(\ell=1\). 
We choose a height function as shown in Figure 3, see [65, 67] for the definition of the cluster algebra associated to \(\mathcal{C}_{\ell}\). There are 25 prime modules (not including the 3 frozen modules) in \(\mathcal{C}_{1}^{B_{2}}\) are \[L(Y_{1,2}),\ L(Y_{1,0}),\ L(Y_{1,-2}),\ L(Y_{1,-4}),\ L(Y_{2,1}),\ L(Y_{2,-1}), \ L(Y_{2,-3}),\ L(Y_{2,-5}),\] \[L(Y_{1,-4}Y_{2,1}),\ L(Y_{1,-4}Y_{1,2}),\ L(Y_{1,0}Y_{2,-5}),\ L(Y_{1,2}Y_{2,-3}),\ L(Y_{2,-5}Y_{2,1}),\ L(Y_{2,-5}Y_{2,-3}),\] \[L(Y_{2,-3}Y_{2,-1}),\ L(Y_{2,-1}Y_{2,1}),\ L(Y_{1,-4}Y_{2,-1}Y_{2,1}),\ L(Y_{1,0}Y_{2,-5}Y_{2,-3}),\ L(Y_{1,2}Y_{2,-5}Y_{2,-3}),\] \[L(Y_{1,2}Y_{2,-3}Y_{2,-1}),\ L(Y_{2,-5}Y_{2,-3}Y_{2,-1}),\ L(Y_{2,-3}Y_{2,-1}Y_{ 2,1}),\ L(Y_{1,0}Y_{1,2}Y_{2,-5}Y_{2,-3}),\] \[L(Y_{1,2}Y_{2,-5}Y_{2,-3}Y_{2,-1}),\ L(Y_{1,2}Y_{2,-5}Y_{2,-3}Y_{2,-1}).\] We have the following truncated \(q\)-characters of prime modules (not including initial cluster variables and frozen variables) in \(\mathcal{C}_{\ell}\): \[\widetilde{\chi}_{q}(L(Y_{2,-1}))=Y_{2,-1}(v_{2,0}+1),\quad\widetilde{\chi}_{q}( L(Y_{1,-2}))=Y_{1,-2}(v_{1,0}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-3}))=Y_{1,2}Y_{2,-3}(v_{2,-2}+1),\quad \widetilde{\chi}_{q}(L(Y_{1,0}Y_{2,-5}))=Y_{1,0}Y_{2,-5}(v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,1}Y_{1,-4}))=Y_{2,1}Y_{1,-4}(v_{1,-2}+1),\quad \widetilde{\chi}_{q}(L(Y_{2,-3}))=Y_{2,-3}(v_{1,0}v_{2,-2}+v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,1}Y_{2,-5}))=Y_{2,1}Y_{2,-5}(v_{1,-2}v_{2,-4}+v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,-5}))=Y_{2,-5}(v_{1,-2}v_{2,-4}+v_{2,-4}+v_{2,0}v_ {1,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,-1}Y_{2,1}Y_{1,-4}))=Y_{2,-1}Y_{2,1}Y_{1,-4}(v_{2,0}v_{1,-2}+v_{1,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-1}Y_{2,-3}))=Y_{1,2}Y_{2,-1}Y_{2,-3}(v_{2,0}v_{2,-2}+v_{2,0}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{1,-4}))=Y_{1,2}Y_{1,-4}(v_{2,0}v_{1,-2}+v_{1,-2}+v_{2,0}v_{1,-2}v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,-1}Y_{2,-3}))=Y_{2,-1}Y_{2,-3}(v_{2,0}v_{2,-2}+v_{ 2,0}+v_{1,0}v_{2,0}v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,0}Y_{1,2}Y_{2,-3}Y_{2,-5}))=Y_{1,0}Y_{1,2}Y_{2,-3} Y_{2,-5}(v_{2,-2}v_{2,-4}+v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-3}Y_{2,-5}))=Y_{1,2}Y_{2,-3}Y_{2,-5}(v_{2,- 2}v_{2,-4}+v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-3}Y_{2,-5}))=Y_{1,2}Y_{2,-3}Y_{2,-5}(v_{2,- 2}v_{2,-4}+v_{2,-2}+v_{1,-2}v_{2,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,-4}))=Y_{1,-4}(v_{2,0}v_{1,-2}+v_{1,-2}+v _{2,0}v_{1,-2}v_{2,-2}+v_{1,0}v_{2,0}v_{1,-2}v_{2,-2}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-1}Y_{2,-3}Y_{2,-5}))=Y_{1,2}Y_ {2,-1}Y_{2,-3}Y_{2,-5}(v_{2,0}v_{2,-2}+v_{2,0}+v_{2,0}v_{2,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,0}Y_{2,-3}Y_{2,-5}))=Y_{1,0}Y_{2,-3}Y_ {2,-5}(v_{1,0}v_{2,-2}+v_{2,-2}v_{2,-4}+v_{2,-2}+v_{1,0}v_{2,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,-1}Y_{2,-3}Y_{2,-5}))= Y_{2,-1}Y_{2,-3}Y_{2,-5}(v_{2,0}v_{2,-2}+v_{2,0}\] \[+v_{1,0}v_{2,0}v_{2,-2}+v_{2,0}v_{2,-2}v_{2,-4}+v_{1,0}v_{2,0}v_{ 2,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{2,-3}Y_{2,-5}))= Y_{2,-3}Y_{2,-5}(v_{1,0}v_{2,-2}+v_{2,-2}v_{2,-4}+v_{2,-2}+v_{1,0}v_{ 2,-2}v_{2,-4}\] \[+v_{1,-2}v_{2,-2}v_{2,-4}+v_{1,0}v_{1,-2}v_{2,-2}v_{2,-4}+1),\] \[\widetilde{\chi}_{q}(L(Y_{1,2}Y_{2,-1}Y_{2,-3}{}^{2}Y_{2,-5}))=Y_{1,2}Y_{2,-1}Y_{2,-3}{}^{2}Y_{2,-5}(2v_{2,0}v_{2,-2}+v_{2,0}v_{2,-2}{}^{2}+v_{1, 0}v_{2,0}v_{2,-2}{}^{2}\] \[+v_{2,0}v_{2,-2}{}^{2}v_{2,-4}+v_{2,0}+v_{2,-2}+v_{1,0}v_{2,0}v_{ 2,-2}+v_{2,0}v_{2,-2}v_{2,-4}+v_{1,0}v_{2,0}v_{2,-2}{}^{2}v_{2,-4}+1).\] By using the above 
F-polynomials, we find that \(\mathbf{N}^{\prime(5)}_{B_{n},1}\) has 25 facets. The polytope \(\mathbf{N}^{\prime(d)}_{B_{n},1}\)\((d\geq 5)\) also has 25 facets and it is the type \(D_{5}\) associahedron. We expect that for every \(n\in\mathbb{Z}_{\geq 2}\), \(\mathbf{N}^{\prime(d)}_{B_{n},1}\) (\(d\) is large enough) is the type \(D_{2n+1}\) associahedron. ### Tropical Fans for Quantum Affine Algebras Let \(\mathcal{M}^{(0)}=\mathcal{M}\) be the set of all equivalence classes of Kirillov-Reshetikhin modules of \(U_{q}(\widehat{\mathfrak{g}})\) in \(\mathcal{C}_{\ell}\). By tropicalizing all \(\widetilde{\chi}_{q}(L(M))/M\), \(L(M)\in\mathcal{M}\), we obtain piecewise linear functions in the space of dimension \(r-m\) parametrized by \(y_{i,j}\) (\(y_{i,j}\) is the tropical version of \(v_{i,j}\)), where \(r\) is the number of fundamental modules and \(m\) is the number of frozen variables in \(\mathcal{C}_{\ell}\). Such a function is linear on a collection of cones; these cones assemble to define a polyhedral fan. The common refinement of these fans is the normal fan \(\mathcal{N}(\mathbf{N}^{(0)}_{\mathfrak{g},\ell})\) of the Newton polytope \(\mathbf{N}^{(0)}_{\mathfrak{g},\ell}\). For \(d\geq 1\), let \(\mathcal{M}^{(d)}\) be the set of all equivalence classes of simple modules corresponding to rays of \(\mathcal{N}(\mathbf{N}^{(d-1)}_{\mathfrak{g},\ell})\). By tropicalizing all \(\chi_{q}(L(M))\), \(L(M)\in\mathcal{M}^{(d)}\), and using the same procedure as above, we obtain the normal fan \(\mathcal{N}(\mathbf{N}^{(d)}_{\mathfrak{g},\ell})\) of the Newton polytope \(\mathbf{N}^{(d)}_{\mathfrak{g},\ell}\) defined in Section 9.1. ## 10. Physical Motivation: Stringy Integrals and CEGM Scattering Amplitudes In this section, we propose a formula which extends the main construction in the work of Arkani-Hamed, He, Lam [4] on so-called Grassmannian string integrals, and Cachazo, Early, Guevara, Mizera (CEGM) [21] on generalized biadjoint scalar amplitudes. Grassmannian string integrals and generalized biadjoint scalar amplitudes are related by taking a certain \(\alpha^{\prime}\to 0\) limit of the stringy integral. ### Stringy Integrals For Grassmannian Cluster Algebras From a physical point of view, the central objective of this subsection which we describe here is twofold. First, in this subsection we give an explicit formula for a completion of the stringy integral by making use of _all_ of the elements in Lusztig's dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\); the rest of the subsection aims to provide a combinatorial framework for the evaluation of a limit which is standard in physics, the so-called \(\alpha^{\prime}\to 0\) limit of the Grassmannian string integral, which is known to be given ([4, Claim 1]) by the CEGM scattering equations formula. Such calculations are still highly nontrivial, but the formula which we propose removes an enormous amount of redundancy by making use of character polynomials for only _prime_ tableaux. It is known that any simple \(U_{q}(\widehat{\mathfrak{g}})\)-module decomposes as the tensor product of prime simple modules [28]. Therefore the \(q\)-character4 of any simple \(U_{q}(\widehat{\mathfrak{g}})\)-module is the product of the \(q\)-characters of its prime factors [50]. Footnote 4: There is a connection between the \(q\)-character of a simple module \(L(M)\) and the polynomial \(\operatorname{ch}_{T_{M}}\), where \(T_{M}\) is the tableau corresponding to \(M\), see Sections 3.1 and 3.2. 
Moreover, our formula is essentially nonrecursive using \(\operatorname{ch}_{T}\) in Theorem 5.8 in [23], and it is more general than possible constructions coming from cluster algebras which use only cluster variables. Arkani-Hamed, He, and Lam introduced Grassmannian string integrals in [4, Equation (6.11)]: \[\mathbf{I}_{k,n} = (\alpha^{\prime})^{a}\int_{(\mathbb{R}_{>0}^{n-k-1})^{\times(k-1) }}\left(\prod_{(i,j)}\frac{dx_{i,j}}{x_{i,j}}\right)\left(\prod_{J}p_{J}^{- \alpha^{\prime}c_{J}}(x_{i,j})\right), \tag{10.1}\] where \(a=(k-1)(n-k-1)\), \(\alpha^{\prime}\), \(c_{J}\) are some parameters, \(p_{J}\)'s are Plucker coordinates, the product runs over all \(k\)-element subsets of \([n]\). We emphasize that the original formulations (10.1) in [21] and [4] involved only the finite collection of all Plucker coordinates. We now define the completion of the Grassmannian string integral, using for the integrand all prime elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\). **Definition 10.1**.: For \(2\leq k\leq n-2\) and every \(d\geq 1\), we define \[\mathbf{I}_{k,n}^{(d)} = (\alpha^{\prime})^{a}\int_{(\mathbb{R}_{>0}^{n-k-1})^{\times(k-1)} }\left(\prod_{(i,j)}\frac{dx_{i,j}}{x_{i,j}}\right)\left(\prod_{T}\mathrm{ch}_{ T}^{-\alpha^{\prime}c_{T}}(x_{i,j})\right). \tag{10.2}\] where the second product is over all tableaux \(T\) such that the face \(\mathbf{F}_{T}\) corresponding to \(T\) (see Section 6.4) is a (codimension one) facet of \(\mathbf{N}_{k,n}^{(d-1)}\). Here we abbreviate \(a=(k-1)(n-k-1)\). Also \(c_{T}\), \(x_{i,j}>0\) are positive (real) parameters, and \(\alpha^{\prime}\) is a parameter known in physics as the string tension. The first product is over \((i,j)\in[1,k-1]\times[1,n-k-1]\), and we have chosen the normalization where \(x_{i,n-k}=1\) for all \(i=1,\ldots,k-1\). In the integral (10.2) we have conditions under which the integral converges, namely that the parameters \(\alpha_{i,j}\) and \(c_{T}\) must be chosen such that the origin is in the interior of the Newton polytope, see [4, Claim 1] for details. Denote by \(\mathrm{PSSYT}_{k,n}^{r}\subset\mathrm{SSYT}(k,[n])\) the set of prime tableaux in \(\mathrm{SSYT}(k,[n])\) with \(r\) or less columns and by \(\mathrm{PSSYT}_{k,n}\subset\mathrm{SSYT}(k,[n])\) the set of all prime tableaux in \(\mathrm{SSYT}(k,[n])\). It is natural5 to introduce the \(d\to\infty\) limit of the Grassmannian string integral (10.2): Footnote 5: See talks by Arkani-Hamed, Frost, Plamondon, Salvatori, and Thomas in [2, 3]. \[\mathbf{I}_{k,n}^{(\infty)} = (\alpha^{\prime})^{a}\int_{(\mathbb{R}_{>0}^{n-k-1})^{\times(k-1) }}\left(\prod_{(i,j)}\frac{dx_{i,j}}{x_{i,j}}\right)\left(\prod_{T\mathrm{ePSSYT }_{k,n}}\mathrm{ch}_{T}^{-\alpha^{\prime}c_{T}}(x_{i,j})\right). \tag{10.3}\] For finite type cluster algebras, our integrand is finite. However, starting at \((k,n)=(3,9)\) the integrand involves an infinite product. We also introduce another version of the Grassmannian string integral (10.2) using all prime tableaux up to certain columns. **Definition 10.2**.: For \(2\leq k\leq n-2\) and \(r\geq 1\), we define \[\mathbf{I}_{k,n}^{\prime(r)} = (\alpha^{\prime})^{a}\int_{(\mathbb{R}_{>0}^{n-k-1})^{\times(k-1) }}\left(\prod_{(i,j)}\frac{dx_{i,j}}{x_{i,j}}\right)\left(\prod_{T\mathrm{ePSSYT }_{k,n}^{r}}\mathrm{ch}_{T}^{-\alpha^{\prime}c_{T}}(x_{i,j})\right), \tag{10.4}\] where \(a=(k-1)(n-k-1)\), and \(\alpha^{\prime}\), \(c_{T}\) are certain parameters defined in the same way as Definition 10.1. 
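For orientation, the simplest integral of the type (10.1), with \((k,n)=(2,4)\), reduces in a suitable positive parametrization to the Euler beta function; the identification of the exponents \(X\) and \(c\) below with particular \(c_{J}\) is schematic and should be treated as an assumption. The following minimal Python sketch checks numerically that \(\alpha^{\prime}\) times the integral approaches the sum of simple poles \(1/X+1/(c-X)\) as \(\alpha^{\prime}\to 0\), in line with [4, Claim 1]; convergence requires \(0<X<c\), i.e., the origin lies in the interior of the Newton polytope.

```python
from math import gamma

# Toy one-variable "stringy" integral (assumed normalization):
#   Z(a') = a' * \int_0^infty (dx/x) x^{a' X} (1 + x)^{-a' c}
#         = a' * B(a' X, a' (c - X)),  valid for 0 < X < c.
def Z(alpha, X, c):
    aX, aY = alpha * X, alpha * (c - X)
    return alpha * gamma(aX) * gamma(aY) / gamma(aX + aY)

X, c = 0.3, 1.0
leading = 1.0 / X + 1.0 / (c - X)      # sum of the two simple poles (one per facet)
for alpha in (1.0, 0.1, 0.01, 0.001):
    print(alpha, Z(alpha, X, c), leading)
# Z(alpha) -> 1/X + 1/(c - X) = 4.7619... as alpha -> 0
```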
Note that in the limit \(r\to\infty\) the integrands for (10.4) and (10.3) coincide. In this way we have a combinatorial construction which relates prime tableaux to stringy integrals, and a geometric interpretation of the set of all prime tableaux in terms of a polytope. The polynomials \(\mathrm{ch}_{T}\) that appear in the integrands of (10.2) and (10.4) are in bijection with prime tableaux and can be calculated using (3.2). An important problem which may help with the evaluation will be investigated in Section 10.2: to rewrite Equation (10.3) in terms of rational functions which are invariant under the torus action, that is the so-called \(u\)-variables [5], and then to calculate the binary relations among them. See also [45] for another physical application of binary relations in the context of CEGM scattering amplitudes. By [4, Claim 1], the leading order term in the series expansion around \(\alpha^{\prime}=0\) has a beautiful interpretation as the volume of a polytope, where the simple poles correspond to facets. The polytope is dual to the Newton polytope \(\mathbf{N}_{k,n}^{(1)}\). This leading order contribution was formulated originally in [21] by Cachazo, Early, Guevara and Mizera (CEGM) using the scattering equations formalism. **Remark 10.3**.: The stringy integral in Equation (10.3) converges if and only if the origin is in the interior of the Newton polytope, see [4, Claim 1]; however, the \(\alpha^{\prime}\to 0\) limit is calculated by the CEGM scattering equations formula [21], which has no such convergence limitation. It turns out that the limit \(\alpha^{\prime}\to 0\) of the Grassmannian string integral (10.1) coincides with the CEGM scattering equations formula [4]. Let us sketch the CEGM formula, referring to [21] for details. First we define a scattering potential function \[\mathcal{S}_{k,n}^{(d=1)}=\sum_{J}\log(p_{J})\mathfrak{s}_{J},\] where \(p_{J}\) is the maximal \(k\times k\) minor with column set \(J=\{j_{1},\ldots,j_{k}\}\), and the _Mandelstam variables_\(\mathfrak{s}_{J}\) are coordinate functions on the _kinematic space_ \[\mathcal{K}(k,n)=\left\{(\mathfrak{s}_{J})\in\mathbb{R}^{\binom{n}{k}}:\sum_ {J:J\ni i}\mathfrak{s}_{J}=0,i=1,\ldots,n\right\}.\] Then [21] defined the (planar) generalized biadjoint scalar amplitude \[m_{n}^{(k)}=\sum_{\text{cccrit}(\mathcal{S}_{k,n}^{(1)})}\frac{1}{\det^{ \prime}\Phi}\left(\prod_{j=1}^{n}\frac{1}{p_{j,j+1,\ldots,j+k-1}(c)}\right)^{2},\] where the sum is over all critical points \(c\) of \(\mathcal{S}_{k,n}^{(1)}\), and where \(\det^{\prime}\Phi\) is the so-called reduced Hessian determinant (see [21, Equation 2.4] for details). For example, \[m_{4}^{(2)}=\frac{1}{s_{12}}+\frac{1}{s_{23}},\] \[m_{5}^{(2)}=\frac{1}{s_{12}s_{34}}+\frac{1}{s_{23}s_{45}}+\frac{1}{s_{34}s_{1 5}}+\frac{1}{s_{12}s_{45}}+\frac{1}{s_{23}s_{15}},\] and \[\begin{array}{ll}m_{6}^{(2)}=&\frac{1}{s_{12}s_{34}s_{56}}+\frac{1}{s_{12}s_ {56}s_{123}}+\frac{1}{s_{23}s_{56}s_{123}}+\frac{1}{s_{23}s_{56}s_{234}}+\frac {1}{s_{34}s_{56}s_{234}}+\frac{1}{s_{16}s_{23}s_{45}}+\frac{1}{s_{12}s_{34}s_{ 345}}\\ &+\frac{1}{s_{12}s_{45}s_{123}}+\frac{1}{s_{12}s_{45}s_{345}}+\frac{1}{s_{16}s_ {23}s_{234}}+\frac{1}{s_{16}s_{34}s_{234}}+\frac{1}{s_{16}s_{34}s_{345}}+\frac {1}{s_{16}s_{45}s_{345}}+\frac{1}{s_{23}s_{45}s_{123}}.\end{array} \tag{10.5}\] In general, Cachazo-He-Yuan [25] introduced a compact formula for biadjoint scalar amplitudes (as well as amplitudes for many other Quantum Field Theories). 
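For \(k=2\) the amplitudes \(m_{n}^{(2)}\) above are sums over planar trivalent trees, i.e., over triangulations of an \(n\)-gon, with one factor \(1/s_{I}\) for each diagonal. The following Python sketch (ours; the label conventions are chosen only for readability) enumerates the triangulations and reproduces \(m_{4}^{(2)}\), \(m_{5}^{(2)}\), and the \(14\) terms of \(m_{6}^{(2)}\) in (10.5), up to relabeling poles by momentum conservation (e.g. \(s_{156}=s_{234}\)).

```python
def triangulations(vertices):
    """All triangulations of the convex polygon on the given ordered vertex list;
    each triangulation is returned as a set of diagonals (a, b) with a < b."""
    if len(vertices) < 4:
        return [set()]
    result = []
    a, b = vertices[0], vertices[-1]            # fixed boundary edge (a, b)
    for i in range(1, len(vertices) - 1):       # apex of the triangle on (a, b)
        c = vertices[i]
        for L in triangulations(vertices[: i + 1]):
            for R in triangulations(vertices[i:]):
                diag = set()
                if i > 1:
                    diag.add((a, c))
                if i < len(vertices) - 2:
                    diag.add((c, b))
                result.append(L | R | diag)
    return result

def pole(a, b, n):
    """Planar Mandelstam variable cut by the diagonal between vertices a < b:
    it involves the consecutive particles a, ..., b-1; we print the shorter of
    the two labels related by momentum conservation."""
    I = list(range(a, b))
    J = sorted(set(range(1, n + 1)) - set(I))
    lab = I if (len(I), I) <= (len(J), J) else J
    return "s" + "".join(map(str, lab))

def m2(n):
    terms = ["1/(" + " ".join(sorted(pole(a, b, n) for a, b in T)) + ")"
             for T in triangulations(list(range(1, n + 1)))]
    return " + ".join(sorted(terms))

print(m2(4))   # 2 terms: 1/(s12) + 1/(s23)
print(m2(5))   # 5 terms, e.g. 1/(s12 s34) + ...
print(m2(6))   # 14 terms, matching (10.5) up to momentum conservation
```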
The \(k\geq 3\) analog was discovered by Cachazo-Early-Guevara-Mizera (CEGM); these amplitudes have been the subject of intensive study since their introduction [21]. A second expression for the leading order in the expansion around \(\alpha^{\prime}\to 0\) is a compact expression involving piecewise-linear functions,

\[\lim_{\alpha^{\prime}\to 0}\mathbf{I}_{k,n}^{(\infty)}=\int_{\mathbb{T}^{k-1,n-k}}\exp\left(-\sum_{T\in\mathrm{PSSYT}_{k,n}}\mathfrak{s}_{T}\mathrm{ch}_{T}^{\mathrm{Trop}}(y_{i,j})\right)dy_{i,j},\]

where \(\mathrm{ch}_{T}^{\mathrm{Trop}}(y_{i,j})\) is the usual tropicalization of the character polynomial \(\mathrm{ch}_{T}(x_{i,j})\), and where \(\mathbb{T}^{k-1,n-k}=(\mathbb{T}^{n-k})^{\times(k-1)}\) and \(\mathbb{T}^{n-k}=\mathbb{R}^{n-k}/\mathbb{R}(1,\ldots,1)\). The fundamental tropical integral of this form was defined first in [22], called there the _global Schwinger parametrization_ of Feynman diagrams. In this work we propose a generalization of the integrand which includes all prime elements in Lusztig's dual canonical basis, as parameterized by prime tableaux. Clearly there are many questions about this integral which we leave to future work. One of these is highly nontrivial:

* To evaluate the tropical limit, one has to either compute an infinite Minkowski sum, or else find a way to evaluate the CEGM formula for a scattering potential that involves an infinite summation indexed by prime tableaux.

### \(u\)-equations and \(u\)-variables

In what follows, building on [4, Section 6.2], we propose a system of so-called \(u\)-variables for the (infinite) Grassmannian string integral \(\mathbf{I}_{k,n}^{(\infty)}\) defined in Equation (10.3). The motivation is to make the integrand manifestly compatible with the singularities of the function obtained by taking the \(\alpha^{\prime}\to 0\) limit. To construct the infinite integrand is an important problem [6, Section 12.3]. The second step, to characterize the binary relations among the \(u\)-variables, will be considered in future work. The new integrand is reorganized as a product of cross-ratios \(u_{T}\) on the Grassmannian \(\mathrm{Gr}(k,n)\), the so-called \(u\)-variables, one for each prime tableau \(T\). The \(u\)-variables [5] have been defined for finite-type cluster algebras arising from \(\mathrm{Gr}(2,n)\) [6] (see also the original work of Koba-Nielsen [76] in the physics literature), but for general Grassmannians \(\mathrm{Gr}(k,n)\) the cluster algebras are of infinite type and new methods are required [4]. The main idea of our solution is to construct \(u\)-variables for Grassmannian string integrals as ratios of characters \(\mathrm{ch}_{T}\) of prime tableaux \(T\) (equivalently, as ratios of \(q\)-characters of prime modules of the quantum affine algebra \(U_{q}(\widehat{\mathfrak{sl}_{k}})\)). In this way, we arrive at a proposal for an integrand given by an infinite product of \(u\)-variables satisfying certain binary-type identities, as has been explored in the finite type case in [5]. We first formulate our proposal, and then we relabel the \(u\)-variables and \(u\)-equations for \(\mathbf{I}_{3,6}^{(2)}\) (in our notation) from [4] using prime tableaux.
**Definition 10.4**.: For \(k\leq n\), we define \[\mathbf{I}_{k,n}^{(\infty)} = (\alpha^{\prime})^{a}\int_{\left(\mathbb{R}_{>0}^{n-k-1}\right)^ {\times(k-1)}}\prod_{i,j}\frac{dx_{i,j}}{x_{i,j}}\prod_{T\in\mathrm{PSSYT}_{k, n}}(u_{T})^{\alpha^{\prime}U_{T}}, \tag{10.6}\] where \(\alpha^{\prime}\), \(U_{T}\) are some parameters, and \(u_{T}\) is the \(u\)-variable corresponding to a prime tableau \(T\) which is defined in (10.7). Jensen, King, and Su [70] introduced an additive categorification of Grassmannian cluster algebras using a category \(\mathrm{CM}(B_{k,n})\) of Cohen-Macaulay \(B_{k,n}\)-modules, where \(B_{k,n}\) is a certain quotient of the complete path algebra of a certain quiver. According to their result, there is a one to one correspondence between cluster variables \(\mathbb{C}[\mathrm{Gr}(k,n)]\) and reachable (meaning that the module can be obtained by mutations) rigid indecomposable modules in \(\mathrm{CM}(B_{k,n})\). On the other hand, cluster variables of \(\mathbb{C}[\mathrm{Gr}(k,n)]\) are in one to one correspondence with reachable (meaning that the tableau can be obtained by mutations) prime real tableaux in \(\mathrm{SSYT}(k,[n])\). Therefore there is a one to one correspondence between reachable rigid indecomposable modules in \(\mathrm{CM}(B_{k,n})\) and reachable prime real tableaux in \(\mathrm{SSYT}(k,[n])\). In particular, in finite type cases, there is a one to one correspondence between indecomposable modules (in finite type, all indecomposable modules are rigid) in \(\mathrm{CM}(B_{k,n})\) and prime tableaux in \(\mathrm{SSYT}(k,[n])\) (in finite type, all prime tableau are real). We conjecture that in general, there is a one to one correspondence between indecomposable modules in \(\mathrm{CM}(B_{k,n})\) and prime tableaux in \(\mathrm{SSYT}(k,[n])\). Denote by \(M_{T}\) the indecomposable module in \(\mathrm{CM}(B_{k,n})\) corresponding to a prime tableau \(T\). We can label the Auslander-Reiten quiver [10, 70] by prime tableaux instead of indecomposable modules, see Figure 4. **Definition 10.5**.: For every mesh in the Auslander-Reiten quiver of \(\mathrm{CM}(B_{k,n})\), we define the corresponding \(u\)-variable as \[u_{S}=\frac{\prod_{i=1}^{r}\mathrm{ch}_{T_{i}}}{\mathrm{ch}_{S}\mathrm{ch}_{S^ {\prime}}}. \tag{10.7}\] Here we label the \(u\)-variables by semistandard Young tableaux rather than noncrossing tuples. The mesh can be degenerate. For example, in Figure 4, \(u_{126}=\frac{p_{136}}{p_{126}}\). **Conjecture 10.6**.: _There are unique integers \(a_{T,T^{\prime}}\), where \(T\), \(T^{\prime}\) are prime tableaux, such that \(u\)-variables (10.7) are solutions of the system of equations_ \[u_{T}+\prod_{T^{\prime}\in\operatorname{PSSYT}_{k,n}}u_{T^{\prime}}^{a_{T,T^{ \prime}}}=1.\] The equations in Conjecture 10.6 are called \(u\)-equations. **Remark 10.7**.: General \(u\)-equations have been introduced in [2, 3] in the setting of representations of quiver with relations and cluster categories of finite type. In our paper, we work in the setting of the Grassmannian cluster category \(\operatorname{CM}(B_{k,n})\)[70]. We expect that \(a_{T,T^{\prime}}\) is the compatibility degree defined in [54] when tableaux \(T\), \(T^{\prime}\) are cluster variables. We give an example to explain Conjecture 10.6. 
**Example 10.8**.: In the case of \(\mathbb{C}[\operatorname{Gr}(3,6)]\), the \(u\)-equations are \[u_{124}+u_{135}u_{136}u_{235}u_{236}u_{356}u_{135,246}=1,\] \[u_{125}+u_{136}u_{346}u_{135,246}u_{246}u_{236}u_{146}=1,\] \[u_{135}+u_{135,246}u_{246}^{2}u_{124,356}u_{245}u_{346}u_{236}u_ {146}u_{256}u_{124}=1,\] \[u_{124,356}+u_{135}u_{136}u_{145}u_{146}u_{235}u_{236}u_{245}u_{2 46}u_{135,246}^{2}=1,\] and their cyclic shifts. Note that the cyclic shifts of the indices of all Plucker coordinates in \(\operatorname{ch}_{T}\) corresponds to promotions of \(T\), [93]. The solutions of the \(u\)-equations can be read from the Auslander-Reiten quiver. We have \[u_{126}=\frac{p_{136}}{p_{126}},\ u_{345}=\frac{p_{346}}{p_{345}},\ u_{125}= \frac{p_{126}p_{135}}{p_{125}p_{136}},\ u_{136}=\frac{\operatorname{ch}_{135,24 6}}{p_{136}p_{245}},\ u_{245}=\frac{p_{345}p_{246}}{p_{245}p_{346}},\] \[u_{346}=\frac{\operatorname{ch}_{124,356}}{p_{346}p_{125}},\ u_{124,356}= \frac{p_{125}p_{134}p_{356}}{\operatorname{ch}_{124,356}p_{135}},\ u_{134}= \frac{p_{135}p_{234}}{p_{134}p_{235}},\ u_{135}=\frac{p_{136}p_{145}p_{235}}{ p_{135}\operatorname{ch}_{135,246}},\] \[u_{235}=\frac{\operatorname{ch}_{135,246}}{p_{235}p_{146}},\ u_{135,246}= \frac{p_{146}p_{245}p_{236}}{\operatorname{ch}_{135,246}p_{246}},\ u_{146}= \frac{p_{246}p_{156}}{p_{146}p_{256}},\ u_{246}=\frac{p_{346}p_{256}p_{124}}{ p_{246}\operatorname{ch}_{124,356}},\] \[u_{256}=\frac{\operatorname{ch}_{124,356}}{p_{256}p_{134}},\ u_{234}=\frac{p_{2 35}}{p_{234}},\ u_{156}=\frac{p_{256}}{p_{156}},\ u_{356}=\frac{p_{135}p_{456}}{ p_{356}p_{145}},\ u_{145}=\frac{\operatorname{ch}_{135,246}}{p_{145}p_{236}},\] \[u_{236}=\frac{p_{246}p_{123}}{p_{236}p_{124}},\ u_{124}=\frac{\operatorname{ ch}_{124,356}}{p_{124}p_{356}},\ u_{456}=\frac{p_{145}}{p_{456}},\ u_{123}=\frac{p_{124}}{p_{123}},\] where we use \(\mathrm{ch}_{T_{1},\ldots,T_{r}}\) to denote \(\mathrm{ch}_{T}\), and \(T_{i}\)'s are columns of \(T\). Here \(\mathrm{ch}_{124,356}=p_{124}p_{356}-p_{123}p_{456}\), and \(\mathrm{ch}_{135,246}=p_{145}p_{236}-p_{123}p_{456}\). We checked that the \(u\)-variables satisfy \(u\)-equations directly. The solution agrees with Section 9.3 of [4]. The same computations can be done for other finite type cases. The Auslander-Reiten quivers for Grassmannian cluster categories \(\mathrm{CM}(B_{k,n})\) of finite type have been computed in [70] (the vertices are labelled by Cohen-Macaulay modules in \(\mathrm{CM}(B_{k,n})\)) and [38] (the vertices are labelled by tableaux). When \(\mathbb{C}[\mathrm{Gr}(k,n)]\) is of infinite type, the Auslander-Reiten quiver of the Grassmannian cluster category \(\mathrm{CM}(B_{k,n})\) has infinitely many components. We will study Conjecture 10.6 about the \(u\)-equations and their solutions in the future. ### Stringy Integrals For Quantum Affine Algebras We generalize the stringy integrals in Section 10.1 to the setting for any quantum affine algebra as follows. Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and \(\ell\geq 0\). Recall that \(\hat{I}=\{(i,s):i\in I,s=\xi(i)-2d_{i}r,r\in[0,\ell]\}\), where \(\xi:I\to\mathbb{Z}\) is a chosen height function, see Section 2. Figure 4. The Auslander-Reiten quiver for \(\mathrm{CM}(B_{3,6})\) with vertices labelled by tableaux. 
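As a quick sanity check of Example 10.8, the following Python sketch evaluates the listed \(u\)-variables on the Plücker coordinates of a random real \(3\times 6\) matrix (our own illustration; the function and variable names are ad hoc) and verifies the first \(u\)-equation, \(u_{124}+u_{135}u_{136}u_{235}u_{236}u_{356}u_{135,246}=1\), up to rounding error.

```python
import random

random.seed(1)
# generic real 3x6 matrix; generic entries keep all minors nonzero
M = [[random.uniform(1, 2) for _ in range(6)] for _ in range(3)]

def p(*cols):
    """Plucker coordinate: 3x3 minor of M on the given columns (1-based)."""
    a, b, c = (j - 1 for j in cols)
    return (M[0][a] * (M[1][b] * M[2][c] - M[1][c] * M[2][b])
            - M[0][b] * (M[1][a] * M[2][c] - M[1][c] * M[2][a])
            + M[0][c] * (M[1][a] * M[2][b] - M[1][b] * M[2][a]))

ch_124_356 = p(1, 2, 4) * p(3, 5, 6) - p(1, 2, 3) * p(4, 5, 6)
ch_135_246 = p(1, 4, 5) * p(2, 3, 6) - p(1, 2, 3) * p(4, 5, 6)

# the u-variables of Example 10.8 entering the first u-equation
u_124     = ch_124_356 / (p(1, 2, 4) * p(3, 5, 6))
u_135     = p(1, 3, 6) * p(1, 4, 5) * p(2, 3, 5) / (p(1, 3, 5) * ch_135_246)
u_136     = ch_135_246 / (p(1, 3, 6) * p(2, 4, 5))
u_235     = ch_135_246 / (p(2, 3, 5) * p(1, 4, 6))
u_236     = p(2, 4, 6) * p(1, 2, 3) / (p(2, 3, 6) * p(1, 2, 4))
u_356     = p(1, 3, 5) * p(4, 5, 6) / (p(3, 5, 6) * p(1, 4, 5))
u_135_246 = p(1, 4, 6) * p(2, 4, 5) * p(2, 3, 6) / (ch_135_246 * p(2, 4, 6))

lhs = u_124 + u_135 * u_136 * u_235 * u_236 * u_356 * u_135_246
print(round(lhs, 12))   # prints 1.0 up to rounding error
```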
**Definition 10.9**.: For every simple Lie algebra \(\mathfrak{g}\) over \(\mathbb{C}\), \(\ell\geq 1\), and \(d\geq 1\), we define \[\mathbf{I}_{\mathfrak{g},\ell}^{(d)} = (\alpha^{\prime})^{|\hat{I}|}\int_{\mathbb{R}_{>0}^{|\hat{I}|}} \left(\prod_{(i,s)\in\hat{I}}\frac{dY_{i,s}}{Y_{i,s}}\right)\left(\prod_{M} \chi_{q}(L(M))^{-\alpha^{\prime}c_{M}}\right), \tag{10.8}\] where the product is over all dominant monomials \(M\) such that the modules \(L(M)\) correspond to facets of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d-1)}\), and \(\alpha^{\prime}\), \(c_{M}\) are some parameters. We also define another version of stringy integrals for quantum affine algebras. **Definition 10.10**.: For every simple Lie algebra \(\mathfrak{g}\) over \(\mathbb{C}\), \(\ell\geq 1\), and \(d\geq 1\), we define \[\mathbf{I}_{\mathfrak{g},\ell}^{\prime} = (\alpha^{\prime})^{|\hat{I}|}\int_{\mathbb{R}_{>0}^{|\hat{I}|}} \left(\prod_{(i,s)\in\hat{I}}\frac{dY_{i,s}}{Y_{i,s}}\right)\left(\prod_{M} \chi_{q}(L(M))^{-\alpha^{\prime}c_{M}}\right), \tag{10.9}\] where the product is over all dominant monomials \(M\) in \(\mathcal{P}_{\ell}^{+}\) of degree less or equal to \(d\), and \(\alpha^{\prime}\), \(c_{M}\) are some parameters. We also define stringy integrals for quantum affine algebras in the case when \(d\to\infty\) as follows. **Definition 10.11**.: For every simple Lie algebra \(\mathfrak{g}\) over \(\mathbb{C}\) and \(\ell\geq 1\), we define \[\mathbf{I}_{\mathfrak{g},\ell}^{(\infty)} = (\alpha^{\prime})^{|\hat{I}|}\int_{\mathbb{R}_{>0}^{|\hat{I}|}} \left(\prod_{(i,s)\in\hat{I}}\frac{dY_{i,s}}{Y_{i,s}}\right)\left(\prod_{M} \chi_{q}(L(M))^{-\alpha^{\prime}c_{M}}\right), \tag{10.10}\] where the product is over all dominant monomials \(M\) such that \(L(M)\)'s are prime modules in \(\mathcal{C}_{\ell}\), and \(\alpha^{\prime}\), \(c_{M}\) are some parameters. We hope that the stringy integrals for quantum affine algebras will have applications to physics. ## 11. Limit \(g\)-vectors, Limit Facets, and Prime Non-real Modules In this section, we study prime non-real modules of quantum affine algebras using limit \(g\)-vectors. ### Limit \(g\)-vectors and Limit Facets It is observed in [34, 64] that some prime non-real elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\) which can be computed using limit \(g\)-vectors (these limit \(g\)-vectors are called limit rays in [34, 64]). We generalize the concept of limit \(g\)-vectors to any cluster algebra in the following. For a vector \(v=(v_{1},\ldots,v_{m})\) in \(\mathbb{R}^{m}\), denote its \(l^{2}\)-norm by \(\|v\|=\sqrt{\sum_{i=1}^{m}|v_{i}|^{2}}\). **Definition 11.1**.: For a cluster algebra \(\mathcal{A}\) of infinite type of rank \(m\), we say that a sequence of \(g\)-vectors \(g_{1},g_{2},\ldots\) of \(\mathcal{A}\) has a limit \(g\) if the greatest common factor of entries of \(g\) is \(1\) and for every \(\epsilon>0\), there is a positive integer \(N\) such that for every \(j\geq N\), there is some positive real number \(c_{j}\) such that \(\|c_{j}g-g_{j}\|<\epsilon\). **Definition 11.2**.: Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and let \(\ell\in\mathbb{Z}_{\geq 1}\), \(d\in\mathbb{Z}_{\geq 0}\). 
We say that a facet of a Newton polytope \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) defined in Section 9 for a quantum affine algebra is a limit facet if the facet corresponds to a module whose \(g\)-vector is a limit \(g\)-vector of a sequence of \(g\)-vectors of modules obtained by a sequence of mutations of the corresponding cluster algebra. We conjecture that every simple module corresponding to a limit \(g\)-vector is prime non-real. **Conjecture 11.3**.: _Let \(\mathfrak{g}\) be a simple Lie algebra over \(\mathbb{C}\) and let \(\ell\in\mathbb{Z}_{\geq 1}\). If a simple module \(L(M)\) in \(\mathcal{C}_{\ell}\) corresponds to a limit \(g\)-vector of the cluster algebra corresponding to \(\mathcal{C}_{\ell}\), then \(L(M)\) is prime non-real._ Conjecture 11.3 can be generalized to a more general setting. Suppose that \(\mathcal{A}\) is a cluster algebra and it has a linear basis \(B\) and every element \(b\) in \(B\) corresponds to a unique \(g\)-vector \(g_{b}\) of \(\mathcal{A}\). We say that an element \(b\) in \(B\) is real if \(b^{2}\in B\). We say that \(b\in B\) is prime if \(b\neq b^{\prime}b^{\prime\prime}\) for any non-trivial elements \(b^{\prime},b^{\prime\prime}\in B\). **Conjecture 11.4**.: _Let \(\mathcal{A}\) be a cluster algebra and suppose that it has a linear basis \(B\) and every element \(b\) in \(B\) corresponds to a unique \(g\)-vector \(g_{b}\) of \(\mathcal{A}\). For any \(b\in B\), if the \(g\)-vector \(g_{b}\) corresponding to \(b\) is a limit \(g\)-vector of the cluster algebra \(\mathcal{A}\), then \(b\) is prime non-real._ ### An example of limit \(g\)-vector In the case of Grassmannian cluster algebras, in Section 7 of [23], it is shown that given any tableau, one can recover its \(g\)-vector as follows. Any tableau \(T\in\operatorname{SSYT}(k,[n])\) can be written uniquely as \(S_{1}^{e_{1}}\cup\cdots\cup S_{m}^{e_{m}}\) for some integers \(e_{1},\ldots,e_{m}\in\mathbb{Z}\), where \(S_{1},\ldots,S_{m}\) are the tableaux in the initial cluster (we choose an order of the initial cluster variables). The vector \((e_{1},\ldots,e_{m})\) is the \(g\)-vector of \(T\). We explain an example of limit \(g\)-vector in the case of \(\mathbb{C}[\operatorname{Gr}(4,8)]\). We fix the order \[[1,2,3,5],[1,2,4,5],[1,3,4,5],[1,2,3,6],[1,2,5,6],[1,4,5,6],\] \[[1,2,3,7],[1,2,6,7],[1,5,6,7],[1,2,3,4],[2,3,4,5],[3,4,5,6],\] \[[4,5,6,7],[5,6,7,8],[1,2,3,8],[1,2,7,8],[1,6,7,8]\] of the initial cluster, where each list corresponds to a Plucker coordinate. 
Mutate alternately at the vertices indicated in Figure 5. We obtain the following \(g\)-vectors:

\[(-1,1,0,1,-1,0,0,0,0,0,0,1,0,0,0,1,0),\]
\[(-2,2,0,2,-1,-1,0,-1,1,0,0,2,0,0,0,2,0),\]
\[(-3,3,0,3,-1,-2,0,-2,2,0,0,3,0,0,0,3,0),\]
\[(-4,4,0,4,-1,-3,0,-3,3,0,0,4,0,0,0,4,0),\ldots\]

When the mutation step \(r\) is large enough, the \(g\)-vector we obtain is

\[(-r,r,0,r,-1,-r+1,0,-r+1,r-1,0,0,r,0,0,0,r,0).\]

The limit \(g\)-vector of the sequence is

\[(-1,1,0,1,0,-1,0,-1,1,0,0,1,0,0,0,1,0).\]

This limit \(g\)-vector corresponds to the prime non-real tableau in \(\operatorname{SSYT}(4,[8])\) whose columns are \(\{1,2,4,6\}\) and \(\{3,5,7,8\}\).
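In the spirit of Definition 11.1, suitable rescalings of these \(g\)-vectors approach the limit vector. The following small Python sketch (ours) simply compares unit vectors and shows that the deviation decreases like \(1/r\) along the mutation sequence.

```python
from math import sqrt

limit = (-1, 1, 0, 1, 0, -1, 0, -1, 1, 0, 0, 1, 0, 0, 0, 1, 0)

def g(r):
    """g-vector after r steps of the alternating mutation sequence (r >= 3)."""
    return (-r, r, 0, r, -1, -r + 1, 0, -r + 1, r - 1, 0, 0, r, 0, 0, 0, r, 0)

def unit(v):
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

u_lim = unit(limit)
for r in (3, 10, 100, 1000):
    dist = sqrt(sum((a - b) ** 2 for a, b in zip(unit(g(r)), u_lim)))
    print(r, dist)   # the distance between directions decreases roughly like 1/r
```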
### Limit \(g\)-vectors in \(\mathbb{C}[\operatorname{Gr}(3,9)]\) and \(\mathbb{C}[\operatorname{Gr}(4,8)]\)

Recall that we say that a tableau in \(\operatorname{SSYT}(k,[n])\) has rank \(r\) if the tableau has \(r\) columns, and that an element in \(\mathbb{C}[\operatorname{Gr}(k,n)]\) has rank \(r\) if the tableau corresponding to it has \(r\) columns. We say that a \(g\)-vector has rank \(r\) if the tableau corresponding to it has rank \(r\).

We compute cluster variables and limit \(g\)-vectors for \(\mathbb{C}[\operatorname{Gr}(k,n)]\) in terms of tableaux up to a certain number of columns. That is, at each step of mutation, if we obtain some tableau with more than some fixed number \(m\) of columns, we mutate this vertex again. In this way, we collect only tableaux (cluster variables) with at most \(m\) columns. We compute limit \(g\)-vectors for \(\mathbb{C}[\operatorname{Gr}(3,9)]\) and \(\mathbb{C}[\operatorname{Gr}(4,8)]\) up to rank \(56\) (the corresponding tableaux have at most \(56\) columns).

The sequence of numbers of rank \(r\) (\(r\geq 1\)) limit \(g\)-vectors for \(\mathbb{C}[\operatorname{Gr}(3,9)]\) (conjecturally, prime non-real tableaux in \(\operatorname{SSYT}(3,[9])\)) is

\[0,0,3,0,0,3,0,0,6,0,0,6,0,0,12,0,0,6,0,0,18,0,0,12,0,0,12,0,0,12,\]
\[0,0,18,0,0,12,0,0,30,0,0,12,0,0,36,0,0,18,0,0,24,0,0,24,\ldots\]

The sequence of numbers of rank \(r\) (\(r\geq 1\)) limit \(g\)-vectors for \(\mathbb{C}[\operatorname{Gr}(4,8)]\) (conjecturally, prime non-real tableaux in \(\operatorname{SSYT}(4,[8])\)) is

\[0,2,0,2,0,4,0,4,0,8,0,4,0,12,0,8,0,12,0,8,0,12,0,8,0,\]
\[20,0,8,0,24,0,12,0,16,0,16,0,32,0,12,0,36,0,16,0,24,0,20,0,44,\ldots\]

Based on the computations, we have the following interesting conjecture. Denote by \(\phi(m)\) the Euler totient function, which counts the positive integers less than or equal to \(m\) and coprime to \(m\).

**Conjecture 11.5**.: _The number of rank \(r\) (\(r\geq 1\)) prime non-real tableaux in \(\operatorname{SSYT}(3,[9])\) is_

\[f_{3,9,r}=\begin{cases}0,&r\pmod{3}=i,\ i\in\{1,2\},\\ 3\phi(r/3),&r\pmod{3}=0.\end{cases}\]

_The number of rank \(r\) (\(r\geq 1\)) prime non-real tableaux in \(\operatorname{SSYT}(4,[8])\) is_

\[f_{4,8,r}=\begin{cases}0,&r\pmod{2}=1,\\ 2\phi(r/2),&r\pmod{2}=0.\end{cases}\]

The first values of these formulas are tabulated in the short sketch below.

### Limit \(g\)-vectors for \(\operatorname{Gr}(4,9)\)

Using the algorithm in Theorem 1.1 in [18], we find that there are \(18\) two-column prime non-real tableaux and \(252\) three-column prime non-real tableaux in \(\operatorname{SSYT}(4,[9])\). We also checked by computer that they all correspond to limit \(g\)-vectors. The \(18\) two-column prime non-real tableaux are obtained from the two 2-column prime non-real tableaux in \(\operatorname{SSYT}(4,[8])\) by replacing \(1<2<\cdots<8\) by \(a_{1}<\cdots<a_{8}\) (\(a_{i}\in[9]\)). Up to promotion, the \(252\) three-column prime non-real tableaux in \(\operatorname{SSYT}(4,[9])\) can be listed explicitly.

We also give an example of a limit \(g\)-vector for a quantum affine algebra. Consider the category \(\mathcal{C}_{2}\) in type \(D_{4}\) and the initial cluster of \(K_{0}(\mathcal{C}_{2}^{D_{4}})\) shown on the left of Figure 6; we first mutate at the vertices \(1,6,8\) and obtain the quiver on the right of Figure 6. Now we mutate at the vertices \(3,4\) alternately and obtain the \(g\)-vectors:

\[(0,0,1,0,0,0,0,0,0,0,0,0),\quad(0,0,0,1,0,0,0,0,0,0,0,0),\]
\[(-1,1,2,0,0,-1,0,-1,0,0,1,1),\quad(-1,1,3,-1,0,-1,0,-1,0,1,1,1),\]
\[(-1,1,4,-2,0,-1,0,-1,0,2,1,1),\quad(-1,1,r+2,-r,0,-1,0,-1,0,r,1,1),\quad r\geq 3.\]

The limit \(g\)-vector for this mutation sequence is \((0,0,1,-1,0,0,0,0,0,1,0,0)\). This is the \(g\)-vector of the module \(L(Y_{2,-4}Y_{2,0})\). We will verify below that the module \(L(Y_{2,-4}Y_{2,0})\) in type \(D_{4}\) is prime non-real by using \((q,t)\)-characters [87, 88] (see also [66, 14]).
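The closed formulas of Conjecture 11.5 are easy to tabulate. The following short Python sketch (ours) prints the conjectural counts for small ranks; they agree with the initial terms of the sequences computed above.

```python
from math import gcd

def phi(m):
    """Euler's totient function."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def f_3_9(r):
    return 3 * phi(r // 3) if r % 3 == 0 else 0

def f_4_8(r):
    return 2 * phi(r // 2) if r % 2 == 0 else 0

print([f_3_9(r) for r in range(1, 25)])   # 0, 0, 3, 0, 0, 3, 0, 0, 6, ...
print([f_4_8(r) for r in range(1, 21)])   # 0, 2, 0, 2, 0, 4, ...
```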
Note that the module \(L(Y_{2,-4}Y_{2,0})\) in type \(A_{n}\) (\(n\geq 2\)) is real (it is a snake module [37]). We first recall the results of [87, 88] about \((q,t)\)-characters. ### \((q,t)\)-characters Let \(C(z)\) be the quantum Cartan matrix of \(\mathfrak{g}\)[50] and let \(\widetilde{C}(z)=(\widetilde{C}_{ij}(z))\) be the inverse of \(C(z)\), see Section 2. The entries of \((\widetilde{C}_{ij}(z))\) have power series expressions in \(z\) of the form [66]\(\widetilde{C}_{ij}(z)=\sum_{m\geq 1}\widetilde{C}_{ij}(m)z^{m}\). Nakajima [87, 88] introduced \((q,t)\)-characters of \(U_{q}(\widehat{\mathfrak{g}})\)-modules which are \(t\)-deformations of \(q\)-characters. Let \(K_{t}(\mathcal{C}_{\ell})\) be the \(t\)-deformation of the Grothendieck ring \(K_{0}(\mathcal{C}_{\ell})\)[66, 14]. Let \(\hat{I}\) be the set of vertices of the initial quiver of the cluster algebra \(K_{0}(\mathcal{C}_{\ell})\). Denote by \(\mathbf{Y}_{t}\) the \(\mathbb{Z}[t^{\pm 1}]\)-algebra generated by \(Y_{i,p}^{\pm 1}\), \((i,p)\in\hat{I}\), subject to the relations ([87, 88], see also [14, 66]): \[Y_{i,p}*Y_{j,s}=t^{N(i,p;j,s)}Y_{j,s}*Y_{i,p},\] Figure 6. The left hand side is an initial cluster of \(K_{0}(\mathcal{C}_{2}^{D_{4}})\). The numbers in the brackets are labels of the vertices. The right hand side is the quiver obtained from the initial quiver by mutating at the vertices \(1,6,8\). The arrow from vertex \(3\) to \(4\) is a double arrow. where we use Nakajima's convention [87, 88] (in type ADE, \(d_{i}=1\)): \[N(i,p;j,s)=2(\widetilde{C}_{ij}(s-p-d_{i})-\widetilde{C}_{ij}(p-s-d_{i})).\] For any family \(\{u_{i,p}\in\mathbb{Z}:(i,p)\in\hat{I}\}\), denote \[\prod_{(i,p)\in\hat{I}}Y_{i,p}^{u_{i,p}}=t^{-\frac{1}{2}\sum_{(i,p)<(j,s)}u_{i, p}u_{j,s}N(i,p;j,s)}\overrightarrow{\star}_{(i,p)\in\hat{I}}Y_{i,p}^{u_{i,p}}.\] The expression on the right hand side of the above equation does not depend on the order of \(Y_{i,p}\)'s and so \(\prod_{(i,p)\in\hat{I}}Y_{i,p}^{u_{i,p}}\) is well-defined. The monomial \(\prod_{(i,p)\in\hat{I}}Y_{i,p}^{u_{i,p}}\) is called a commutative monomial [87, 66]. For a dominant monomial \(m=\prod_{(i,p)\in\hat{I}}Y_{i,p}^{u_{i,p}(m)}\), ([87, 88], see also [66, 14]) the standard module \(M(m)\) is the tensor product of the fundamental modules corresponding to each of the factors in \(m\) in a particular order. In this paper, we choose the order as \(Y_{i,s}<Y_{j,t}\) if and only if \(s<t\). The truncated \((q,t)\)-character of \(M(m)\) is given by \[[M(m)]_{t}=t^{\alpha(m)}\overrightarrow{\star}_{p\in\mathbb{Z}}\prod_{i\in I} \widetilde{\chi}_{q,t}(L(Y_{i,p}))^{\star u_{i,p}(m)},\] where \(\alpha(m)\) is the integer such that \(m\) occurs with multiplicity one in the expansion of \([M(m)]_{t}\) on the basis of the commutative monomials of \(\mathbf{Y}_{t}\) and the product \(\overrightarrow{\star}_{p\in\mathbb{Z}}\) is taken as increasing order. Since \(\widetilde{\chi}_{q,t}(L(Y_{i,p}))\) and \(\widetilde{\chi}_{q,t}(L(Y_{i,p^{\prime}}))\) commute for any \(p,p^{\prime}\), the above expression is well-defined. In [87, 88], a \(\mathbb{Z}\)-algebra anti-automorphism of \(\mathbf{Y}_{t}\) called bar-involution is defined by: \(t\mapsto t^{-1}\), \(Y_{i,p}\mapsto Y_{i,p}\), \((i,p)\in\hat{I}\). For a simple module \(L(m)\), denote by \([L(m)]_{t}\) its \((q,t)\)-character. 
The following theorem by Nakajima [87, 88] gives an algorithm to compute (truncated) \((q,t)\)-characters of a simple \(U_{q}(\widehat{\mathfrak{g}})\)-module: for every dominant monomial \(m\in\mathcal{P}_{\ell}^{+}\), there is a unique element \([L(m)]_{t}\) of \(K_{t}(\mathcal{C}_{\ell})\) such that * \(\overline{[L(m)]_{t}}=[L(m)]_{t}\), * \([L(m)]_{t}\in[M(m)]_{t}+\sum_{m^{\prime}<m}t^{-1}\mathbb{Z}[t^{-1}][M(m^{ \prime})]_{t}\). This result is generalized to non-simply-laced types in [61, 55]. ### The Type \(D_{4}\) Module \(L(Y_{2,-4}y_{2,0})\) is Prime Non-real The quantum Cartan matrix in type \(D_{4}\) is \(\left(\begin{smallmatrix}\frac{z^{2}+1}{z}&-1&0&0\\ -1&\frac{z^{2}+1}{z}&-1&-1\\ 0&-1&\frac{z^{2}+1}{z}&0\\ 0&-1&0&\frac{z^{2}+1}{z}\end{smallmatrix}\right)\). In the following, we also write \(m_{1}m_{2}^{-1}\) as \(\frac{m_{1}}{m_{2}}\) for two dominant monomials \(m_{1},m_{2}\). By modified Frenkel-Mukhin algorithm [49, 87, 88], we have that \(\widetilde{\chi}_{q,t}(L(Y_{2,0}))=Y_{2,0}\), \[\widetilde{\chi}_{q,t}(Y_{2,-4}) = Y_{2,-4}+\frac{Y_{1,-3}Y_{3,-3}Y_{4,-3}}{Y_{2,-2}}+(t+\frac{1}{t} )\frac{Y_{2,-2}}{Y_{2,0}}+\frac{Y_{1,-1}Y_{1,-3}}{Y_{2,0}}+\frac{Y_{1,-3}Y_{3, -3}}{Y_{4,-1}}+\frac{Y_{4,-3}Y_{1,-3}}{Y_{3,-1}}+\frac{Y_{1,-1}Y_{3,-1}Y_{4,-1} }{Y_{2,0}}\] \[+\frac{Y_{3,-3}Y_{3,-1}}{Y_{2,0}}+\frac{Y_{4,-1}Y_{4,-3}}{Y_{2,0}} +\frac{Y_{4,-3}Y_{3,-3}}{Y_{1,-1}}+\frac{Y_{1,-3}Y_{2,-2}}{Y_{4,-1}Y_{3,-1}}+ \frac{Y_{1,-1}Y_{3,-1}}{Y_{2,0}Y_{4,1}}+\frac{Y_{1,-1}Y_{4,-1}}{Y_{2,0}Y_{3,1}} +\frac{Y_{3,-3}}{Y_{3,1}}+\frac{Y_{4,-3}}{Y_{4,1}}\] \[+\frac{Y_{4,-3}Y_{2,-2}}{Y_{1,-1}Y_{3,-1}}+\frac{Y_{1,-1}}{Y_{4,1} Y_{3,1}}+\frac{Y_{2,-2}}{Y_{4,-1}Y_{4,1}}+\frac{Y_{2,-2}}{Y_{3,-1}Y_{3,1}}+ \frac{Y_{2,-2}}{Y_{1,-1}Y_{3,-1}Y_{4,-1}}+\frac{Y_{2,-2}Y_{3,-3}}{Y_{1,-1}Y_{4,-1}},\] where the monomials on the right hand side are commutative monomials. Therefore, \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}))*\widetilde{\chi}_{q,t}(L(Y_{2,0}))=p_{1}+ tp_{2}+t^{2}p_{3}\), where \[p_{1}=\frac{Y_{1,-1}Y_{3,-1}Y_{4,-1}}{Y_{2,0}}+\frac{Y_{1,-1}Y_{3,-1}}{Y_{4,1} }+\frac{Y_{1,-1}Y_{4,-1}}{Y_{3,1}}+\frac{Y_{2,0}Y_{1,-1}}{Y_{4,1}Y_{3,1}}+Y_{ 2,-2},\] \[p_{2}=Y_{1,-1}Y_{1,-3}+Y_{3,-1}Y_{3,-3}+Y_{4,-1}Y_{4,-3}+\frac{Y_{3,-3}Y_{2,0} }{Y_{3,1}}+\frac{Y_{4,-3}Y_{2,0}}{Y_{4,1}}+\frac{Y_{2,0}Y_{2,-2}}{Y_{4,-1}Y_{4,1}}+\frac{Y_{2,0}Y_{2,-2}}{Y_{3,-1}Y_{3,1}},\] \[p_{3} = \frac{Y_{1,-3}Y_{3,-3}Y_{4,-3}Y_{2,0}}{Y_{2,-2}}+Y_{2,-2}+\frac{Y _{1,-3}Y_{3,-3}Y_{2,0}}{Y_{4,-1}}+\frac{Y_{1,-3}Y_{4,-3}Y_{2,0}}{Y_{3,-1}}+Y_ {2,-4}Y_{2,0}+\frac{Y_{3,-3}Y_{4,-3}Y_{2,0}}{Y_{1,-1}}+\frac{Y_{2,-2}Y_{1,-3} Y_{2,0}}{Y_{3,-1}Y_{4,-1}}\] \[+\frac{Y_{2,-2}Y_{3,-3}Y_{2,0}}{Y_{1,-1}Y_{4,-1}}+\frac{Y_{2,-2}Y _{4,-3}Y_{2,0}}{Y_{1,-1}Y_{3,-1}}+\frac{Y_{2,-2}Y_{2,0}}{Y_{1,-1}Y_{3,-1}Y_{4,-1}}.\] It follows that \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{2,0}))=p_{3}\). Since \(\widetilde{\chi}_{q}(L(Y_{2,-4}Y_{2,0}))\neq\widetilde{\chi}_{q}(L(Y_{2,-4})) \widetilde{\chi}_{q}(L(Y_{2,0}))\), we have that \(L(Y_{2,-4}Y_{2,0})\) is prime. By computing \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{2,0}))*\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{ 2,0}))\), we found that the dominant monomials in \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{2,0}))*\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_ {2,0}))\) are: \[Y_{2,-4}^{2}Y_{2,0}^{2},\ Y_{1,-3}Y_{2,0}Y_{3,-3}Y_{4,-3},\ Y_{2,-2}^{2},\ 2Y_{2,-4}Y_{2,-2}Y_{2,0}, \tag{12.1}\] where \(2Y_{2,-4}Y_{2,-2}Y_{2,0}\) means that the monomial \(Y_{2,-4}Y_{2,-2}Y_{2,0}\) appears two times. 
Therefore in the decomposition \[\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{2,0}))*\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_ {2,0}))=\sum_{i}f_{i}(t)\widetilde{\chi}_{q,t}(L(m_{i})), \tag{12.2}\] where \(f_{i}(t)\) is a polynomial in \(t\), we have that every \(m_{i}\) can only be chosen from the monomials in (12.1). By computing \(\widetilde{\chi}_{q,t}(L(Y_{1,-3}))*\widetilde{\chi}_{q,t}(L(Y_{3,-3}))* \widetilde{\chi}_{q,t}(L(Y_{4,-3}))*\widetilde{\chi}_{q,t}(L(Y_{2,0}))\), we obtain that \[\begin{split}\widetilde{\chi}_{q,t}(L(Y_{1,-3}Y_{2,0}Y_{3,-3}Y_{4, -3}))=Y_{1,-3}Y_{2,0}Y_{3,-3}Y_{4,-3}+(t+\frac{1}{t})Y_{2,-2}{}^{2}+Y_{1,-3}Y_ {1,-1}Y_{2,-2}+\frac{Y_{1,-3}Y_{2,-2}Y_{2,0}Y_{3,-3}}{Y_{4,-1}}\\ +\frac{Y_{1,-3}Y_{2,-2}Y_{2,0}Y_{4,-3}}{Y_{3,-1}}+\frac{Y_{1,-1}Y_ {2,-2}Y_{3,-1}Y_{4,-1}}{Y_{2,0}}+Y_{2,-2}Y_{3,-3}Y_{3,-1}+Y_{2,-2}Y_{4,-3}Y_{4, -1}+\frac{Y_{2,-2}Y_{2,0}Y_{3,-3}Y_{4,-3}}{Y_{1,-1}}\\ +\frac{Y_{1,-3}Y_{2,-2}{}^{2}Y_{2,0}}{Y_{3,-1}Y_{4,-1}}+\frac{Y_ {1,-1}Y_{2,-2}Y_{3,-1}}{Y_{4,1}}+\frac{Y_{1,-1}Y_{2,-2}Y_{4,-1}}{Y_{3,1}}+\frac {Y_{2,-2}Y_{2,0}Y_{3,-3}}{Y_{3,1}}+\frac{Y_{2,-2}Y_{2,0}Y_{4,-3}}{Y_{4,1}}+ \frac{Y_{2,-2}{}^{2}Y_{2,0}Y_{3,-3}}{Y_{1,-1}Y_{4,-1}}\\ +\frac{Y_{2,-2}{}^{2}Y_{2,0}}{Y_{4,-1}Y_{4,1}}+\frac{Y_{2,-2}{}^{ 2}Y_{2,0}}{Y_{3,-1}Y_{3,1}}+\frac{Y_{2,-2}{}^{3}Y_{2,0}}{Y_{1,-1}Y_{3,-1}Y_{4, -1}}+\frac{Y_{2,-2}{}^{2}Y_{2,0}Y_{4,-3}}{Y_{1,-1}Y_{3,-1}}+\frac{Y_{1,-1}Y_{2, -2}Y_{2,0}}{Y_{4,1}Y_{3,1}}.\end{split}\] We checked that there is a monomial \(\frac{Y_{1,-1}Y_{2,-2}Y_{3,-1}}{Y_{4,1}}\) appearing in \(\widetilde{\chi}_{q,t}(L(Y_{1,-3}Y_{2,0}Y_{3,-3}Y_{4,-3}))\) but not appearing in \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_{2,0}))*\widetilde{\chi}_{q,t}(L(Y_{2,-4}Y_ {2,0}))\). Therefore any \(m_{i}\) on the right hand side of (12.2) cannot be \(Y_{1,-3}Y_{2,0}Y_{3,-3}Y_{4,-3}\). Similarly, any \(m_{i}\) on the right hand side of (12.2) cannot be \(Y_{2,-2}^{2}\). Therefore the only possible dominant monomial appearing on the right hand side of (12.2) are \(Y_{2,-4}^{2}Y_{2,0}^{2}\) and \(Y_{2,-4}Y_{2,-2}Y_{2,0}\). By computing \(f=\widetilde{\chi}_{q,t}(L(Y_{2,-4}))*\widetilde{\chi}_{q,t}(L(Y_{2,-4}))* \widetilde{\chi}_{q,t}(L(Y_{2,0}))*\widetilde{\chi}_{q,t}(L(Y_{2,0}))\) and checking the coefficient of \(\frac{1}{t^{8}}f\), we find that the monomial \(Y_{2,-4}Y_{2,-2}Y_{2,0}\) appears in the truncated \((q,t)\)-character of \(L(Y_{2,-4}^{2}Y_{2,0}^{2})\) exactly one time. 
Since the monomial \(Y_{2,-4}Y_{2,-2}Y_{2,0}\) appears two times in \(\widetilde{\chi}_{q,t}(Y_{2,-4}Y_{2,0})*\widetilde{\chi}_{q,t}(Y_{2,-4}Y_{2,0})\), we have that \[\widetilde{\chi}_{q,t}(Y_{2,-4}Y_{2,0})*\widetilde{\chi}_{q,t}(Y_{2,-4}Y_{2,0}) =\widetilde{\chi}_{q,t}(L(Y_{2,-4}^{2}Y_{2,0}^{2}))+\widetilde{\chi}_{q,t}(L( Y_{2,-4}^{2}Y_{2,0}^{2})),\] and \(\widetilde{\chi}_{q,t}(L(Y_{2,-4}^{2}Y_{2,0}^{2}))=p_{1}+(t+\frac{1}{t})p_{2}+(t ^{2}+\frac{1}{t^{2}})p_{3}+(t^{3}+\frac{1}{t^{3}})p_{4}\), where \[\begin{split} p_{1}&=Y_{2,-4}{}^{2}Y_{2,0}{}^{2}+ \frac{Y_{1,-3}{}^{2}Y_{4,-3}{}^{2}Y_{2,0}{}^{2}}{Y_{3,-1}{}^{2}}+2\,\frac{Y_{1, -3}Y_{4,-3}{}^{2}Y_{3,-3}Y_{2,0}{}^{2}}{Y_{1,-1}Y_{3,-1}}+2\,\frac{Y_{1,-3}{} ^{2}Y_{3,-3}Y_{4,-3}Y_{2,0}{}^{2}}{Y_{4,-1}Y_{3,-1}}+\frac{Y_{2,-2}{}^{2}Y_{4, -3}{}^{2}Y_{2,0}{}^{2}}{Y_{1,-1}{}^{2}Y_{3,-1}{}^{2}}\\ &\quad+\frac{Y_{2,-2}{}^{4}Y_{2,0}{}^{2}}{Y_{1,-1}{}^{2}Y_{3,-1 }{}^{2}Y_{4,-1}{}^{2}}+\frac{Y_{2,-2}{}^{2}Y_{3,-3}{}^{2}Y_{2,0}{}^{2}}{Y_{1,- 1}{}^{2}Y_{4,-1}{}^{2}}+\frac{Y_{1,-3}{}^{2}Y_{3,-3}{}^{2}Y_{4,-3}{}^{2}Y_{2,0 }{}^{2}}{Y_{2,-2}{}^{2}}+\frac{Y_{1,-3}{}^{2}Y_{2,-2}{}^{2}Y_{2,0}{}^{2}}{Y_ {3,-1}{}^{2}Y_{4,-1}{}^{2}}+2\,\frac{Y_{2,-2}{}^{2}Y_{1,-3}Y_{3,-3}{}^{2}Y_{2,0 }{}^{2}}{Y_{1,-1}Y_{3,-1}{}^{2}Y_{4,-1}{}^{2}}\\ &\quad+2\,\frac{Y_{2,-2}{}^{2}Y_{1,-3}Y_{4,-3}{}^{2}Y_{2,0}{}^{2}} {Y_{3,-1}{}^{2}Y_{4,-1}{}^{2}}+2\,\frac{Y_{1,-3}{}^{3}Y_{3,-3}{}^{2}Y_{4,-3}{ }^{2}Y_{2,0}{}^{2}}{Y_{1,-1}Y_{4,-1}}+\frac{Y_{1,-3}{}^{2}Y_{3,-3}{}^{2}Y_{2,0 }{}^{2}}{Y_{4,-1}{}^{2}}+\frac{Y_{3,-3}{}^{2}Y_{4,-3}{}^{2}Y_{2,0}{}^{2}}{Y_ {1,-1}{}^{2}}+Y_{2,-2}{}^{2}\\ &\quad+2\,\frac{Y_{3,-3}Y_{4,-3}{}^{2}Y_{2,0}{}^{2}}{Y_{4,-1}Y_{3,-1 }{}^{2}Y_{1,-1}{}^{2}}+Y_{2,-2}Y_{2,-4}Y_{2,0},\end{split}\] \[p_{2} =\frac{Y_{2,-2}{}^{2}Y_{4,-3}Y_{2,0}}{Y_{1,-1}Y_{3,-1}}+\frac{Y_{2,-2} {}^{2}Y_{2,0}}{Y_{1,-1}Y_{3,-1}Y_{4,-1}}+\frac{Y_{2,-2}Y_{4,-3}Y_{2,-4}{Y_{2,0}} ^{2}}{Y_{1,-1}Y_{3,-1}}+\frac{Y_{1,-3}{}^{2}Y_{3,-3}Y_{2,-2}{Y_{2,0}}^{2}}{Y_{4, -1}Y_{3,-1}}+\frac{Y_{3,-3}{}^{2}Y_{4,-3}{Y_{1,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}Y_{2,- 2}}\] \[+\frac{Y_{2,-2}{}^{3}Y_{1,-3}{Y_{2,0}}^{2}}{Y_{1,-1}Y_{3,-1}{Y_{4, -1}}^{2}}+\frac{Y_{2,-2}{}^{3}Y_{4,-3}{Y_{2,0}}^{2}}{Y_{1,-1}{Y_{3,-1}}^{2}Y_{4,-1}}+\frac{Y_{2,-2}{}^{2}{Y_{3,-3}}{Y_{2,-4}}{Y_{2,0}}^{2}}{Y_{4,-1}Y_{4,-1}}+ \frac{Y_{1,-3}{}^{2}{Y_{3,-3}}^{2}Y_{4,-3}{Y_{2,0}}^{2}}{Y_{4,-1}Y_{2,-2}}+ \frac{Y_{2,-2}{}^{3}Y_{3,-3}{Y_{2,0}}^{2}}{Y_{1,-1}{Y_{4,-1}}^{2}Y_{4,-1}}\] \[+\frac{Y_{1,-3}{Y_{3,-3}}^{2}{Y_{2,-2}}{Y_{2,0}}^{2}}{Y_{4,-1}}+ \frac{Y_{2,0}Y_{1,-3}{Y_{2,-2}}{Y_{3,-3}}}{Y_{4,-1}}+\frac{{Y_{2,-2}}^{2}{Y_{1,-3}}{Y_{2,0}}}{Y_{4,-1}Y_{3,-1}}+\frac{{Y_{2,-2}}^{2}{Y_{3,-3}}{Y_{2,0}}}{Y_{ 1,-1}{Y_{4,-1}}}+\frac{{Y_{1,-3}}^{2}{Y_{2,-2}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{3,-1 }{Y_{4,-1}}}\] \[+\frac{Y_{2,-4}{Y_{1,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{3,-1}}+ \frac{Y_{1,-3}{Y_{3,-3}}{Y_{2,-4}}{Y_{2,0}}^{2}}{Y_{4,-1}}+\frac{Y_{2,-4}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}}+Y_{1,-3}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}} +\frac{Y_{1,-3}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{2,-2}}\] \[+\frac{Y_{1,-3}{Y_{3,-3}}{Y_{4,-3}}^{2}{Y_{2,0}}^{2}}{Y_{2,-2}{Y_ {3,-1}}}+\frac{Y_{2,-2}{}^{2}{Y_{2,-4}}{Y_{2,0}}^{2}}{Y_{1,-1}{Y_{3,-1}}{Y_{4, -1}}}+\frac{Y_{2,-4}{Y_{1,-3}}{Y_{2,-2}}{Y_{2,0}}^{2}}{Y_{4,-1}{Y_{3,-1}}}+ \frac{Y_{2,-2}{Y_{4,-3}}^{2}{Y_{3,-3}}{Y_{2,0}}^{2}}{{Y_{1,-1}}^{2}Y_{3,-1}}+ \frac{Y_{2,-2}{Y_{3,-3}}^{2}{Y_{4,-3}}{Y_{2,0}}^{2}}{{Y_{1,-1}}^{2}Y_{4,-1}}\] \[+\frac{Y_{1,-3}{Y_{4,-3}}^{2}{Y_{2,-2}}{Y_{2,0}}^{2}}{Y_{1,-1}{Y_ 
{3,-1}}^{2}}+\frac{Y_{1,-3}{Y_{4,-3}}{Y_{2,-2}}{Y_{2,0}}}{Y_{3,-1}}+\frac{Y_{3,-3}{Y_{4,-3}}{Y_{2,-2}}{Y_{2,0}}}{Y_{1,-1}}+3\,\frac{Y_{1,-3}{Y_{2,-2}}{Y_{3,- 3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}Y_{3,-1}},\] \[p_{3} =\frac{Y_{2,-2}{}^{2}{Y_{1,-3}}{Y_{3,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}{Y _{3,-1}}{Y_{4,-1}}^{2}}+\frac{Y_{2,-2}{}^{2}{Y_{1,-3}}{Y_{4,-3}}{Y_{3,-1}}^{2} {Y_{4,-1}}{Y_{1,-1}}}+\frac{Y_{3,-3}{Y_{4,-3}}{Y_{2,-2}}^{2}{Y_{2,0}}^{2}}{Y_{4,-1}{Y_{3,-1}}^{2}}+\frac{Y_{1,-3}{Y_{4,-3}}{Y_{3,-3}}{Y_{2,0}}^{2}}{Y_{4,-1}Y _{3,-1}}+\frac{Y_{1,-3}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{4,-1}Y_{3,-1}}+ \frac{Y_{1,-3}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{4,-1}Y_{3,-1}}\] \[+\frac{Y_{1,-3}{Y_{3,-3}}^{2}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}{Y _{4,-1}}},\] \[p_{4}=\frac{Y_{1,-3}{Y_{2,-2}}{Y_{3,-3}}{Y_{4,-3}}{Y_{2,0}}^{2}}{Y_{1,-1}{Y _{3,-1}}{Y_{4,-1}}}.\] Therefore the module \(L(Y_{2,-4}Y_{2,0})\) in type \(D_{4}\) is non-real. ## 13. Discussion In this work we make a connection between tropical geometry, representation theory of quantum affine algebras, and scattering amplitudes in physics. In mathematical side, we introduce a sequence of Newton polytopes and in the case of \(U_{q}(\widehat{\mathfrak{sl}_{k}})\), we construct explicitly simple modules from given facets of a Newton polytope. We conjecture that the obtained simple modules are prime modules, see Conjecture 6.1. Representations of quantum affine algebras can be also applied to study questions in tropical geometry. For example, in Section 8.3, we construct matroid subdivisions from prime modules of quantum affine algebras. On physics side, we generalize the Grassmannian string integral to the setting that the integrand is the infinite product of prime elements in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\), see Section 10, and more generally the infinite product of the \(q\)-characters of all prime modules in the category \(\mathcal{C}_{\ell}\) of a quantum affine algebra, see Section 10.3. We also define the so called \(u\)-variables for every prime element in the dual canonical basis of \(\mathbb{C}[\operatorname{Gr}(k,n)]\) and we conjecture that the \(u\)-variables are unique solutions of \(u\)-equations, see Section 10.2. The \(u\)-equations are important in the study of scattering amplitudes in physics, see [4]. Our work raises many related questions. On mathematical side, it is important to give an explicit construction of dominant monomials corresponding to facets of Newton polytopes defined in Section 9 and prove that every prime module in the category \(\mathcal{C}_{\ell}\) corresponds to a facet of \(\mathbf{N}_{\mathfrak{g},\ell}^{(d)}\) for some \(d\), see Conjecture 9.5. It is also important to study compatibility of prime modules using Newton polytopes, see Conjecture 9.5. On physics side, it is important to compute explicitly \(u\)-equations and \(u\)-variables and verify that \(u\)-variables are unique solutions of \(u\)-equations, see Conjecture 10.6. In the simplest examples, the Newton polytopes for representations of quantum affine algebras defined in Section 9 are associahedra. It would be very interesting to study the relation between the Newton polytopes for representations of quantum affine algebras and the surfacehedra defined in [2]. Finally, let us discuss in some detail an exciting potential research direction which was beyond the scope of the present work to pursue. 
In [45], \(u\)-equations were introduced in order to define a certain generalized worldsheet associahedron, related to the moduli space of \(n\) points in \(\mathbb{P}^{k-1}\). A parameterization of the solution to the \(u\)-equations was conjectured when \(k=3,4\); the details are being worked out in [47]. What is striking is that these \(u\)-equations are manifestly finite; there is a Newton polytope which is (conjecturally) simple with a face lattice which is anti-isomorphic to the noncrossing complex \(\mathbf{NC}_{k,n}\). In particular there are finitely many \(u\)-variables and finitely many \(u\)-equations. It would be very interesting to determine if this can be explained in the context of the present paper. Does the generalized worldsheet associahedron relate to the solution to the \(u\)-equations which we propose in the present work? If so, which prime tableaux are involved? We leave such fascinating questions to future work.
2306.09824
Process Knowledge-infused Learning for Clinician-friendly Explanations
Language models have the potential to assess mental health using social media data. By analyzing online posts and conversations, these models can detect patterns indicating mental health conditions like depression, anxiety, or suicidal thoughts. They examine keywords, language markers, and sentiment to gain insights into an individual's mental well-being. This information is crucial for early detection, intervention, and support, improving mental health care and prevention strategies. However, using language models for mental health assessments from social media has two limitations: (1) They do not compare posts against clinicians' diagnostic processes, and (2) It's challenging to explain language model outputs using concepts that the clinician can understand, i.e., clinician-friendly explanations. In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on language model outputs, enabling clinician-friendly explanations of the underlying language model predictions. We rigorously test our methods on existing benchmark datasets, augmented with such clinical process knowledge, and release a new dataset for assessing suicidality. PK-iL performs competitively, achieving a 70% agreement with users, while other XAI methods only achieve 47% agreement (average inter-rater agreement of 0.72). Our evaluations demonstrate that PK-iL effectively explains model predictions to clinicians.
Kaushik Roy, Yuxin Zi, Manas Gaur, Jinendra Malekar, Qi Zhang, Vignesh Narayanan, Amit Sheth
2023-06-16T13:08:17Z
http://arxiv.org/abs/2306.09824v1
# Process Knowledge-infused Learning for Clinician-friendly Explanations ###### Abstract Language models have the potential to assess mental health using social media data. By analyzing online posts and conversations, these models can detect patterns indicating mental health conditions like depression, anxiety, or suicidal thoughts. They examine keywords, language markers, and sentiment to gain insights into an individual's mental well-being. This information is crucial for early detection, intervention, and support, improving mental health care and prevention strategies. However, using language models for mental health assessments from social media has two limitations: (1) They do not compare posts against clinicians' diagnostic processes, and (2) It's challenging to explain language model outputs using concepts that the clinician can understand, i.e., clinician-friendly explanations. In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on language model outputs, enabling clinician-friendly explanations of the underlying language model predictions. We rigorously test our methods on existing benchmark datasets, augmented with such clinical process knowledge, and release a new dataset for assessing sociality. PK-iL performs competitively, achieving a 70% agreement with users, while other XAI methods only achieve 47% agreement (average inter-rater agreement of 0.72). Our evaluations demonstrate that PK-iL effectively explains model predictions to clinicians. 1 Artificial Intelligence Institute, University of South Carolina Columbia, South Carolina, US 2 University of Maryland, Baltimore County, US {kaushik, yzi}@email.sc.edu, [email protected], [email protected], [email protected],{vignar,amit}@sc.edu ## Introduction A long-standing problem in adopting language models for clinician assistance has been the lack of clinician-friendly explanations for the model's predictions 1. In practice, a clinical guideline or process is often detailed by which the clinician can assess or label patients. For example, to label patients for degrees of suicidal tendencies in a physical clinical setting, a well-known scale, the Columbia Suicide Severity Rating Scale (CSSRS) [1], is used to determine the right set of labels. The green part of Figure 1 (b) shows the CSSRS scale, a _process_, which consists of six conditions whose values determine four assessment outcomes from the set {_indication_, _ideation_, _behavior_, _attempt_}. Similarly, when patients are assessed for depression, clinicians evaluate patient responses against a process or guideline like the Patient Health Questionnaire-9 (PHQ-9) and provide explanations for their assessment using the same. The blue part of Figure 1 (b) shows the PHQ-9 assessment process. Language models do not explicitly leverage such process knowledge to derive their predictions. Furthermore, language model predictions are typically explained using XAI methods, such as LIME and SHAP, which fits a simpler interpretable surrogate model [1, 1, 2]. XAI models, however, provide explanations that benefit computer scientists in debugging and improving language models but are of limited utility to the clinician for making decisions. Additionally, it is challenging to approximate very large and complex models, e.g., language models (LMs) using simpler surrogate models [13]. 
Footnote 1: [https://globelynews.com/world/chatapt-ai-ethics-healthcare/](https://globelynews.com/world/chatapt-ai-ethics-healthcare/) We propose a novel learning framework _Process Knowledge infused Learning_ (PKiL) that leverages explicit representations of publicly available knowledge of processes and guidelines to augment language models to enable clinician-friendly explanations. Crucially, PKiL incorporates process knowledge structures to provide explanations for model predictions using concepts that are familiar to a clinician. Figure 1 shows the execution flow of a model trained using PKiL. The PKiL learning framework achieves this through a novel training method with the following salient features - (1) PKiL leverages powerful language models with hundreds of millions of parameters while requiring training of very few additional parameters (equal to the number of process knowledge conditions, e.g., conditions in Figure 1 (b)) to obtain clinician-friendly explanations, (2) The optimization objective is simple to understand, enabling globally optimal solution discovery through various optimization procedures. ## Problem Formulation, Resource Construction, and Process Knowledge infused Learning ### Problem Formulation Let \(X_{\mathcal{D}}\) denote a dataset of input texts and their labels in a domain \(\mathcal{D}\). An example of an input post is shown in Figure 1 (a), and its suicidality assessment label is from the set {_indication_, _ideation_, _behavior_, _attempt_} in the domain of mental health. Let \(Pb_{\mathcal{D}}\) denote the relevant process knowledge available to us from established literature in domain \(\mathcal{D}\). For example, Figure 1(b) shows the process of obtaining suicidality assessment labels. Let \(\Lambda_{\mathcal{D}}\) be a language model available to us that is fine-tuned on domain specific data (e.g., BERT fine-tuned on mental health posts from social media). Process Knowledge infused Learning (PKiL) is a training method that makes combined use of \(X_{\mathcal{D}}\) and \(Pb_{\mathcal{D}}\) to evaluate the conditions in the process knowledge to predict the final label. The evaluated conditions in the process knowledge are familiar to clinicians and therefore enable clinician-friendly explanations for predictions, as shown in Figure 1. ### Resource Construction - Construction of Process Knowledge Augmented Datasets Due to the recent push for explainable and trustworthy AI, recent studies have published new datasets with knowledge of established processes and guidelines used in a particular domain. For example, Gupta et al. constructed the PRiMATE dataset, which includes a series of depression-related posts labeled by human annotators by checking against the PHQ-9 depression assessment process knowledge [14]. Roy et al. construct the ProKnow dataset that consists of similar process knowledge for question generation (e.g., generate questions about symptoms before causes) while eliciting mental health-related conversation for psychometric test evaluations [15]. We call such datasets process knowledge augmented datasets [20]. Gaur et al. used the CSSRS, the suicidality assessment process knowledge, to annotate suicidality labels for a set of Reddit posts extracted from suicidality-related subreddits [14]. We will call this dataset CSSRS 1.0, an example of \(X_{\mathcal{D}}\) in our problem formulation. 
Even though their work labeled the posts using the process knowledge contained in the CSSRS as annotation guidelines, the exact process knowledge \(Pb_{\mathcal{D}}\) used per data point was not stored. Therefore, we first obtain the \(Pb_{\mathcal{D}}\) using the following procedure: 1. First, we fine-tune the models Word2Vec, SBERT, RoBERTa, T5, ERNIE, and Longformer on the CSSRS 1.0 dataset, i.e., the \(\Lambda_{\mathcal{D}}\) in our formulation [16, 15, 14, 17, 18, 19, 20, 21, 22]. 2. Second, we evaluate each post in the CSSRS 1.0 dataset Figure 1: Overview of PKiL inference for an input text. The model uses two arguments, the input text (a) and the process knowledge (b). The process knowledge shows conditions that must be satisfied for a given label. The green part shows process knowledge conditions for suicidality assessment, and the blue part shows the same for depression assessment. For example, for the label _attempt_ in the suicidality assessment process knowledge, all conditions 1-6 need to be satisfied. For the label _indication_, only condition 1 needs to be satisfied. The model then annotates text fragments with clinician-friendly concepts from the process knowledge, as shown in (d). The final assessment predictions are obtained through the relevant process knowledge conditions that apply, as shown in (e). against the CSSRS \(Pk_{\mathcal{D}}\) conditions using cosine similarity between the fine-tuned representations of the posts and the conditions. Condition evaluation returns 1.0 if the condition is satisfied, else 0.0. We set the similarity threshold to 0.5. We do this for all the models and use the max similarity that is greater than the threshold of 0.5. 3. Next, we obtain a label for each post in \(X_{\mathcal{D}}\) from the set {_indication_, _ideation_, _behavior_, _attempt_} by comparing the evaluated condition values against the \(Pk_{\mathcal{D}}\). For example, if only condition 1, which is _wish to be dead_ evaluates to 1.0, the label is _indication_ (see Figure 1 (b)). 4. Lastly, we provide our labels to three domain experts and task them with either retaining the labels or editing the labels by referring to the CSSRS \(Pk_{\mathcal{D}}\) while recording the inter-rater agreement. The domain experts in the study checked through the labels of 448 Reddit posts in \(X_{\mathcal{D}}\). They edited 235/448 posts and provided the relevant process knowledge conditions 1-6, evaluated during the edit. A substantial inter-rater agreement of 0.84 was recorded. Crucially, we augment the CSSRS 1.0 to include the specific process knowledge used for the edited label. We call this new dataset CSSRS 2.0. Examples from the dataset can be found at the link in the footnote2. We will use \(X_{\mathcal{D}}^{Pk}\) to denote process knowledge augmented datasets. Note that \(|X_{\mathcal{D}}^{Pk}|\leq|X_{\mathcal{D}}|\). For example, CSSRS 2.0 has 235 data points, whereas CSSRS 1.0 has 448 data points. Our experiments use CSSRS 2.0 and PRIMATE. Footnote 2: [https://anonymous.4open.science/_tf_](https://anonymous.4open.science/_tf_) MenatlHealthAnonymous-SCC3/csrsv\(\backslash\)%202.0.csv ### Process knowledge infused Learning Consider a single condition process knowledge \(Pk_{\mathcal{D}}\) to predict a binary label \(L\) for an input \(x\in X_{\mathcal{D}}^{Pk}\): \[if\;(C(x)=1),L(x) =1\] \[else,L(x) =0\] Here \(C(x)\) is a condition evaluation function for the input \(x\) that evaluates to \(1.0\) if the condition is satisfied and \(0\) if the condition is not satisfied. 
\(Pk_{\mathcal{D}}\) can be written algebraically as: \[L(x) =\mathbf{I}(L(x)=1)(C(x)=1)\] \[+\mathbf{I}(L(x)=0)\] Here \(\mathbf{I}(L(x)=l)\) is the indicator function that evaluates to \(1\) or \(0\), indicating whether the value that the label \(L(x)\) takes is equal to \(l\). How do we mathematically formulate \(C(x)=1\)? We can parameterize \(C(x)=1\) as \(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C}^{\Lambda_{\mathcal{D}}})\geq\theta_{C}\), where \(S\) is a similarity function (e.g., cosine similarity) and \(\theta_{C}\) is the similarity threshold. The \(e_{x}^{\Lambda_{\mathcal{D}}}\) and \(e_{C}^{\Lambda_{\mathcal{D}}}\) are embeddings of the input and condition obtained using a domain-specific fine-tuned language model \(\Lambda_{\mathcal{D}}\). Thus, we can write a parameterized approximation to (1) as: \[\hat{L}(x,\theta_{C}) =\mathbf{I}(L(x)=1)S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C}^{\Lambda _{\mathcal{D}}})\geq\theta_{C}) \tag{2}\] \[+\mathbf{I}(L(x)=0)\] Now we consider a slightly more complex process knowledge \(Pk_{\mathcal{D}}\), a multilabel and multi-conditioned process knowledge to predict label \(L\in\{1,2,3\}\), given conditions \(C1,C2,C3\), for an input \(x\in X_{\mathcal{D}}^{Pk}\): \[if\;(C1(x)=1\wedge C2(x)=1),L(x) =1\] \[if\;(C1(x)=1\wedge C3(x)=1),L(x) =2\] \[else,L(x) =3\] Similar to (1), we can write this \(Pk_{\mathcal{D}}\) algebraically as: \[L(x) =\mathbf{I}(L(x)=1)(C1(x)=1)(C2(x)=1)\] \[+\mathbf{I}(L(x)=2)(C1(x)=1)(C3(x)=1) \tag{3}\] \[+\mathbf{I}(L(x)=3)\] Following a similar procedure as the one used to derive (2), we obtain: \[\hat{L}(x,\theta_{C1},\theta_{C2})=\] \[\mathbf{I}(L(x)=1)(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C1}^{ \Lambda_{\mathcal{D}}})\geq\theta_{C1}))(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C 2}^{\Lambda_{\mathcal{D}}})\geq\theta_{C2}))\] \[+\mathbf{I}(L(x)=2)(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C1}^{ \Lambda_{\mathcal{D}}})\geq\theta_{C1})(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C 3}^{\Lambda_{\mathcal{D}}})\geq\theta_{C3}))\] \[+\mathbf{I}(L(x)=3) \tag{4}\] Generally, given multi-condition process knowledge \(Pk_{\mathcal{D}}\) for multilabel prediction of the form \[if\;\wedge_{j}(C_{j}(x)=1),L(x)=l\] we get its algebraic form as \[L(x)=\mathbf{I}(L(x)=l)\prod_{j}(C_{j}(x)=1) \tag{5}\] Denoting all the parameters as the set \(\{\theta_{C_{j}}\}\) we get the parameterization \[\hat{L}(x,\{\theta_{C_{j}}\})=\mathbf{I}(L(x)=l)\prod_{j}(S(e_{x}^{\Lambda_{ \mathcal{D}}},e_{C_{j}}^{\Lambda_{\mathcal{D}}})\geq\theta_{C_{j}}) \tag{6}\] For all \(x\in X_{\mathcal{D}}^{Pk}\), we get a system of equations like (6). #### Sentiment Analysis The conditions in the process knowledge help the model assess problem issues. However, a complete mental health assessment usually also involves the identification of signs of positivity. Therefore for each \(\theta_{C_{j}}\), we also optimize for a \(\gamma_{C_{j}}\) term, where the model predicts positive sentiment in the input if \(S(e_{x}^{\Lambda_{\mathcal{D}}},e_{C_{j}}^{\Lambda_{\mathcal{D}}})\leq\theta_{C_ {j}}+\gamma_{C_{j}}\). #### Optimization Problem Formulation For a process knowledge augmented dataset \(X_{\mathcal{D}}^{Pk}\), we know the ground truths \(L(x)\) for all \(x\in X_{\mathcal{D}}^{Pk}\). 
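Before turning to the optimization of the thresholds, the parameterized predictor in Eq. (6) is straightforward to operationalize. The sketch below is a minimal illustration rather than the released implementation: it assumes an off-the-shelf SBERT checkpoint (via the sentence-transformers package) stands in for the fine-tuned \(\Lambda_{\mathcal{D}}\), and the condition texts, rule set, and threshold values \(\theta_{C_{j}}\) are placeholder assumptions.

```python
# Minimal sketch of Eq. (6): predict a label by checking which process-knowledge
# rule has all of its conditions satisfied under thresholded cosine similarity.
# Assumptions (not from the paper's code): SBERT as the encoder, toy condition
# texts, a truncated rule set, and hand-set thresholds theta.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for a fine-tuned Lambda_D

conditions = {
    "C1": "wish to be dead",                         # stated in the paper (CSSRS condition 1)
    "C2": "non-specific active suicidal thoughts",   # placeholder wording
    "C6": "suicidal behavior or attempt",            # placeholder wording
}
rules = [  # checked in order, most specific first; toy subset of the CSSRS process knowledge
    ("attempt", ["C1", "C2", "C6"]),
    ("ideation", ["C1", "C2"]),
    ("indication", ["C1"]),
]
theta = {"C1": 0.5, "C2": 0.5, "C6": 0.5}  # learned thresholds theta_{C_j} (placeholders)

def condition_satisfied(post_emb, cond_name, cond_embs):
    sim = util.cos_sim(post_emb, cond_embs[cond_name]).item()  # S(e_x, e_C)
    return sim >= theta[cond_name]

def predict_label(post):
    post_emb = encoder.encode(post, convert_to_tensor=True)
    cond_embs = {c: encoder.encode(t, convert_to_tensor=True) for c, t in conditions.items()}
    for label, required in rules:
        if all(condition_satisfied(post_emb, c, cond_embs) for c in required):
            return label
    return "no_indication"  # fallback when no rule fires

print(predict_label("I keep thinking it would be better if I were not here."))
```

The same loop extends to the multilabel, multi-condition case of Eq. (6); only the rule table and the set of thresholds change.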
We want to solve for the unknown parameters \(\theta_{C_{j}}\) that yields minimum error between the parameterized approximation \(L(x,\{\theta_{C_{j}}\})\) and the ground truth \(L(x)\) i.e., \[\sum_{x\in X_{\mathcal{D}}^{Pk}}\mathcal{E}(\hat{L}(x,\{\theta_{C_{j}}\}),L(x))\] Here \(\mathcal{E}\) denotes the error function. The choice of similarity functions \(S\) is a hyperparameter (We explore cosine similarity and normalized Gaussian kernels in our experiments). #### Projected Newton's method When one of the \(\{\theta_{C_{j}}\}\) are fixed, setting \(\mathcal{E}(\hat{L}(x,\{\theta_{C_{j}}\}),L(x))\) to be the cross entropy loss reduces to a strongly convex objective that can be solved by **Newton's method** (with \(\varepsilon\) corrections for low determinant Hessians). After each optimization step, we project the \(\theta_{C_{j}}\) to the \([-1,1]\) range. _Grid Search:_ Since the number of parameters to optimize is small (six for CSSRS 2.0 and nine for PRIMATE), we can perform a grid search over a predefined set of grid values to find the values that yield minimum cross-entropy loss. For our choice of \(S\), we choose **cosine similarity and normalized Gaussian kernel**; therefore, grid search candidate values are in the \([-1,1]\) range. _Optimizing for the \(\gamma_{C_{j}}\):_ To find the optimal \(\gamma_{C_{j}}\), we first predict positive and negative sentiment labels using the **Stanford CoreNLP** model for all the inputs. Next, we perform a grid search in the \([-1,1]\) range and set values for the \(\gamma_{C_{j}}\) that results in the maximum agreement between \(S(e_{x}^{\Delta_{\mathcal{D}}},e_{C_{j}}^{\Delta_{\mathcal{D}}})\leq\theta_{C _{j}}+\gamma_{C_{j}}\) and the Stanford CoreNLP model labels (only the positive labels). In our experiments, we try both Newton's method and grid search optimization strategies. ## Experiments and Results We demonstrate the effectiveness of PkiL training using PRIMATE and CSSRS 2.0 combined with several state-of-the-art language models. We also perform experiments with prompting Text-Davinci-003 using the langchain library3. Footnote 3: [https://langchain.readthedocs.io/en/latest/](https://langchain.readthedocs.io/en/latest/) ### Process Knowledge Augmented Datasets For CSSRS 2.0, the process knowledge is shown in Figure 1 (b) (the green part). We input this process knowledge in the form4: Footnote 4: Examples can be found at the link: [https://anonymous.4open](https://anonymous.4open). 
science/r/MenatalHealthAnonymous-8CC3/csrs_annotate.txt
\[\begin{array}{l}if\;((C1(x),C2(x),C3(x),C4(x),C5(x),C6(x))=1),\ L(x)=\textit{attempt}\\ \qquad\vdots\\ if\;(C1(x)=1),\ L(x)=\textit{indication}\end{array}\]

### Text-Davinci-003 Experiment Details

We use the langchain library and write a prompt template to obtain answers to the process knowledge questions from Text-Davinci-003. For example, Figure 2 shows the prompt template for the first condition \(C1:\ Wish\ to\ be\ dead\) from the CSSRS process knowledge. For sentiment analysis, we set the _question_ variable in Figure 2 to _positive sentiment_. We will call this model Text-Davinci-003\({}_{\text{PK}}\). Once we evaluate all the conditions, we follow the process knowledge pertaining to the evaluated condition values to determine the label.

### Quantitative Results and Discussion

Figure 3 shows the results of PKiL for various experiment configurations for the CSSRS 2.0 and PRIMATE datasets. The figure also shows results from the Text-Davinci-003\({}_{\text{PK}}\) model. _Quantitative Results for CSSRS 2.0:_ First, excluding the Text-Davinci-003\({}_{\text{PK}}\) from the analyses, we observe that SBERT trained using PKiL with a normalized Gaussian kernel performs the best in terms of accuracy, and the Word2Vec model performs the best on AUC-ROC scores for the CSSRS 2.0 dataset. In general, we see that PKiL leads to large boosts in performance of up to 14% over the baseline. Analysis of the Text-Davinci-003\({}_{\text{PK}}\) model performance reveals that it is the best performer among all the models for the CSSRS 2.0 dataset. Our experiments show that large language models can significantly increase suicidality assessment performance when leveraging process knowledge structures and process knowledge-augmented datasets.
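Before turning to the PRIMATE results, the prompting setup described in the experiment details above can be sketched as follows; this is a sketch against the 2023-era langchain API, and the template wording and variable names are illustrative assumptions rather than the exact prompt of Figure 2.

```python
# Sketch of prompting Text-Davinci-003 for one process-knowledge condition via
# langchain (2023-era API). The template text below is an illustrative
# assumption, not the exact prompt used in the paper.
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", temperature=0)

template = (
    "Post: {post}\n"
    "Question: Does the post indicate '{question}'? Answer yes or no, "
    "and quote the supporting text fragment if yes.\n"
    "Answer:"
)
prompt = PromptTemplate(input_variables=["post", "question"], template=template)
chain = LLMChain(llm=llm, prompt=prompt)

# C1 from the CSSRS process knowledge; for sentiment analysis the same template
# is reused with question="positive sentiment".
answer = chain.run(post="I keep wishing I could just disappear.",
                   question="wish to be dead")
print(answer)
```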
_Quantitative results for PRIMATE:_ Again, first excluding the Text-Davinci-003\({}_{\text{PK}}\) from the analyses, we observe that RoBERTa trained using PKiL with a cosine similarity function performs the best in terms of accuracy, and SBERT and ERNIE perform the best on AUC-ROC scores for the PRIMATE dataset. In general, we see that PKiL leads to large boosts in performance of up to 23% over the baseline. Analysis of The Text-Davinci-003\({}_{\text{PK}}\) model performance reveals that it is the best performer in terms of accuracy among all the models for the PRIMATE dataset. Our experiments show that large language models can also significantly increase depression assessment performance when leveraging process knowledge structures and process knowledge augmented datasets. ### Qualitative Results and Discussion We evaluate PkiL model outputs qualitatively for the following aspects: * **Mental health disturbance assessment**: The final label predicted by the model, i.e., the label _depression_ for depression assessment), and a label from the set {_indication_, _ideation_, _behavior_, _attempt_} for suicidality assessment. * **PHQ-9 depression concepts identified**: A list of concepts resulting from evaluating conditions C1-C9 using the learned thresholds \(\theta_{C_{j}}\). For the Text-Davinci-003\({}_{\text{PK}}\) model, we prompt the model using code as shown in Figure 2. * **CSSRS suicidality concepts identified**: A list of concepts resulting from evaluating conditions C1-C6 using the learned thresholds \(\theta_{C_{j}}\). Similar to the depression case, for the Text-Davinci-003\({}_{\text{PK}}\) model, we prompt the model using code as shown in Figure 2. * **Positive sentiment assessment**: Using the learned \(\theta_{j}\) and \(\gamma_{j}\) to identify input post fragments that convey positive sentiment. _Baseline Model Explanations:_ We use the bert-viz visualization technique6 to interpret the contributions of the different input post fragments to the prediction outcome (the CLS token). Figure 3(e) shows the output for SBERT. The highlights convey meaningful information from the perspective of depression, which is the correct label. However, it is unclear how the highlights map to clinician-friendly concepts from process knowledge guidelines for depression assessment. A manual post-processing layer for mapping to clinician-friendly concepts is needed in order to verify the prediction. Footnote 6: [https://github.com/jessevig/bertviz](https://github.com/jessevig/bertviz) _PKiL Model Explanations:_ We divide the input post into contiguous fragments of max size \(3\) sentences for models and infer the process knowledge condition values using the PKiL trained models and the parameters \(\theta_{C_{j}}\) and \(\theta_{\gamma_{j}}\). We divide for enhanced clinician-friendly explainability as simply annotating the whole posts with concepts still requires additional post-processing by the human to glean out fragments that correspond to problem issues and positive sentiments. Figure 3(f) shows the output of the SBERT model trained using PKiL with the normalized Gaussian kernel. Figure 3(g) shows the output of prompting the Text-Davinci-003\({}_{\text{PK}}\) as shown in Figure 2. We can readily observe that the explanations are more useful to the clinician as they directly explain the outcome in terms of concepts used in everyday practice. 
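The fragment-level annotation just described reduces to a short loop over chunks of at most three sentences. The sketch below is illustrative only: the sentence splitter, encoder, condition texts, thresholds \(\theta_{C_{j}}\), and slacks \(\gamma_{C_{j}}\) are all placeholder assumptions rather than the released implementation.

```python
# Sketch of PKiL-style clinician-friendly explanations: split a post into
# fragments of at most three sentences and tag each fragment with the
# process-knowledge conditions it triggers, or with positive sentiment.
import re
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")                    # stand-in for Lambda_D
conditions = {"C1": "wish to be dead",                               # CSSRS condition 1
              "C2": "non-specific active suicidal thoughts"}         # placeholder wording
theta = {"C1": 0.5, "C2": 0.5}                                       # placeholder theta_{C_j}
gamma = {"C1": -0.2, "C2": -0.2}                                     # placeholder gamma_{C_j}; the paper fits it on [-1, 1]

def fragments(post, max_sents=3):
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", post) if s.strip()]
    return [" ".join(sents[i:i + max_sents]) for i in range(0, len(sents), max_sents)]

def explain(post):
    cond_embs = {c: encoder.encode(t, convert_to_tensor=True) for c, t in conditions.items()}
    report = []
    for frag in fragments(post):
        frag_emb = encoder.encode(frag, convert_to_tensor=True)
        tags = []
        for c in conditions:
            sim = util.cos_sim(frag_emb, cond_embs[c]).item()
            if sim >= theta[c]:
                tags.append(c)                       # problem concept identified
            elif sim <= theta[c] + gamma[c]:
                tags.append("positive_sentiment")    # S <= theta + gamma, as in the paper's rule
        report.append((frag, tags))
    return report

for frag, tags in explain("I feel hopeless. I wish I could sleep and not wake up. My sister keeps me going."):
    print(tags, "|", frag)
```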
Finally, we provided PKiL explanations to the experts who helped construct the CSSRS 2.0 dataset and asked them to provide the percentage of times they found the explanations beneficial. We also provided baseline explanations for comparison. In order to control for bias, we tell them that humans generate PKiL explanations, and language models generate the baseline explanations. PKiL explanations scored 70% vs 47% for the baseline models. We recorded an inter-annotator agreement of 0.72. We analyzed the 30% that the experts did not find beneficial and observed that models have difficulty distinguishing casual mentions from serious ones. For example, a Reddit user reported wanting to kill themselves out of class boredom before identifying a legitimate clinical issue much further into their post. We leave the investigation Figure 2: Using the langchain library to prompt Text-Davinci-003 for answers to questions from the process knowledge. of these posts for future work (e.g., by expanding our framework to detect sarcasm). ## Conclusion In this study, we develop a novel paradigm PKiL that leverages the combined benefits of explicit process knowledge and high-performance language models to provide predictions and explanations that the end user can readily understand. Our experiments demonstrate the effectiveness of PKiL both quantitatively and qualitatively. Such an improved understanding of language model predictions can inform insights for refining existing process knowledge guidelines (e.g., adaptation to Reddit vocabulary) to facilitate remote monitoring and improved access to healthcare via social media platforms. _Reproducibility:_ We provide the trained model for SBERT with normalized Gaussian kernel similarity, the CSSRS 2.0 dataset, and the CSSRS process knowledge used in our experiments at the link in the footnote7. Additionally, we also provide a Python notebook for users to play with the Text-Davinci-003PK model at the link in this footnote8. _Ethics Statement:_ We adhere to anonymity, data privacy, intended use, and practical implication of the AI-based mental health assessment systems. The clinical process knowledge does not contain personally identifiable information. The datasets covered in the survey are publicly available and can be obtained from user-author agreement forms. Figures and examples are abstract and do not represent real-time data sources or any person. ## Acknowledgements This work was supported in part by the National Science Foundation (NSF) Awards 2133842 "EAGER: Advancing Figure 3: (a) and (b) present results for the CSSRS 2.0 dataset, while (c) and (d) show the results for the PRIMATE dataset. The mean accuracy/AUC-ROC of different language models (LMs) - Baseline fine-tuned model (**B**), PKiL performance with Cosine Similarity Kernel (**CS-K**), and PKiL performance with normalized Gaussian Kernel similarity (**Gauss-K**) - are displayed. The prompt-based model Text-Davinci-003PK model (\(D_{PK}\)) doesn’t utilize CS-K or Gauss-K, so no associated bar is shown. W2V: Word2Vec, SB: SBERT, RT: RoBERTa, EE: ERNIE, LF: LongFormer. (d) The self-attention-based interpretability visualization for the SBERT baseline model indicates correct predictions and sensible highlights. However, the mapping of these highlights to clinician-friendly concepts used in practice is unclear. Baseline language models consistently struggle to capture negation. 
(e) The SB model trained with PKiL using the normalized Gaussian kernel provides clinicians with annotated explanations that are more familiar. Additionally, the PKiL parameters enable the analysis of fragments conveying positive sentiment. (f) Explanations from the Text-Davinci-003PK model also demonstrate that leveraging process knowledge helps clinicians better understand the annotated explanations, as they are associated with familiar problem concepts. Neuro-symbolic AI with Deep Knowledge- infused Learning," and was carried out under the advisement of Prof. Amit Sheth [2022c,b,a; Sheth et al.2021, 2022; Sheth, Roy, and Gaur2023]. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2310.05720
HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation
Talking face generation has a wide range of potential applications in the field of virtual digital humans. However, rendering high-fidelity facial video while ensuring lip synchronization is still a challenge for existing audio-driven talking face generation approaches. To address this issue, we propose HyperLips, a two-stage framework consisting of a hypernetwork for controlling lips and a high-resolution decoder for rendering high-fidelity faces. In the first stage, we construct a base face generation network that uses the hypernetwork to control the encoding latent code of the visual face information over audio. First, FaceEncoder is used to obtain latent code by extracting features from the visual face information taken from the video source containing the face frame.Then, HyperConv, which weighting parameters are updated by HyperNet with the audio features as input, will modify the latent code to synchronize the lip movement with the audio. Finally, FaceDecoder will decode the modified and synchronized latent code into visual face content. In the second stage, we obtain higher quality face videos through a high-resolution decoder. To further improve the quality of face generation, we trained a high-resolution decoder, HRDecoder, using face images and detected sketches generated from the first stage as input.Extensive quantitative and qualitative experiments show that our method outperforms state-of-the-art work with more realistic, high-fidelity, and lip synchronization. Project page: https://semchan.github.io/HyperLips Project/
Yaosen Chen, Yu Yao, Zhiqiang Li, Wei Wang, Yanru Zhang, Han Yang, Xuming Wen
2023-10-09T13:45:21Z
http://arxiv.org/abs/2310.05720v3
# HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation ###### Abstract Talking face generation has a wide range of potential applications in the field of virtual digital humans. However, rendering high-fidelity facial video while ensuring lip synchronization is still a challenge for existing audio-driven talking face generation approaches. To address this issue, we propose HyperLips, a two-stage framework consisting of a hypernetwork for controlling lips and a high-resolution decoder for rendering high-fidelity faces. In the first stage, we construct a base face generation network that uses the hypernetwork to control the encoding latent code of the visual face information over audio. First, FaceEncoder is used to obtain latent code by extracting features from the visual face information taken from the video source containing the face frame.Then, HyperConv, which weighting parameters are updated by HyperNet with the audio features as input, will modify the latent code to synchronize the lip movement with the audio. Finally, FaceDecoder will decode the modified and synchronized latent code into visual face content. In the second stage, we obtain higher quality face videos through a high-resolution decoder. To further improve the quality of face generation, we trained a high-resolution decoder, HRDecoder, using face images and detected sketches generated from the first stage as input. Extensive quantitative and qualitative experiments show that our method outperforms state-of-the-art work with more realistic, high-fidelity, and lip synchronization. Project page: [https://scmchan.github.io/HyperLips_Project/](https://scmchan.github.io/HyperLips_Project/) Talking face generation, hypernetwork, lip synchronization, high-fidelity faces. ## I Introduction With the growth of audio-visual content [1, 2, 3, 4, 5, 6] and the rise of the metaverse, talking face generation has broad application prospects in visual dubbing [7, 8, 9], digital assistant [10], virtual human [11], animation film and other fields, and has attracted more and more attention. Based on the input requirements of the application, talking face generation methods can be categorized as driving audio only [12, 13, 14, 15, 16], driving audio with a single frame [17, 18, 19], and driving audio with source video (or multiple frames) [20, 21, 22, 8, 23] types. For driving audio only, it is person-specific primarily and requires re-training for videos captured by the target speaker. For example, using a neural radiance field to train the implicit 3D representation of a captured video of a specific speaking person can observe the person's speech in a novel view [12, 13], but the rendering results always look unnatural during movement. Due to the lack of facial and motion information as input for driving audio with a single frame, although some studies have done enough work, it is still impossible to generate accurate expressions and natural motion sequences [18]. For driving audio with source video, as shown in Fig. 1, the expressions and movements of the characters in the generated video are mostly taken from the source video, which naturally has realistic expressions and natural movements. In this case, there are two main challenges: 1. How to produce more accurate **lip synchronization** in generated videos; 2. How to render more more **high-fidelity** faces, especially high-definition lips and teeth, in generated videos. 
To produce more accurate lip synchronization in generated videos, Wav2lip [8] proposed a lip sync discriminator to improve the performance of lip synchronization in unconstrained videos; SyncTalkFace [20] proposed an audio lip memory that uses visual information of the mouth region corresponding to the input audio and enforces fine-grained audio-visual coherence; IP_LAP [21] leveraged a transformer-based landmark generator to infer lip and jaw landmarks from the audio to synchronize lip shape. These methods typically fuse the visual and audio features before decoding. However, the dimensions of audio and visual features are different, so additional processing is required to make them the same size for feature fusion. In the first stage of our method, we encode visual face information as a latent code, Fig. 1: Given the visual face information of source videos (upper left) and driving audio (upper right), our method is capable of rendering and generating more realistic, high-fidelity, and lip-synchronized videos (lower). See the zoom-in patches, our method can see details such as teeth. then modify the latent code by a HyperConv convolution operation, and finally decode the modified latent code into visual face content. The weight parameters of HyperConv are generated by constructing a hypernetwork using audio features as input, thus achieving audio control of lip movement in the rendered visual content. We use hypernetwork to avoid additional operations during the fusion of visual and audio features and to ensure lip synchronization in the generated videos better. Our idea is similar to the Audio Conditioned Diffusion Model [24], which takes audio information as the condition variable. Still, this method takes the diffusion model as the network architecture, which increases the demand for computational resources. To render more high-fidelity faces, DINet [22] proposed a Deformation Inpainting Network to achieve face visually dubbing on high-resolution videos, but it may generate artifacts out of face if mouth region covers background. IP_LAP [21] leverage the prior appearance information which is extracted from the lower-half occluded target face and static reference images, it may fail when landmark cannot be detected in the reference images. Another possible method is to increase the input resolution based on networks such as Wav2lip [8] or SyncTalkFace [20], but this not only increases the need for training resources but also does not render well, resulting in persistent artifacts. In the second stage of our method, we propose a high-resolution decoder (HRDecoder) to further optimize the fidelity of generating faces. We trained the network using the facial data generated in the first stage and the corresponding facial sketches, guided by the sketches, to achieve facial enhancement. In summary, the contributions of our work are as follows: * We propose a hypernetwork based on audio information to control the generation of facial visual content that improves lip synchronization. * We propose a high-resolution decoder with facial sketch guidance that can render more high-fidelity faces, especially high-definition lips and teeth, in generated videos. * Extensive experiments show that our method can achieve significantly better talking face generation performance in terms of lip synchronization and face quality. 
## II Related Work ### _Audio-Driven Talking Face Generation_ In methods that only use audio input for audio-driven talking face generation [12, 13, 14, 15, 16], collecting audio and video for person-specific and re-training is usually necessary. By introducing a neural radiance field (NeRF) [25] to represent the scenes of talking heads [13], it can be controlled to render the face in a novel view. RAD-NeRF [12] decompose the inherently high-dimensional talking portrait representation into three low-dimensional feature grids, that makes can rending the talking portrait in real-time. GeneFace [14] propose a variational motion generator to generate accurate and expressive facial landmark and uses a NeRF-based renderer to render high-fidelity frames. Due to a lack of prior information, these tasks still struggle to render realistic expressions and natural movements. To drive a single facial image, ATVGnet [17] devises a cascade GAN approach to generate a talking face video, which is robust to different face shapes, view angles, facial characteristics, and noisy audio conditions. Recently, SadTalker [18] propose a novel system for a stylized audio-driven single-image talking face animation using the generated realistic 3D motion coefficients, improving motion synchronization and video quality, but it is still impossible to generate accurate expressions and natural motion sequences. The method of driving audio with source video is the most competitive because it can provide enough realistic facial expressions and natural movement information. Wav2lip [8], SyncTalkFace [20], IP_LAP [21], DINet [22] all belong to this category, mainly focusing on how to generate better lip synchronization and higher fidelity faces. ### _HyperNetwork_ Hypernetwork [26] was originally proposed to generate the weights for a larger network. In evolutionary computation, operating directly on large search spaces consisting of millions of weight parameters is difficult. A more efficient method is to evolve a smaller network to generate the weight structure for a larger network, so that the search is constrained to the much smaller weight space. The idea of weight generation is easy to use for controllable generation tasks. Chiang et al. [27] leverage it to control the style of the 3d scene representation. UPST-NeRF [28] uses hypernetwork to control the universal photorealistic style transfer for 3D scene. With the rise of Large Language Models (LLMs) [29, 30, 31, 32] and generative models [33], hypernetwork has also become one of the necessary skills for fine-tuning LLMs. Essentially, the idea of generating weight parameters in a hypernetwork to control the large network method is similar to the Audio Conditioned Diffusion Model [24] and the Conditioning Mechanisms in Latent Diffusion Models [33], both of which achieve controllable output of the decoding through a control variable. In our method, however, we perform controllable generation relatively simply rather than using the diffusion model for generation. Wang et al [34] use hypernetwork for the application of magnetic resonance imaging reconstruction. ### _Prior Based Face Restoration_ Face restoration is to restore the high-quality face image from the degraded face image [35]. Face restoration is divided Non-prior and Prior based methods. FSRNet [36] uses a coarse Super-Resolution (SR) network to recover coarse images, which are then processed by a fine SR encoder and a prior face-passing map estimation network, respectively. 
Finally, the image features and prior information are fed to the fine SR decoder to obtain the results. In [37], the semantic label is used as the face prior. The semantic label is extracted from the input image or the coarse deblurred image by a face parsing network. The final sharp image is generated by a deblurring network that takes the concatenation of the blurred image and the face semantic label as input. Yin et al. [38] propose a joint alignment and face super-resolution network to learn landmark localization and face restoration jointly. In our work, we use the landmark sketches detected from the relatively low-quality face generated in the first stage as input to guide the HRDecoder to achieve face enhancement and render high-fidelity faces.

## III Proposed Method

The overview of our framework is shown in Fig. 2. Given an audio and video sequence, we aim to generate a high-fidelity talking face video with synchronized lip movements by inpainting the occluded lower half of the face in the input video frame by frame. Our proposed method consists of two stages: Base Face Generation and High-Fidelity Rendering. In Base Face Generation, we design a hypernetwork that takes audio features as input to control the encoding and decoding of visual information to obtain base face images. In High-Fidelity Rendering, we train an HRDecoder network using face data produced by the network trained in the first stage and the corresponding face sketches to enhance the base face.

Fig. 2: Overview of our proposed model. It can be divided into two stages: **(1) Base Face Generation**. FaceEncoder encodes the visual face information (Reference and Masked) as a latent code, which is then modified by a HyperConv convolution operation; finally, FaceDecoder decodes the modified latent code into visual face content. The weight parameters of HyperConv are updated by a hypernetwork using audio features as input, thus achieving audio control of lip movement in the rendered visual face content (Base Face). **(2) High-Fidelity Rendering**. The high-resolution decoder (HRDecoder) is used to further optimize the fidelity of generated faces. We train the network using the facial data generated in the first stage and the corresponding facial sketches, guided by the sketches, to achieve facial enhancement. Therefore, the input to HRDecoder is the concatenation of the base face with the sketch extracted from the base face, and the output is the high-fidelity face.

### _Base Face Generation_

#### III-A1 Hyper Control Lips

Given the reference image \(I^{R}\in\mathbb{R}^{3\times H^{I}\times W^{I}}\) and the masked image (the reference image with the lower half of the face occluded) \(I^{M}\in\mathbb{R}^{3\times H^{I}\times W^{I}}\), FaceEncoder takes the concatenation of \(I^{R}\) and \(I^{M}\) as input and produces the latent code \(L^{C}=\{F_{i}^{C}\,|\,0\leq i\leq 3\}\). Next, we use HyperConv's convolution operation to process \(L^{C}\) and obtain \(L^{H}=\{F_{i}^{H}\,|\,0\leq i\leq 3\}\). Finally, we decode \(L^{H}\) with FaceDecoder to get the predicted base face \(I^{B}\in\mathbb{R}^{3\times H\times W}\). The HyperConv implicit function can be formulated as follows:

\[\mathcal{F}_{\Theta}:(L^{C})\rightarrow(L^{H}), \tag{1}\]

where \(\Theta\) is the weight parameter of the HyperConv convolution operation, which is predicted by the HyperNet. HyperNet is composed of MLPs, with the audio deep features \(\{F_{i}^{A}\,|\,0\leq i\leq 3\}\) extracted by AudioEncoder as input. For the input of AudioEncoder, we follow [8] and extract the Mel-spectrogram of the audio as \(A^{M}\in\mathbb{R}^{H^{A}\times W^{A}}\). The size of the audio Mel-spectrogram is usually \(16\times 80\), i.e., \(H^{A}\)=16 and \(W^{A}\)=80. In contrast, the image size is usually not the same as the size of the audio Mel-spectrogram, i.e., \(H^{I}\)=\(W^{I}\)=128 by default in our method. In our method, we do not need to unify the dimensions of audio features and visual features, so no additional operations are required compared to other methods.
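To make the weight-generation step above concrete, the following is a minimal PyTorch sketch (not the paper's released implementation) of a hypernetwork that maps a pooled audio feature to the kernel and bias of a convolution applied to the visual latent code; the module name, the feature dimensions and the use of a \(1\times 1\) kernel are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioToConvWeights(nn.Module):
    """Hypothetical hypernetwork: maps an audio embedding to the weights
    of a 1x1 convolution that is then applied to the visual latent code."""
    def __init__(self, audio_dim=512, latent_dim=256, kernel_size=1):
        super().__init__()
        self.latent_dim = latent_dim
        self.kernel_size = kernel_size
        n_weights = latent_dim * latent_dim * kernel_size * kernel_size
        self.mlp = nn.Sequential(
            nn.Linear(audio_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_weights + latent_dim),  # kernel entries + bias
        )

    def forward(self, audio_feat, visual_latent):
        # audio_feat: (B, audio_dim); visual_latent: (B, latent_dim, H, W)
        params = self.mlp(audio_feat)
        kernels = params[:, :-self.latent_dim]
        biases = params[:, -self.latent_dim:]
        out = []
        for i in range(visual_latent.shape[0]):  # per-sample weights -> loop over batch
            w = kernels[i].view(self.latent_dim, self.latent_dim,
                                self.kernel_size, self.kernel_size)
            out.append(F.conv2d(visual_latent[i:i + 1], w, biases[i],
                                padding=self.kernel_size // 2))
        return torch.cat(out, dim=0)
```

Because the generated convolution weights differ per sample, the sketch loops over the batch; a grouped convolution is the usual trick to vectorize this kind of per-sample filtering.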
#### III-A2 Loss Function for Base Face Generation

To be competitive in lip synchronization and fidelity, we constrain the generated base face by integrating multiple losses. We adopt the architecture from [8] to design a quality discriminator, which we call HyperCtrolDiscriminator and denote as \(\mathcal{D}^{B}\). Therefore, the overall generation process can be formulated as follows:

\[I^{B}=\mathcal{G}^{B}((I^{R}\oplus I^{M}),A^{M}), \tag{2}\]

where \(\oplus\) indicates concatenation. In this way, we can consider the base face generation as a generator, denoted as \(\mathcal{G}^{B}\), consisting of the following modules: FaceEncoder, HyperConv, FaceDecoder, AudioEncoder and HyperNet. We train the discriminator by adding the following loss:

\[\begin{split}\mathcal{L}_{d}^{B}=\mathbb{E}_{I^{GT}}[log(1- \mathcal{D}^{B}(I^{GT}))]\\ +\mathbb{E}_{I^{B}}[log(\mathcal{D}^{B}(I^{B}))],\end{split} \tag{3}\]

**Base Adversarial Loss**: We employ the adversarial loss to constrain the realism of our generated images:

\[\mathcal{L}_{a}^{B}=\mathbb{E}_{I^{B}}[log(1-\mathcal{D}^{B}(I^{B}))], \tag{4}\]

**Base Reconstruction Loss**: We achieve visual reconstruction by constraining the \(l_{1}\) loss between the generated base face and the ground truth:

\[\mathcal{L}_{r}^{B}=\frac{1}{N}\sum_{i=1}^{N}||I^{B}-I^{GT}||_{1}, \tag{5}\]

**Base LPIPS Loss**: We employ the Learned Perceptual Image Patch Similarity loss [39] to constrain the generated images:

\[\mathcal{L}_{l}^{B}=\frac{1}{N}\sum_{i=1}^{N}LPIPS(I^{B},I^{GT}), \tag{6}\]

**Base Audio-Visual Sync Loss**: We follow [20] and use the audio-visual sync module proposed in [8, 40]. We train the audio-visual sync module, \(\mathcal{F}^{A}\) and \(\mathcal{F}^{V}\), on the LRS2 [41] dataset and do not fine-tune it on any generated frames. Five generated frames (lower half only) correspond to one audio segment, and the features obtained by \(\mathcal{F}^{A}\) and \(\mathcal{F}^{V}\) are represented as \(f_{a}\) and \(f_{v}\), respectively. The cosine similarity between the output features is computed as follows:

\[d_{sync}(f_{a},f_{v})=\frac{f_{a}\cdot f_{v}}{||f_{a}||_{2}\cdot||f_{v}||_{2}}, \tag{7}\]

The Audio-Visual Sync Loss can then be formulated as:

\[\mathcal{L}_{av}^{B}=-\frac{1}{N}\sum_{i=1}^{N}(log(d_{sync}(\mathcal{F}^{A}(A_{i}^{M}),\mathcal{F}^{V}(\mathbf{I}_{i}^{B})))), \tag{8}\]

where \(\mathbf{I}_{i}^{B}=\{I_{n}^{B}\}_{n=i-2}^{i+2}\). To summarize, the training loss for the base face generation stage can be formulated as follows:

\[\mathcal{L}_{total}^{B}=\lambda_{a}^{B}\mathcal{L}_{a}^{B}+\lambda_{r}^{B} \mathcal{L}_{r}^{B}+\lambda_{l}^{B}\mathcal{L}_{l}^{B}+\lambda_{av}^{B} \mathcal{L}_{av}^{B}, \tag{9}\]

where \(\lambda_{a}^{B}\), \(\lambda_{r}^{B}\), \(\lambda_{l}^{B}\), \(\lambda_{av}^{B}\) are the hyper-parameter weights.
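As a concrete reading of Eqs. (7) and (8), the snippet below computes the cosine similarity between the audio and visual embeddings and turns it into a binary cross-entropy with target 1 (i.e., \(-\log d_{sync}\)); the encoders \(\mathcal{F}^{A}\) and \(\mathcal{F}^{V}\) are assumed to be given (e.g., a pre-trained SyncNet), and the clamping constant is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def sync_loss(audio_emb, video_emb, eps=1e-7):
    """Audio-visual sync loss, cf. Eqs. (7)-(8).

    audio_emb, video_emb: (B, D) outputs of the pre-trained audio and
    visual encoders F^A and F^V for one audio segment / five lower-half frames.
    """
    sim = F.cosine_similarity(audio_emb, video_emb, dim=1)        # Eq. (7)
    sim = sim.clamp(min=eps, max=1.0 - eps)                        # keep log well-defined
    # BCE with an all-ones target reduces to -log(sim), averaged over the batch (Eq. (8))
    return F.binary_cross_entropy(sim, torch.ones_like(sim))
```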
### _High-Fidelity Rendering_

#### III-B1 HRDecoder

We construct a relatively simple High-Resolution Decoder (HRDecoder) consisting of a base convolution module, an upsampling convolution module, and an output convolution block. The transposed convolution in the upsampling convolution module converts lower-resolution features into higher-resolution features. HRDecoder takes the concatenation of the base face generated in the first stage and the corresponding face landmark sketch as input and outputs a high-fidelity face under the guidance of the landmark sketch. The high-fidelity rendering process can be formulated as follows:

\[I^{HR}=\mathcal{F}^{HR}(I^{B}\oplus I^{S}), \tag{10}\]

where \(\oplus\) indicates concatenation, \(I^{S}\) is the face landmark sketch, \(I^{HR}\) is the high-fidelity face, and \(\mathcal{F}^{HR}\) is the HRDecoder. We utilize the mediapipe tool [42] to detect the face landmark sketch from the base face. To optimize the HRDecoder, we use the model trained in the first stage to generate the corresponding base faces and landmark sketches on the dataset as the training data for this stage.

#### III-B2 Loss Function for High-Fidelity Rendering

To obtain high-fidelity faces, we define a discriminator, HRDiscriminator, at this stage, denoted as \(\mathcal{D}^{HR}\). We train HRDiscriminator by adding the following loss:

\[\begin{split}\mathcal{L}_{disc}^{HR}=\mathbb{E}_{I^{GT}}[log(1- \mathcal{D}^{HR}(I^{GT}))]\\ +\mathbb{E}_{I^{HR}}[log(\mathcal{D}^{HR}(I^{HR}))],\end{split} \tag{11}\]

**HR Adversarial Loss**: As in Eq. 4, we employ an adversarial loss to constrain the realism of the HRDecoder output:

\[\mathcal{L}_{a}^{HR}=\mathbb{E}_{I^{HR}}[log(1-\mathcal{D}^{HR}(I^{HR}))], \tag{12}\]

**HR Perceptual Loss**: We employ the pre-trained VGG [43], denoted as \(\phi\), to extract image features and calculate the \(l_{1}\) loss between the features to constrain the generated images:

\[\mathcal{L}_{p}^{HR}=\frac{1}{N}\sum_{i=1}^{N}||\phi(I^{HR})-\phi(I^{GT})||_{1}, \tag{13}\]

**HR Reconstruction Loss**: As in Eq. 5, we constrain the \(l_{1}\) loss between the generated high-fidelity face and the GT:

\[\mathcal{L}_{r}^{HR}=\frac{1}{N}\sum_{i=1}^{N}||I^{HR}-I^{GT}||_{1}, \tag{14}\]

**HR Lip Loss**: To better optimize the lip region, we use the mask of the lip region to constrain an LPIPS loss and a reconstruction loss on the lip region:

\[\begin{split}\mathcal{L}_{l}^{HR}=\frac{1}{N}\sum_{i=1}^{N}(LPIPS (I_{lip}^{HR},I_{lip}^{GT})\\ +||(I^{HR}-I^{GT})*I_{lip}^{mask}||_{1}),\end{split} \tag{15}\]

where \(I_{lip}^{HR}\) and \(I_{lip}^{GT}\) are cropped according to the lip bounding boxes of \(I^{HR}\) and \(I^{GT}\), and \(I_{lip}^{mask}\) is the lip mask. The training loss for the high-fidelity rendering is:

\[\begin{split}\mathcal{L}_{total}^{HR}=\lambda_{a}^{HR}\mathcal{L }_{a}^{HR}+\lambda_{p}^{HR}\mathcal{L}_{p}^{HR}\\ +\lambda_{r}^{HR}\mathcal{L}_{r}^{HR}+\lambda_{l}^{HR}\mathcal{L}_{ l}^{HR},\end{split} \tag{16}\]

where \(\lambda_{a}^{HR}\), \(\lambda_{p}^{HR}\), \(\lambda_{r}^{HR}\), \(\lambda_{l}^{HR}\) are the hyper-parameter weights.
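The lip-region term of Eq. (15) can be sketched as follows; the lip bounding box, the lip mask and the `lpips_fn` perceptual metric (e.g., the LPIPS package) are assumed to be supplied by the caller, so this is only an illustration of the loss structure, not the paper's code.

```python
import torch

def hr_lip_loss(pred, gt, lip_mask, lip_box, lpips_fn):
    """Lip-region loss, cf. Eq. (15).

    pred, gt:  (B, 3, H, W) generated and ground-truth faces
    lip_mask:  (B, 1, H, W) binary mask of the lip region
    lip_box:   (y1, y2, x1, x2) lip bounding box shared by the batch (assumption)
    lpips_fn:  callable returning a perceptual distance per sample
    """
    y1, y2, x1, x2 = lip_box
    pred_lip = pred[:, :, y1:y2, x1:x2]
    gt_lip = gt[:, :, y1:y2, x1:x2]
    perceptual = lpips_fn(pred_lip, gt_lip).mean()        # LPIPS on the lip crop
    masked_l1 = ((pred - gt).abs() * lip_mask).mean()      # L1 restricted to the lip mask
    return perceptual + masked_l1
```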
## IV Experiments

### _Experimental Settings_

**Implementation Details.** We follow [8, 20] to process video frames with centered crops of size \(128\times 128\) at \(25\) fps, and calculate Mel-spectrograms of size \(16\times 80\) from 16kHz audio using a window size of 800 and a hop size of 200. For HyperLips-HR, we set the output upsampling to HR\(\times 1\) by default, i.e., no upsampling, so the size of the output image remains \(128\times 128\). Hyper-parameters are set empirically: \(\lambda_{a}^{B}\)=0.2, \(\lambda_{r}^{B}\)=0.5, \(\lambda_{l}^{B}\)=0.5, \(\lambda_{av}^{B}\)=0.3, and \(\lambda_{a}^{HR}\), \(\lambda_{p}^{HR}\), \(\lambda_{r}^{HR}\), \(\lambda_{l}^{HR}\) are all set to 1. When training the HyperLips-Base and HyperLips-HR models, we set the learning rate to 0.0001 and use the Adam optimizer in PyTorch. All experiments are performed on a single NVIDIA TITAN RTX GPU.

**Dataset.** Two audio-visual datasets, LRS2 [44] and MEAD-Neutral [45], are used in our experiments. **LRS2** is a sentence-level dataset with over 140,000 utterances, consisting of 48,164 video clips from outdoor shows on BBC television. We randomly sample 80 videos from the test set for evaluating the algorithms quantitatively. **MEAD-Neutral** is a part of the MEAD dataset. The MEAD dataset records around 40 hours of emotional in-the-lab videos at 1080P resolution. We select a total of 1610 videos with neutral emotion and frontal view as the MEAD-Neutral dataset and another 80 videos for testing.

**Comparison Methods.** We compare our method against state-of-the-art methods [20, 21, 22, 8, 17] on person-generic audio-driven talking face generation. **Wav2Lip** [8] uses an encoder-decoder model learned via adversarial training to produce talking face videos. **ATVGnet** [17] takes advantage of 2D landmarks to generate talking face videos from the input audio and an identity frame. **SyncTalkFace** [20] proposes an Audio-Lip Memory that brings in visual information of the mouth region corresponding to the input audio and enforces fine-grained audio-visual coherence. **IP_LAP** [21] proposes a two-stage framework consisting of audio-to-landmark generation and landmark-to-video rendering procedures. **DINet** [22] proposes a Deformation Inpainting Network for high-resolution face visual dubbing. For more comparison settings, please refer to our supplementary document.

### _Evaluation Metrics_

We use Peak Signal-to-Noise Ratio (**PSNR**) and Structural Similarity (**SSIM**) [46] to measure the similarity between generated and ground-truth images, and we use dlib [47] to compute the lip landmark distance (**LMD**) between ground-truth frames and generated frames. **LSE-C** and **LSE-D**, proposed by [8], are the confidence score (higher is better) and distance score (lower is better) between audio and video features from SyncNet [40], respectively. LSE-C and LSE-D measure correspondence between audio and visual features, while LMD directly measures visual-to-visual coherence. For a fair comparison, we evaluate the cropped region of the face based on the face detector used in Wav2Lip [8]. We generate the corresponding videos using different methods based on different audio in the test dataset. Specifically, the face in the video frame is first detected by face detection. Then, the corresponding face area is resized according to the resolution required by each method. After the face is generated by the corresponding method, it is pasted back into the original video. For a fair comparison, frames extracted from talking face videos, which are cropped based on the face detector used in Wav2Lip, are resized to \(160\times 160\). When calculating the related metrics, we detect faces in the generated video and the corresponding ground-truth video, resize them to \(160\times 160\), and then perform frame-by-frame calculations. Wav2Lip synthesizes faces at \(96\times 96\) resolution; DINet synthesizes faces at \(416\times 320\) resolution; ATVGnet, IP_LAP, SyncTalkFace, and ours synthesize faces at \(128\times 128\) resolution. For LSE-D and LSE-C, we generate talking face videos by inputting audio and faces that come from different videos in the test datasets, and use SyncNet to compute LSE-C and LSE-D on the generated talking face videos.
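For reference, here is a minimal sketch of how an LMD-style score can be computed with dlib's 68-point landmark predictor (mouth points are indices 48-67); the predictor file path, the use of the first detected face, and the exact normalization are assumptions rather than details taken from the paper.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# assumed local path to the standard dlib 68-landmark model
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lip_landmarks(img):
    """Return the 20 mouth landmarks (points 48-67) of the first detected face, or None."""
    faces = detector(img, 1)
    if len(faces) == 0:
        return None
    shape = predictor(img, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(48, 68)], dtype=float)

def lmd(frames_gen, frames_gt):
    """Mean L2 distance between lip landmarks of generated and ground-truth frames."""
    dists = []
    for gen, gt in zip(frames_gen, frames_gt):
        lg, lt = lip_landmarks(gen), lip_landmarks(gt)
        if lg is None or lt is None:
            continue  # skip frames where detection fails
        dists.append(np.linalg.norm(lg - lt, axis=1).mean())
    return float(np.mean(dists)) if dists else float("nan")
```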
### _Quantitative Comparison_

Tables I and II show the quantitative comparison on the LRS2 and MEAD-Neutral datasets, respectively. DINet(O) denotes testing on the MEAD dataset using the checkpoints officially released by DINet. DINet(R) is the result of our reproduction on the MEAD-Neutral dataset following the code of DINet. In the tables, our HyperLips-HR output resolution is \(128\times 128\) without upsampling. The results show that, for both our HyperLips-Base and our HyperLips-HR, the generated faces are significantly better than those of other methods in terms of the PSNR, SSIM, and LMD metrics. Our HyperLips-HR is significantly better than our HyperLips-Base in terms of PSNR and SSIM, which shows that our HRDecoder enhances high-fidelity face rendering. However, there is no significant improvement in the LMD metric, which shows that HRDecoder does not help improve lip synchronization. Regarding PSNR and SSIM, our results in Table I are better than those in Table II. This is because the face quality in the LRS2 dataset is worse than that in the MEAD dataset, making it easier for the faces generated by our model to reach the quality of LRS2. For LSE-C and LSE-D, Wav2Lip performs better and even outperforms the ground truth. The SyncNet weights we used in the test were taken from [40] without fine-tuning. In fact, as discussed in [20, 48], these two metrics only prove that the lip-sync results are nearly comparable to the ground truth, not better. On the one hand, the dataset used for model training may not match the distribution of the dataset we tested on, so these two metrics may not accurately reflect lip synchronization; on the other hand, we perform better on the LMD metric, which is another synchronization metric that measures correspondence in the visual domain.

### _Qualitative Comparison_

**User Study.** To verify the video quality and lip synchronization of our talking face generation method, we invited 20 participants to evaluate the generated videos. We randomly selected 5 videos from the MEAD-Neutral [45] test dataset and generated different videos using different methods: Wav2Lip [8], IP_LAP [21], DINet(R) [22], DINet(O) [22] and HyperLips-HR (Ours). We asked the participants to vote on two evaluation criteria: the video quality of the results and whether lip synchronization is maintained. We collected 100 votes for each evaluation criterion and present the results as a box plot in Fig. 5. As can be seen, our results stand out from the other methods in terms of both video quality and lip synchronization.

**Visualization Comparison.** Fig. 3 and Fig. 4 show examples from the LRS2 and MEAD-Neutral datasets, respectively. Compared with other methods, our method produces images that are visually closer to the ground truth and shows no artifacts in our results. The superiority of our method is less visible on the LRS2 dataset because the faces in this dataset are relatively blurred. But on the MEAD dataset, our method renders faces clearly, and even teeth can be seen clearly.
Our method also excels in lip synchronization. For example, for the last face on the left in Fig. 4, our results perfectly reproduce the current mouth shape, which is slightly open with teeth exposed, while the results from IP_LAP do not.

Fig. 3: Qualitative comparisons with state-of-the-art methods on the LRS2 dataset. Our method is capable of rendering more high-fidelity faces. More results are presented in the supplementary material.

Fig. 4: Qualitative comparisons with state-of-the-art methods on the MEAD-Neutral dataset. Our method is capable of rendering more high-fidelity faces. More results are presented in the supplementary material.

Fig. 5: User study about video quality and lip synchronization.

### _Ablation Study_

In this section, we perform ablation studies to validate the effect of the core components in our method and the performance gain derived from high-fidelity rendering.

**The Size of HRDecoder Output.** Our input in the high-fidelity rendering stage is fixed at \(128\times 128\), and the output can render faces at different resolutions, such as \(128\times 128\) (HR \(\times\) 1), \(256\times 256\) (HR \(\times\) 2), and \(512\times 512\) (HR \(\times\) 4), through the transposed convolution of HRDecoder. High-resolution faces can often produce finer images, which is convenient for application to high-resolution videos. In Table III, we study the effect of different output sizes on the LRS2 and MEAD datasets. Our HR models clearly outperform the Base model in terms of image quality (such as the PSNR and SSIM metrics); e.g., for PSNR, the HR\(\times\)1 model reaches 34.914 while the Base model reaches 33.953. However, in terms of lip synchronization (such as the LMD metric), the HR models are only comparable to the Base model. Moreover, for all HR models, the image quality metrics do not increase significantly with increasing size, and even the image quality at size \(512\times 512\) is comparable to that at size \(128\times 128\). It can be concluded that, on the LRS2 and MEAD datasets, a size of \(256\times 256\) can already provide a cost-effective result.

**Effectiveness for Sketch Input of HRDecoder.** In HRDecoder, we introduce face landmark sketches to guide the generation of high-fidelity face images. We performed corresponding ablation experiments on the MEAD dataset to verify the influence of sketches as input to HRDecoder on the rendering results. In Table IV, "w/o sketch" means that no face landmark sketches are used as input, and "w/ sketch" means that face landmark sketches are used as input. The results show that, for all HR models, the rendering results with face landmark sketches as a guiding input are better than those without sketches. This suggests that sketches are beneficial for generating high-fidelity face images.

**Effectiveness for Finetuning.** Although our method supports dubbing any face video, it may generally perform poorly on unseen faces. Therefore, we performed an ablation experiment on this. We chose a video of Kate's speech and used 4 minutes and 40 seconds of the video as training data and another 18 seconds as test data. As shown in Table V, we fine-tuned the models pre-trained on the MEAD and LRS2 datasets on Kate's videos and obtained the corresponding results. The results show that the fine-tuned model produces slightly better results in terms of visual quality, without a significant improvement in lip synchronization.
**The Impact of Landmark Detection Failure.** There is a drawback in landmark-based talking face generation methods (such as IP_LAP, ATVGnet, etc.): faces cannot be generated if landmark detection fails. Methods that are not based on landmark detection, such as Wav2Lip, do not have this problem. Our method is not based on landmark detection in the first stage (Base model), so it can still generate the correct face even if the face deviation is severe, as shown in Fig. 6. As for the second stage (HR models), our method still has this shortcoming.

## V Conclusion

We propose a hypernetwork for controlling lip movements with audio information to achieve lip synchronization in the task of talking face generation. We first use a FaceEncoder to extract the visual face information from the source video as a latent code; we then use HyperConv to modify the latent code so that the lip movement is synchronized with the audio; finally, FaceDecoder decodes the modified and synchronized latent code into visual face content. The weight parameters of HyperConv are updated by HyperNet using the audio features as input. To achieve high-fidelity face rendering, we propose HRDecoder, which uses landmark guidance for face detail enhancement. Our method therefore effectively improves lip synchronization and face visual quality.
2305.10825
DiffUTE: Universal Text Editing Diffusion Model
Diffusion model based language-guided image editing has achieved great success recently. However, existing state-of-the-art diffusion models struggle with rendering correct text and text style during generation. To tackle this problem, we propose a universal self-supervised text editing diffusion model (DiffUTE), which aims to replace or modify words in the source image with another one while maintaining its realistic appearance. Specifically, we build our model on a diffusion model and carefully modify the network structure to enable the model for drawing multilingual characters with the help of glyph and position information. Moreover, we design a self-supervised learning framework to leverage large amounts of web data to improve the representation ability of the model. Experimental results show that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. Our code will be available in \url{https://github.com/chenhaoxing/DiffUTE}.
Haoxing Chen, Zhuoer Xu, Zhangxuan Gu, Jun Lan, Xing Zheng, Yaohui Li, Changhua Meng, Huijia Zhu, Weiqiang Wang
2023-05-18T09:06:01Z
http://arxiv.org/abs/2305.10825v3
# DiffUTE: Universal Text Editing Diffusion Model

###### Abstract

Diffusion model based language-guided image editing has achieved great success recently. However, existing state-of-the-art diffusion models struggle with rendering correct text and text style during generation. To tackle this problem, we propose a universal self-supervised text editing diffusion model (DiffUTE), which aims to replace or modify words in the source image with another one while maintaining its realistic appearance. Specifically, we build our model on a diffusion model and carefully modify the network structure to enable the model for drawing multilingual characters with the help of glyph and position information. Moreover, we design a self-supervised learning framework to leverage large amounts of web data to improve the representation ability of the model. Experimental results show that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. Our code will be available in [https://github.com/chenhaoxing/DiffUTE](https://github.com/chenhaoxing/DiffUTE).

## 1 Introduction

Due to the significant progress of social media platforms, image editing technology has become a common demand. AI-based technology has significantly lowered the threshold for fancy image editing, which traditionally required professional software and labor-intensive manual operations. Deep neural networks can now achieve remarkable results in various image editing tasks, such as image inpainting Feng et al. (2022), image colorization Zhang et al. (2022), and object replacement Kwon and Ye (2022), by learning from rich paired data. Furthermore, recent advances in diffusion models Brack et al. (2023); Brooks et al. (2022); Saharia et al. (2022) enable precise control over generation quality and diversity during the diffusion process. By incorporating a text encoder, diffusion models can be adapted to generate natural images following text instructions, making them well-suited for image editing.

Despite the impressive results, existing image editing methods still encounter numerous challenges. As a typical task, scene text editing is widely used in practical applications such as text-image synthesis, advertising photo editing, text-image correction and augmented reality translation. It aims to replace text instances (i.e., the foreground) in an image without compromising the background. However, the fine-grained and complex structures of text instances raise two major challenges: (i) **How to transfer text style and retain background texture.** Specifically, text style includes factors such as font, color, orientation, stroke size, and spatial perspective. It is difficult to precisely capture the complete text style in the source image due to the complexity of the background. (ii) **How to maintain the consistency of the edited background**, especially for complex scenes, e.g., menus and street store signs.

Numerous studies formulate scene text editing as a style transfer task and approach it with generative models like GANs Wu et al. (2019); Qu et al. (2022). Typically, a cropped text region with the target style is needed as the reference image. Such methods then transfer a rendered text in the desired spelling to match the reference image's style and the source image's background. However, the two major challenges for scene text editing remain. (i) These methods are currently constrained to editing English and fail to accurately generate complex text styles (e.g., Chinese).
(ii) The process of cropping, transferring style and blending results in less natural-looking outcomes. End-to-end pipelines are needed for consistency and harmony.

To address the above issues, we present DiffUTE, a general diffusion model designed to tackle high-quality multilingual text editing tasks. DiffUTE utilizes character glyphs and text locations in source images as auxiliary information to provide better control during character generation. As shown in Figure 1, our model can generate very realistic text. The generated text is intelligently matched to the most contextually appropriate text style and seamlessly integrated with the background while maintaining high quality.

Figure 1: Examples of text editing. DiffUTE achieves the best result among existing diffusion models.

The major contribution of this paper is the universal text editing diffusion model proposed to edit scene text images. DiffUTE possesses obvious advantages over existing methods in several folds:

1. We present DiffUTE, a novel universal text editing diffusion model that can edit any text in any image. DiffUTE generates high-quality text through fine-grained control of glyph and position information. DiffUTE is capable of seamlessly integrating various styles of text characters into the image context, resulting in realistic and visually pleasing outputs.
2. We design a self-supervised learning framework that enables the model to be trained with large amounts of scene text images. The framework allows the model to learn from the data without annotation, making it a highly efficient and scalable solution for scene text editing.
3. We conduct extensive experiments to evaluate the performance of DiffUTE. Our method performs favorably over prior arts for text image editing, as measured by quantitative metrics and visualization.

## 2 Preliminaries

In this paper, we adopt Stable Diffusion (SD) Rombach et al. (2022) as our baseline method to design our network architecture. SD utilizes a variational auto-encoder (VAE) to enhance computation efficiency. Through the VAE, SD performs the diffusion process in a low-dimensional latent space. Specifically, given an input image \(x\in\mathbb{R}^{H\times W\times 3}\), the encoder \(\mathcal{E}_{v}\) of the VAE transforms it into a latent representation \(z\in\mathbb{R}^{h\times w\times c}\), where \(\alpha=\frac{H}{h}=\frac{W}{w}\) is the downsampling factor and \(c\) is the latent feature dimension. The diffusion process is then executed in the latent space, where a conditional UNet denoiser Ronneberger et al. (2015) \(\epsilon_{\theta}(z_{t},t,y)\) is employed to predict the noise given the noisy latent \(z_{t}\), the generation condition input \(y\) and the current time step \(t\). The condition information \(y\) may encompass various modalities, e.g., natural language, semantic segmentation maps and Canny edge maps. To pre-process \(y\) from various modalities, SD employs a domain-specific encoder \(\tau_{\theta}\) to project \(y\) into an intermediate representation \(\tau_{\theta}(y)\in\mathbb{R}^{M\times d_{\tau}}\), which is then mapped to the intermediate layers of the UNet via a cross-attention mechanism implementing \(\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{\top}}{\sqrt{d}})\cdot V\), where \(Q=W_{Q}^{(i)}\cdot\phi_{i}(z_{t})\), \(K=W_{K}^{(i)}\cdot\tau_{\theta}(y)\), \(V=W_{V}^{(i)}\cdot\tau_{\theta}(y)\).
\(W_{Q}^{(i)},W_{K}^{(i)},W_{V}^{(i)}\) are learnable projection matrices, \(d\) denotes the output dimension of key (\(K\)) and query (\(Q\)) features, and \(\phi_{i}(z_{t})\in\mathbb{R}^{N\times d_{\tau}^{i}}\) denotes a flattened intermediate representation of the UNet implementing \(\epsilon_{\theta}\). In the scenario of text-to-image generation, the condition \(C=\tau_{\theta}(y)\) is produced by encoding the text prompts \(y\) with a pre-trained CLIP text encoder \(\tau_{\theta}\). The overall training objective of SD is defined as

\[\mathcal{L}_{sd}=\mathbb{E}_{\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t} \left[\|\epsilon-\epsilon_{\theta}(z_{t},t,\tau_{\theta}(y))\|_{2}^{2}\right], \tag{1}\]

Therefore, \(\tau_{\theta}\) and \(\epsilon_{\theta}\) can be jointly optimized via Equation (1).
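For readers less familiar with this conditioning mechanism, the following is a minimal, self-contained PyTorch sketch of a single-head cross-attention layer of the kind described above; the dimensions are illustrative assumptions, and this is a simplified sketch rather than the Stable Diffusion implementation itself.

```python
import math
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross-attention: flattened UNet features attend to condition tokens tau_theta(y)."""
    def __init__(self, query_dim, context_dim, inner_dim=64):
        super().__init__()
        self.scale = 1.0 / math.sqrt(inner_dim)
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)    # W_Q
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)  # W_K
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)  # W_V
        self.to_out = nn.Linear(inner_dim, query_dim)

    def forward(self, x, context):
        # x: (B, N, query_dim) flattened UNet features phi_i(z_t)
        # context: (B, M, context_dim) condition tokens tau_theta(y)
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, N, M)
        return self.to_out(attn @ v)                                       # (B, N, query_dim)
```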
Although textual information consists of just multiple strokes of a two-dimensional structure, it has fine-grained features, and even slight movement or distortion lead to unrealistic image generation. In contrast, natural images have a much higher tolerance level as long as the semantic representation of the object is accurate. To ensure the generation of perfect text representations, we introduce two types of fine-grained guidance: positional and glyph. Positional guidance.Unlike the small differences between natural images, the latent feature distributions of character pixels differ dramatically. Text generation requires attention to specific local regions instead of the existing global control conditions for natural images Zhang and Agrawala (2023); Mou et al. (2023); Cheng et al. (2023) (e.g., segmentation maps, depth maps, sketch and grayscale images). To prevent model collapse, we introduce position control to decouple the distribution between different regions and make the model focus on the region for text generation. As shown in Figure 2, a binary mask is concatenated to the original image latent features. Glyph guidance.Another important issue is to precisely control the generation of character strokes. Language characters are diverse and complex. For example, a Chinese character may consist of more than 20 strokes, while there are more than 10,000 common Chinese characters. Learning directly from large-scale image-text datasets without explicit knowledge guidance is complicated. Liu et al. Figure 3: Inference process of our proposed universal text editing diffusion model. Users can directly input the content they want to edit, and the large language model will understand their needs and provide the areas to be edited and the target text to DiffUTE, which then completes the text editing. [2022a] proposes that the character-blinded can induce robust spelling knowledge for English words only when the model parameters are larger than 100B and cannot generalize well beyond Latin scripts such as Chinese and Korean. Therefore, we heuristically incorporate explicit character images as additional conditional information to generate text accurately into the model diffusion process. As shown in Figure 2, we extract the latent feature of the character image as a control condition. ### Self-supervised Training Framework for Text Editing It is impossible to collect and annotate large-scale paired data for text image editing, i.e., \(\left\{(x_{s},x_{g},m),y\right\}\). It may take great expense and huge labor to manually paint reasonable editing results. Thus, we perform self-supervised training. Specifically, given an image and the OCR bounding box of a sentence in the image, our training data is composed of \(\left\{(x_{m},x_{g},m),x_{s}\right\}\). For diffusion-based inpainting models, the condition \(C\) is usually text, which is usually processed by a pre-trained CLIP text encoder. Similarly, a naive solution is directly replacing it with an image encoder. To better represent glyph images, we utilize the pre-trained OCR encoder Li et al. [2023] as the glyph encoder. Such naive solution converges well on the training set. However, the generated quality is far from satisfactory for test images. We argue that the main reason is that the model learns a mundane mapping function under the naive training scheme: \(x_{g}+x_{s}\cdot(1-m)=x_{s}\). It impedes the network from understanding text style and layout information in the image, resulting in poor generalization. 
To alleviate such issue, we use a uniform font style (i.e., "arialuni") and regenerate the corresponding text image, as shown in Figure 2 with the example of "RM 30.00". Thus, we prevent the model from learning such a trivial mapping function and facilitate model understanding in a self-supervised training manner. Our self-supervised training process is summarized as follows: (1) An ocr region is randomly selected from the image and the corresponding text image is regenerated with a uniform font style. (2) The regenerated character image \(x_{g}\) is fed into glyph encoder to get condition glyph embedding \(e_{g}\). (3) The masked image latent vector \(x_{m}\), mask \(m\) and noisy image latent vector \(z_{t}\) is concatenated to form a new latent vector \(z_{t}^{\prime}=\text{Concat}(x_{m},m,z_{t})\). After dimension adjustment through a convolution layer, the feature vector \(\hat{z}_{t}=\text{Conv}(z_{t}^{\prime})\) is fed into the UNet as the query component. Consequently, the training objective of DiffUTE is: \[\mathcal{L}_{\text{DiffUTE}}=\mathbb{E}_{\mathcal{E}_{v}(x_{s}),x_{g},x_{m},m, \epsilon-N(0,1),t}\left[||\epsilon-\epsilon_{\theta}(z_{t},t,x_{g},x_{m},m)|| _{2}^{2}\right]. \tag{2}\] ### Interactive Scene Text Editing with LLM To enhance the interaction capability of the model, we introduced the large language model (LLM), i.e., ChatGLM Zeng et al. [2023]. Moreover, we fine-tuned ChatGLM using the extracted OCR data to facilitate a better understanding of structured information by ChatGLM, The inference process of DiffUTE is show in Figure 3. We first provide the OCR information extracted by the OCR detector and the target that the user wants to edit with to LLM, which will return the target text and its corresponding bounding box. Then, we use bounding boxes to generate mask and masked images, and generate images through a complete diffusion process (\(t=\left\{T,T-1,...,0\right\}\)) by DDIM Song et al. [2020] sampling strategy. By using ChatGLM to understand natural language instruction, we avoid requiring users to provide masks for the areas they want to edit, making our model more convenient. ## 4 Experiments ### Data Preparation Due to the lack of large-scale datasets for generating text image compositions, we collect 5M images by combining the web-crawled data and publicly available text image datasets, including CLDA Li, XFUND Xu et al. [2022], PubLayNet Zhong et al. [2019] and ICDAR series competitions Zhang et al. [2019], Nayef et al. [2019], Karatzas et al. [2015], to prepare our training dataset. To verify the effectiveness of our model, we randomly selected 1000 images from ArT Chng et al. [2019], TextOCR Singh et al. [2021], ICDAR13 Karatzas et al. [2015] and web data collected by ourselves to form the test set, respectively. All the images are cropped/resized to \(512\times 512\) resolution as model inputs. ### Implementation Details and Evaluation **Implementation details.** Our DiffUTE consists of VAE, glyph encoder and UNet. To obtain better reconstruction ability for text images, we first fine-tuned the VAE, which is initialized from the checkpoint of stable-diffusion-2 inpainting 2. The VAE is trained for three epochs with a batch size of 48 and a learning rate of 5e-6. We use a pre-trained OCR encoder as our glyph encoder, i.e., TROCR Li et al. (2023). During the training of DiffUTE, we set the batch size to 256, the learning rate to 1e-5, and the batch size to 5. Note that the weights of the glyph encoder and VAE were frozen during the training of DiffUTE. 
**Evaluation and metrics.** In our evaluation, we measure the accuracy of the generated text. We report OCR accuracy, calculated with the pre-trained recognition model Fang et al. (2021), and a human evaluation of the correctness of the generated text with respect to the target text, denoted as OCR and Cor, respectively.

**Baseline methods.** We compare DiffUTE with state-of-the-art scene text editing methods and diffusion models, i.e., Pix2Pix Isola et al. (2017), SRNet Wu et al. (2019), MOSTEL Qu et al. (2022), SD Rombach et al. (2022) and ControlNet Zhang and Agrawala (2023). Pix2Pix is an image translation network. To make the Pix2Pix network implement multiple style translations, we concatenate the style image and the target text in depth as the network input. Training SRNet requires different texts to appear at the same position and background, which does not exist in real-world datasets. Therefore, we use SynthTIGER Yim et al. (2021) to synthesize images for fine-tuning. For MOSTEL, we fine-tuned it on our dataset. For SD, we selected two baseline methods, i.e., stable-diffusion-inpainting 3 (SD1) and stable-diffusion-2-inpainting (SD2). For a fair comparison, we fine-tuned SD1 and SD2 by instruction tuning. The resulting models are termed SD1-FT and SD2-FT. In the NLP field, instruction tuning techniques are used to train models to perform tasks based on task instructions. We aim to accurately map text instructions to the corresponding text edits using the SD model. To achieve this, we constructed a dataset for fine-tuning. Each sample in the dataset consists of a language instruction describing the target text, a mask, and the ground truth. ControlNet is an image synthesis method that achieves excellent controllability by incorporating additional conditions to guide the diffusion process. To adapt this method to our text editing problem, we take the glyph image as the input to the ControlNet network.

\begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Web} & \multicolumn{2}{c}{ArT} & \multicolumn{2}{c}{TextOCR} & \multicolumn{2}{c}{ICDAR13} & \multicolumn{2}{c}{Average} \\ \cline{2-10} & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) \\ \hline \hline Pix2Pix & 17.24 & 16 & 13.52 & 11 & 15.74 & 14 & 15.48 & 15 & 15.50 & 14 \\ SRNet & 30.87 & 42 & 31.22 & 44 & 32.09 & 41 & 30.85 & 44 & 31.26 & 42.8 \\ MOSTEL & 48.93 & 61 & 60.73 & 68 & 45.97 & 53 & 53.76 & 59 & 52.35 & 60.3 \\ SD1 & 4.32 & 5 & 5.98 & 7 & 7.43 & 7 & 3.64 & 6 & 5.34 & 6.3 \\ SD2 & 5.88 & 7 & 6.94 & 9 & 9.29 & 11 & 5.32 & 8 & 6.86 & 8.8 \\ SD1-FT & 33.53 & 45 & 33.25 & 47 & 49.72 & 46 & 28.76 & 32 & 36.32 & 42.5 \\ SD2-FT & 46.34 & 51 & 49.69 & 44 & 62.89 & 59 & 46.87 & 46 & 51.45 & 50 \\ \hline \hline \multirow{2}{*}{DiffUTE} & **84.83** & **85** & **85.98** & **87** & **87.32** & **88** & **83.49** & **82** & **85.41** & **85.5** \\ & **+35.90** & **+24** & **+25.25** & **+19** & **+24.43** & **+29** & **+29.73** & **+23** & **+33.06** & **+25.2** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison across four datasets. \(\uparrow\) means the higher the better, underline indicates the second best method.
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Web} & \multicolumn{2}{c}{ArT} & \multicolumn{2}{c}{TextOCR} & \multicolumn{2}{c}{ICDAR13} & \multicolumn{2}{c}{Average} \\ \cline{2-10} & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) & OCR\(\uparrow\) & Cor\(\uparrow\) \\ \hline w/o PTT & 44.73 & 47 & 45.29 & 41 & 60.83 & 52 & 41.22 & 39 & 48.02 & 44.8 \\ w/o Pos. & 49.84 & 53 & 50.89 & 47 & 65.72 & 63 & 49.72 & 47 & 54.04 & 52.5 \\ w/o Gly. & 46.34 & 51 & 49.69 & 44 & 62.89 & 59 & 46.87 & 46 & 51.45 & 50.0 \\ \hline \multirow{2}{*}{DiffUTE} & **84.83** & **85** & **85.98** & **87** & **87.32** & **88** & **83.49** & **82** & **85.41** & **85.5** \\ & **+34.99** & **+32** & **+35.09** & **+40** & **+21.60** & **+25** & **+33.77** & **+35** & **+31.37** & **+33** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study results. (Pos.: position control, Gly.: Glyph control.)

### Comparison results

The quantitative results for text generation are shown in Table 1. We can see that our DiffUTE achieves state-of-the-art results on all datasets. For example, DiffUTE improves the average OCR accuracy and human-evaluated text correctness by 63.2% and 41.8% compared with the second-best method, MOSTEL. Moreover, our method achieves better results than the diffusion models and the fine-tuned diffusion models because our fine-grained control provides the model with prior knowledge of glyphs and positions. Furthermore, the poor performance of the instruction fine-tuned diffusion models also demonstrates the superiority of our inference approach combining ChatGLM, which achieves better editing effects. We further conducted a visualization experiment. As shown in Figure 4, our method successfully transfers the foreground text and background texture, resulting in a regular textual structure and a font consistent with the original text. Moreover, the background texture is clearer, and the overall similarity with real images is improved. In contrast, the results edited with the diffusion models often deviate from the target text, further validating the effectiveness of the glyph condition we introduced. Furthermore, other methods perform poorly when faced with the more challenging Chinese text generation task, whereas DiffUTE still achieves good generation results.

Figure 4: More visualization results of scene text editing. Our DiffUTE beats other methods with a significant improvement.

### Ablation results

The ablation studies examine two main aspects, namely 1) the effectiveness of the progressive training strategy of the VAE, and 2) the impact of position control and glyph control on the image generation performance of DiffUTE. The experimental results are shown in Table 2, Figure 5 and Figure 6.

**Progressive training strategy.** Without the progressive training strategy, the editing results become distorted and the accuracy of text generation significantly decreases.
The reason for such poor results is the complexity of the local structure of text: when there are too many characters in the image, the VAE needs to focus in order to learn to reconstruct local details efficiently. With our proposed progressive training strategy, the reconstruction ability of the model is significantly improved and more realistic results are obtained. The experimental results validate the effectiveness of this strategy and highlight the pivotal role of the VAE in the diffusion model.

**Fine-grained control.** When position control is not used, the mask and masked images at the input of the UNet are removed. When glyph control is not used, the latent code obtained from the text through the CLIP text encoder is used as the condition. When position control or glyph control is not used, there is a significant drop in performance. For example, when position control is not used, the OCR accuracy of the model drops by 36.7% and the Cor drops by 38.6%. When glyph control is not used, the model cannot generate accurate text, and the OCR accuracy of the model drops by 39.8% and the Cor drops by 41.5%. These results show that position control helps the model focus on the area where text is to be generated, while glyph control provides prior knowledge of the shape of the characters to help the model generate text more accurately.

Figure 5: Sample results of the ablation study.

## 5 Related Works

### Scene Text Editing

Style transfer techniques based on Generative Adversarial Networks (GANs) have gained widespread popularity for scene text editing tasks Roy et al. (2020); Huang et al. (2022); Kong et al. (2022); Lee et al. (2021); Shimoda et al. (2021); Yang et al. (2020); Zhan et al. (2019). These methods typically involve transferring the text style from a reference image to a target text image. STEFANN Roy et al. (2020), for instance, leverages a font-adaptive neural network and a color-preserving model to edit scene text at the character level. Meanwhile, SRNet Wu et al. (2019) employs a two-step approach that involves foreground-background separation and text spatial alignment, followed by a fusion model that generates the target text. Mostel Qu et al. (2022) improves upon these methods by incorporating stroke-level information to enhance the editing performance. However, despite their reasonable performance, these methods are often constrained in their ability to generate text in arbitrary styles and locations and can result in less natural-looking images.

### Image Editing

Text-guided image editing has attracted increasing attention in recent years among various semantic image editing methods. Early works utilized pretrained GAN generators and text encoders to progressively optimize images based on textual prompts Bau et al. (2021); Gal et al. (2021); Perez et al. (2003). However, these GAN-based manipulation methods encounter difficulties in editing images with complex scenes or diverse objects, owing to the limited modeling capability of GANs. The rapid rise and development of diffusion models Rombach et al. (2022); Saharia et al. (2022); Ruiz et al. (2022) have demonstrated powerful abilities in synthesizing high-quality and diverse images. Many studies Brack et al. (2023); Brooks et al. (2022) have employed diffusion models for text-driven image editing. Among various diffusion models, Stable Diffusion Rombach et al.
(2022) is one of the state-of-the-art models; it compresses images into a low-dimensional space using an auto-encoder and implements effective text-based image generation through cross-attention layers. This model can easily adapt to various tasks, such as text-based image inpainting and image editing. However, it has been observed that diffusion models exhibit poor visual text generation performance and are often prone to incorrect text generation. Only a few studies have focused on improving the text generation capability of diffusion models. Recently, one study trained a model to generate images containing specific text based on a large number of image-text pairs Liu et al. (2022). However, this work differs from ours in terms of application, as it focuses on text-to-image generation, while ours concentrates on editing text in images. Another ongoing work, ControlNet Zhang and Agrawala (2023), has demonstrated remarkable performance in image editing by providing reference images such as Canny edge images and segmentation maps. While ControlNet achieves remarkably impressive results, it performs poorly on text editing tasks. To obtain better editing results, we incorporate auxiliary glyph information into the conditional generation process and emphasize local control in all diffusion steps.

Figure 6: Examples of image reconstruction with our method DiffUTE.

### Large Language Model

Large language models (LLMs) refer to language models that contain billions (or more) of parameters and are trained on massive amounts of text data, such as GPT-3 Brown et al. (2020), Galactica Taylor et al. (2022), LLaMA Touvron et al. (2023) and ChatGLM Zeng et al. (2023). Among them, ChatGLM is a billion-scale language model with rudimentary question-answering and conversational capabilities. It differs from the BERT Devlin et al. (2018), GPT-3 and T5 Xue et al. (2021) architectures and is an autoregressive pre-training model that includes multiple objective functions. In this paper, we use ChatGLM to enhance the interaction capability of our model.

## 6 Conclusion and Limitations

In this paper, we argue that current diffusion models cannot generate realistic text in images. To tackle this problem, we present DiffUTE, a novel diffusion-based universal text editing model. DiffUTE generates high-quality text through fine-grained control of glyph and position information, and benefits from massive amounts of text images through a self-supervised training approach. Moreover, by integrating a large language model (i.e., ChatGLM), we can use natural language to edit the text in images, enhancing the editing usability and convenience of the model. Extensive experiments have shown that DiffUTE excels in textual correctness and image naturalness. The main limitation of our method is that the accuracy of the generated text decreases as the number of characters to be edited in the image increases. This is because, as the number of characters increases, the spatial complexity of the characters also increases, making the generation process more challenging. Therefore, our future work will focus on improving the generation quality and solving the problem of rendering long texts.

In this appendix, we first provide the detailed structure and training strategies of DiffUTE in Appendix A to ensure better understanding and reproducibility. Then, we provide more visual results of text editing tasks in Appendix B.

## Appendix A Details of DiffUTE

**Model Architecture.** Our DiffUTE is composed of a VAE, a glyph encoder and a UNet. (i) The VAE uses the same structure as in _stable-diffusion-2-inpainting_4, with a downsampling factor of 8. (ii) The glyph encoder employs the pre-trained TrOCR model Li et al. (2023), specifically the _trocr-large-printed_5 version. The TrOCR model is an encoder-decoder model, consisting of an image Transformer as the encoder and a text Transformer as the decoder. The image encoder was initialized from the weights of BEiT Bao et al. (2021), while the text decoder was initialized from the weights of RoBERTa Liu et al. (2019). The TrOCR model is fine-tuned on the SROIE dataset Huang et al. (2019). Note that we only use the image encoder of TrOCR. Given a character image, the glyph encoder returns a latent feature of size \(577\times 1024\). This output is just the right size to be fed directly into the conditional UNet as a condition. (iii) The UNet uses the same structure as in _stable-diffusion-2-inpainting_.

Footnote 4: [https://huggingface.co/stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)

Footnote 5: [https://huggingface.co/microsoft/trocr-large-printed](https://huggingface.co/microsoft/trocr-large-printed)

**Training Details.** We adopt Stable Diffusion Rombach et al. (2022) as our baseline model and choose their publicly released v2 model for image inpainting as the initialization for the VAE and UNet. For the glyph encoder, we use its pre-trained checkpoints for initialization and freeze its weights during training. To improve the reconstruction ability of the VAE, we use the progressive training strategy. The experimental settings of the VAE and UNet are shown in Table 3. Upon completion of VAE training, we proceed to train the UNet while keeping the weights of the VAE frozen.

## Appendix B Visualization Results

We provide additional images generated by our method DiffUTE for editing text in images in Figure 7. DiffUTE consistently generates correct visual text, and the generated texts naturally follow the same text style, i.e., font and color, as the other surrounding texts. We can see from the experiments that DiffUTE has a strong generative power. (i) In sample N1, DiffUTE can automatically generate slanted text based on the surrounding text. (ii) As shown in sample N2, the input is 234, and DiffUTE can automatically add the decimal point according to the context, which shows that DiffUTE has some document context understanding ability. (iii) In sample CN4, DiffUTE can even generate artistic characters very well.
## Appendix A Details of DiffUTE **Model Architecture.** Our DiffUTE is composed of VAE, glyph encoder and UNet. (i) The VAE uses the same structure as in _stable-diffusion-2-inpainting_4, with a downsampling factor of 8. (ii) The glyph encoder employs the pre-trained TrOCR model Li et al. (2023), specifically the _trocr-large-printed_5 version. The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT Bao et al. (2021), while the text decoder was initialized from the weights of RoBERTa Liu et al. (2019). And the TrOCR model is fine-tuned on the SROIE dataset Huang et al. (2019). Note that we only use image encoder of TrOCR. Given a character image, the glyph encoder will return a latent feature of size \(577\times 1024\). This output is just the right size to be fed directly into the conditioned Unet as a condition. (iii) The UNet uses the same structure as in _stable-diffusion-2-inpainting_. Footnote 4: [https://huggingface.co/stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) Footnote 5: [https://huggingface.co/microsoft/trocr-large-printed](https://huggingface.co/microsoft/trocr-large-printed) **Training Details.** We adopt the Stable Diffusion Rombach et al. (2022) as our baseline model and choose their publicly released v2 model for image inpainting as initialization for VAE and UNet. For the glyph encoder, we use its pre-trained checkpoints for initialization and freeze its weights during training. To improve the reconstruction ability of VAE, we use progressive training strategy. The experimental setting of VAE and UNet is shown in Table 3. Upon completion of VAE training, we proceed to train UNet while keeping the weights of VAE frozen. ## Appendix B Visualization Results We provide additional generated images for editing text in image by our method DiffUTE in Figure 7. DiffUTE consistently generates correct visual text, and the texts naturally follow the same text style, i.e. font, and color, with other surrounding texts. We can see from the experiment that DiffUTE has a strong generative power. (i) In sample N1, DiffUTE can automatically generate slanted text based on the surrounding text. (ii) As shown in sample N2, the input is 234, and DiffUTE can automatically add the decimal point according to the context, which shows that DiffUTE has some document context understanding ability. (iii) In the sample CN4, DiffUTE can generate even artistic characters very well.
2302.05531
Fault-tolerant quantum simulation of materials using Bloch orbitals
The simulation of chemistry is among the most promising applications of quantum computing. However, most prior work exploring algorithms for block-encoding, time-evolving, and sampling in the eigenbasis of electronic structure Hamiltonians has either focused on modeling finite-sized systems, or has required a large number of plane wave basis functions. In this work, we extend methods for quantum simulation with Bloch orbitals constructed from symmetry-adapted atom-centered orbitals so that one can model periodic \textit{ab initio} Hamiltonians using only a modest number of basis functions. We focus on adapting existing algorithms based on combining qubitization with tensor factorizations of the Coulomb operator. Significant modifications of those algorithms are required to obtain an asymptotic speedup leveraging translational (or, more broadly, Abelian) symmetries. We implement block encodings using known tensor factorizations and a new Bloch orbital form of tensor hypercontraction. Finally, we estimate the resources required to deploy our algorithms to classically challenging model materials relevant to the chemistry of Lithium Nickel Oxide battery cathodes within the surface code.
Nicholas C. Rubin, Dominic W. Berry, Fionn D. Malone, Alec F. White, Tanuj Khattar, A. Eugene DePrince III, Sabrina Sicolo, Michael Kühn, Michael Kaicher, Joonho Lee, Ryan Babbush
2023-02-10T22:18:27Z
http://arxiv.org/abs/2302.05531v1
# Fault-tolerant quantum simulation of materials using Bloch orbitals ###### Abstract The simulation of chemistry is among the most promising applications of quantum computing. However, most prior work exploring algorithms for block-encoding, time-evolving, and sampling in the eigenbasis of electronic structure Hamiltonians has either focused on modeling finite-sized systems, or has required a large number of plane wave basis functions. In this work, we extend methods for quantum simulation with Bloch orbitals constructed from symmetry-adapted atom-centered orbitals so that one can model periodic _ab initio_ Hamiltonians using only a modest number of basis functions. We focus on adapting existing algorithms based on combining qubitization with tensor factorizations of the Coulomb operator. Significant modifications of those algorithms are required to obtain an asymptotic speedup leveraging translational (or, more broadly, Abelian) symmetries. We implement block encodings using known tensor factorizations and a new Bloch orbital form of tensor hypercontraction. Finally, we estimate the resources required to deploy our algorithms to classically challenging model materials relevant to the chemistry of Lithium Nickel Oxide battery cathodes within the surface code. ###### Contents * I Introduction * II Electronic structure Hamiltonian of materials in Bloch orbitals * II.1 Basis functions and matrix elements * II.2 The second-quantized Hamiltonian * III Optimization of materials Hamiltonians * III.1 The sparse Hamiltonian representation * III.2 The single-factorization Hamiltonian representation * III.3 The double-factorization Hamiltonian representation * III.4 The tensor hypercontraction Hamiltonian representation * IV Scaling comparison and runtimes for diamond * V Classical and quantum simulations of LNO * V.1 LNO background * V.2 Correlated \(k\)-point calculations * V.3 Single shot density matrix embedding theory * V.4 Quantum resource estimates for LNO * VI Conclusion * A Sparse representation derivations * A.1 The Pauli operator representation of the one-body term * A.2 One-body correction for sparse case * A.3 Complexity for sparse implementation * B Single-factorization derivations * B.1 One-body correction for single factorization * B.2 Complexity for single-factorized representation * C Double-factorization derivations * C.1 One-body correction * C.2 Complexity of the double-factorized representation * D Tensor hypercontraction derivations * D.1 THC symmetries * D.2 Complexity of the tensor hypercontraction representation * E Correlation diagnostics for LNO * F Classical timing benchmarks * G Generating the THC factors ## I Introduction Recently, first quantization quantum algorithms and constant factor resource estimation analysis for molecular systems [1; 2] have been adapted to materials [3]. While the first quantization approach using a plane wave representation is attractive due to the smooth convergence to the continuum limit [4; 5], a local basis representation such as an atom-centered basis set has other advantages. Similar to the molecular simulation setting, local basis functions can be advantageous when describing spatially localized phenomena such as heterogeneous catalysis, or for efficiently describing cusps [6].
The desire for systematically improvable electronic structure methods to treat the many examples of strongly correlated phenomena [7; 8; 9] in the condensed phase has recently driven the application of _ab initio_ wavefunction theories to the periodic setting [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Standard treatments of symmetry in wavefunction theories [23; 24] can be used to exploit the translational symmetry of periodic systems, thus enabling the application of post-Hartree-Fock methods to material systems. Despite these advantages, classical _ab initio_ treatment of such problems is limited due to the large simulation cells needed to converge to the thermodynamic limit. This drawback has further driven the use of embedding theories [25; 26; 27; 28; 29; 30] and downfolding [31]. Naturally, one may ask if fault-tolerant quantum computers can alleviate the computational burden associated with _ab initio_ simulation of solids within the local basis framework. In this paper, we describe how to extend molecular quantum simulation algorithms for second-quantized Hamiltonians represented in local basis sets to periodic systems using the qubitization framework [32; 33]. Though the general structure of the algorithms is largely unchanged, introducing symmetry (_i.e._, symmetry-adapting the block encodings) requires non-trivial modifications to realize an improvement in the asymptotic complexity. The first steps in this direction were taken in Ref. [34] using the "sparse" Hamiltonian representation. We provide an alternative derivation for block encodings using this representation and introduce symmetry-adapted block encodings for three other, more performant tensor factorizations of the Hamiltonian: single factorization (SF), double factorization (DF), and tensor hypercontraction (THC). The result is orders of magnitude improvement in the quantum resources required to simulate materials. For each of the four Hamiltonian representations we describe the origin of the asymptotic speedup (or lack thereof in one case), provide compiled algorithms for constant factor resource estimates, and compare the performance to non-symmetry-adapted block encodings. We note that the derived symmetry-adapted block encodings apply to any Abelian point group symmetry with minor modifications. For SF, sparse, and DF the symmetry-adapted block encodings provide an asymptotic speedup for walk operator construction proportional to the square root of the number of \(k\)-points used to sample the Brillouin zone. For THC, there is no asymptotic improvement due to the linear cost of unary iteration in the block encoding. Going beyond asymptotic analysis and compiling to total Toffolis, we find that for DF and THC using symmetry-adapted block encodings provides no net speedup over their non-symmetry-adapted counterparts due to the increased number of applications of the walk operator for fixed precision phase estimation. DF and THC are sensitive to the numerical compression of the Hamiltonian, and thus we expect the number of walk operator applications can be decreased. Furthermore, there are classical advantages to using the symmetry-adapted block encodings coming from the reduced classical complexity of representing the Hamiltonian as the system size is increased towards the thermodynamic limit.
In parallel with recent studies estimating quantum resources required to simulate high-value molecular targets [32; 35; 36; 37], we estimate the quantum resources required to simulate an open materials science problem related to the cathode structure of Lithium Nickel Oxide (LNO) batteries. The LNO systems are universally observed in the high symmetry R\(\bar{3}\)m structure, which is at odds with the predicted Jahn-Teller activity of low-spin trivalent Ni [38]; more background can be found in Section V.1. This discrepancy, combined with the difficulty of synthesizing pure LNO, the size of the unit cells [39], and potential strong correlation at the high symmetry structure [40], makes the LNO problem an interesting application target for quantum simulation advantage. This realistic problem frames the algorithmic improvements articulated in this paper and the prospects of quantum advantage given modern electronic structure methods. We find that the required resource estimates for simulating a set of benchmark systems and the LNO problem before reaching the thermodynamic limit are already substantial. In fact, the large simulation cells required to converge these calculations to the thermodynamic limit are ultimately a significant hurdle for _ab initio_ simulations. The layout of the rest of the paper is as follows: Section II describes the atom-centered basis sets and the Hamiltonian that we use; Section III describes the qubitization algorithm and the origin of the asymptotic speedup in constructing walk operators using each of the four Hamiltonian representations, with each subsection dedicated to a particular Hamiltonian factorization, describing the qubitization algorithm and how to calculate associated parameters; Section IV compares all methods and extrapolates quantum resources required to simulate a diamond crystal converged towards the thermodynamic limit; and Section V reports the accuracy and correlation analysis of various electronic structure methods for LNO while providing estimates of quantum computing resources and runtimes. We close with prospects for this class of methods. ## II Electronic structure Hamiltonian of materials in Bloch orbitals Though plane-wave basis sets are used in most periodic Density Functional Theory (DFT) calculations, there is a long history of local-basis methods as well. The use of a localized basis set has a number of advantages over plane waves: _1_) 0D (molecular), 1D, 2D, and 3D systems can be treated on an equal computational footing, _2_) calculations on low-density systems with large unit cells can be more efficient [41; 42; 43], _3_) Hartree-Fock exchange can be more efficiently computed in the smaller, local-orbital basis [42; 43; 44; 45], and _4_) the local-orbital representations can lower the computational cost of correlation corrections with a more compact representation of the virtual space. (_1_)-(_3_) have spurred the development of local-orbital DFT and Hartree-Fock methods with Gaussian orbitals [42; 43; 44; 46] and numerical atomic orbitals [47], while (_4_) has been behind recent work to apply correlated electronic structure theory to periodic solids [10; 11; 13; 14; 15; 16; 17]. In the following subsection we describe the symmetry-adapted periodic sum of Gaussian-type orbitals used in this work.
### Basis functions and matrix elements A local basis function, \(\tilde{\chi}_{p}\), can be adapted to the translational symmetry of a lattice to form a periodized function \[\chi_{p,\mathbf{k}}(\mathbf{r})=\sum_{\mathbf{T}}e^{i\mathbf{k}\cdot\mathbf{T}}\tilde{\chi}_{p}(\mathbf{r}-\mathbf{T}), \tag{1}\] where \(\mathbf{T}\) represents a lattice translation vector and \(\mathbf{k}\) is a crystal momentum vector lying in the first Brillouin zone. The lattice momentum \(\mathbf{k}\) labels an irreducible representation of the group of translations defined by the translational symmetry of the material. Functions of this form are easily verified to be Bloch functions in that \[\chi_{p,\mathbf{k}}(\mathbf{r})=e^{i\mathbf{k}\cdot\mathbf{r}}u_{p,\mathbf{k}}(\mathbf{r}) \tag{2}\] where \(u_{p,\mathbf{k}}(\mathbf{r})\) has the same periodicity as the lattice. Orbitals are constructed from a linear combination of the underlying Bloch orbitals, \[\phi_{i\mathbf{k}}(\mathbf{r})=N_{k}^{-1/2}\sum_{p}c_{p,i}(\mathbf{k})\chi_{p,\mathbf{k}}(\mathbf{r}), \tag{3}\] where \(N_{k}\) is the total number of \(k\)-points. The expansion coefficients \(c_{p,i}(\mathbf{k})\) are determined from the appropriate periodic self-consistent field procedure, usually Hartree-Fock or Kohn-Sham DFT. The resulting orbitals are normally constrained to be orthogonal by convention and can serve as a basis for representing the second-quantized Hamiltonian. The matrix elements of a one-electron operator, \[T_{p\mathbf{k}_{p},q\mathbf{k}_{q}}=\int d\mathbf{r}\,\phi_{p\mathbf{k}_{p}}^{*}(\mathbf{r})\mathcal{O}_{1}\phi_{q\mathbf{k}_{q}}(\mathbf{r}) \tag{4}\] are non-zero only when \(\mathbf{k}_{p}=\mathbf{k}_{q}\) as long as \(\mathcal{O}_{1}\) has the translational symmetry of the lattice. We can use a similar strategy to derive the structure of the two-electron integrals which are given by \[V_{p\mathbf{k}_{p},q\mathbf{k}_{q},r\mathbf{k}_{r},s\mathbf{k}_{s}}=\int\int dr_{1}\,dr_{2}\,\phi_{p\mathbf{k}_{p}}^{*}(\mathbf{r}_{1})\phi_{q\mathbf{k}_{q}}(\mathbf{r}_{1})\mathcal{O}_{2}\phi_{r\mathbf{k}_{r}}^{*}(\mathbf{r}_{2})\phi_{s\mathbf{k}_{s}}(\mathbf{r}_{2}). \tag{5}\] The translational symmetry of the Bloch orbitals implies the 2-electron operator \(\mathcal{O}_{2}\) matrix elements can only be nonzero when \((\mathbf{k}_{p}+\mathbf{k}_{r}-\mathbf{k}_{q}-\mathbf{k}_{s})=\mathbf{G}\) where \(\mathbf{G}\) is a reciprocal lattice vector. We note that this condition for nonzero matrix elements is a specific instance of a more general group-theoretical statement. More generally, given a group \(\mathfrak{g}\) with its irreducible representations labeled by \(\{\Gamma_{i}\}\), the two-electron integral is nonzero by symmetry whenever \(\Gamma_{p}\otimes\Gamma_{q}\otimes\Gamma_{r}\otimes\Gamma_{s}\) contains the complete symmetric representation [23]. For periodic systems \(\mathfrak{g}\) is the set of translational symmetries. Despite this sparsity, the evaluation of the nonzero matrix elements for all basis functions is often a major computational bottleneck whenever local basis sets are used. Local orbitals provide a more compact representation than plane waves, so fewer basis functions are needed. Unfortunately, there are \(O(N_{k}^{3}N^{4})\) generally nonzero two-electron matrix elements for \(N_{k}\) \(k\)-points and \(N\) basis functions in the primitive cell. For very large calculations locality can be exploited to yield asymptotically linear-scaling DFT methods [48; 49]. Linear scaling Hartree-Fock is also possible for insulators [50].
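The crystal-momentum selection rule above is easy to check numerically. The sketch below is illustrative only (the helper names are ours): it works in fractional coordinates on an unshifted Monkhorst-Pack grid, tests whether \(\mathbf{k}_{p}+\mathbf{k}_{r}-\mathbf{k}_{q}-\mathbf{k}_{s}\) is a reciprocal lattice vector, and confirms that only \(N_{k}^{3}\) of the \(N_{k}^{4}\) momentum quartets are allowed.

```python
import itertools
import numpy as np

def monkhorst_pack(nx, ny, nz):
    """Fractional coordinates of an unshifted nx x ny x nz Monkhorst-Pack grid."""
    return [np.array([i / nx, j / ny, k / nz])
            for i in range(nx) for j in range(ny) for k in range(nz)]

def is_symmetry_allowed(kp, kq, kr, ks, tol=1e-8):
    """True when k_p + k_r - k_q - k_s equals a reciprocal lattice vector G.

    In fractional coordinates a reciprocal lattice vector has integer
    components, so the test reduces to integrality of the difference.
    """
    diff = kp + kr - kq - ks
    return bool(np.all(np.abs(diff - np.rint(diff)) < tol))

kpts = monkhorst_pack(2, 2, 2)
nk = len(kpts)
allowed = sum(is_symmetry_allowed(kp, kq, kr, ks)
              for kp, kq, kr, ks in itertools.product(kpts, repeat=4))
# Only N_k^3 of the N_k^4 momentum quartets survive, which is the origin of the
# O(N_k^3 N^4) count of generally nonzero two-electron matrix elements.
print(allowed, nk**3)  # 512 512
```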
This linear regime is almost never reached in practice, and it is usually advantageous to instead reduce the cost by tensor factorization. Though our discussion has been thus far general with regard to the choice of local basis functions, Gaussian basis functions are by far the most popular choice in molecular calculations, and crystalline Gaussian orbitals are also a popular choice for periodic calculations. This popularity is due to the existence of analytic formulas which allow for fast, numerically exact evaluation of the matrix elements of most common operators. Despite the existence of efficient numerical techniques, the large number of two-electron integrals that must be evaluated in periodic calculations requires a more efficient procedure. Traditionally, this is accomplished with the Gaussian plane wave (GPW) method [41; 51] which only requires storage of \(O(N_{k}^{2}N^{2}n_{\text{pw}})\) integrals where \(n_{\text{pw}}\) is the number of plane waves used to evaluate the integrals. In molecular calculations, the most common decomposition is called the resolution of the identity (RI) or sometimes density fitting (DF) [52; 53; 54; 55]. This procedure requires the storage of \(O(N_{k}^{2}N^{2}n_{\text{aux}})\) integrals where \(n_{\text{aux}}\) is the size of the auxiliary basis set. Both the GPW and the RI method can be considered as density fitting approaches where the former uses a plane-wave fitting basis and the latter uses a Gaussian fitting basis. For this reason, the RI approach is often called "Gaussian density fitting" (GDF) in the context of periodic calculations [56; 57; 58; 59; 60]. The two-electron integral tensor can be further factorized into a product of five two-index tensors as was done in the tensor hypercontraction (THC) method of Martinez and coworkers [61; 62; 63]. Factorizations of this form are most useful for correlated methods where they have the potential to lower the computational scaling. In this work we present a translational symmetry-adapted form of the tensor hypercontraction for the two-electron integral tensors of periodic systems. ### The second-quantized Hamiltonian We can express the second-quantized electronic structure Hamiltonian as \[H=H_{1}+H_{2}\,, \tag{6}\] \[H_{1}=\sum_{\sigma}\sum_{\mathbf{k}}\sum_{pq}h_{p\mathbf{k},q\mathbf{k}}a^{\dagger}_{p\mathbf{k}\sigma}a_{q\mathbf{k}\sigma}\,, \tag{7}\] \[h_{p\mathbf{k},q\mathbf{k}}=T_{p\mathbf{k},q\mathbf{k}}-\frac{1}{2}\sum_{r,\mathbf{Q}}V_{p\mathbf{k},r\mathbf{Q},r\mathbf{Q},q\mathbf{k}}\,, \tag{8}\] \[H_{2}=\frac{1}{2}\sum_{\sigma,\tau}\sum_{\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}}\sum_{pqrs}V_{p\mathbf{k},q(\mathbf{k}\ominus\mathbf{Q}),r(\mathbf{k}^{\prime}\ominus\mathbf{Q}),s\mathbf{k}^{\prime}}a^{\dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\ominus\mathbf{Q})\sigma}a^{\dagger}_{r(\mathbf{k}^{\prime}\ominus\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}\,. \tag{9}\] We first introduce summation limits for each symbol as we will commonly use shorthand summation formulas to indicate multiple sums. For each variable \(\{p,q,r,s\}\) summation is performed over the range \([0,N/2-1]\) indexing the spatial orbital or band, \(\{{\bf Q},{\bf k},{\bf k}^{\prime}\}\) summation is performed over the Brillouin zone (BZ) at a set number of \(k\)-points of which there are \(N_{k}\), and \(\{\sigma,\tau\}\) are electron spin variables summed over \(\{\uparrow,\downarrow\}\). Non-modular differences of \({\bf k}\), \({\bf Q}\), and \({\bf k}^{\prime}\) span twice the Brillouin zone.
Because \(V\) needs to be indexed by values in the Brillouin zone, we use modular subtraction indicated by \(\ominus\). That is, if the number of points in each dimension is \(N_{x},N_{y},N_{z}\), we perform subtraction modulo \(N_{x},N_{y},N_{z}\) in each direction, respectively. The Hamiltonian is generally complex Hermitian with four-fold symmetry of the two-electron integrals.1 Footnote 1: We note that the following generic complex Coulomb integral symmetries are present \[V_{p{\bf k}_{p},q{\bf k}_{q},p{\bf k}_{r},s{\bf k}_{s}}=V_{s{\bf k}_{r},s{\bf k }_{s},p{\bf k}_{p},q{\bf k}_{q}}=V^{*}_{q{\bf k}_{q},p{\bf k}_{p},s{\bf k}_{s}, r{\bf k}_{r}}=V^{*}_{s{\bf k}_{s},r{\bf k}_{r},q{\bf k}_{q},p{\bf k}_{p}} \tag{10}\] from integration index relabeling and complex conjugation. In the following sections we demonstrate how the sparse structure of the two-electron integral tensor affects the scaling of block encoding the Hamiltonian for implementation of qubitized quantum walk oracles. The cost of qubitization is greatly affected by the representational freedom of the the underlying Hamiltonian expressed as a linear combination of unitaries. We demonstrate how to construct the sparse, single-factorization (SF), double-factorization (DF), and tensor-hypercontraction (THC) integral decompositions of Bloch orbital Hamiltonians and cost out simulations for a variety of materials. For all algorithms, we will make a comparison to the case of a \(\Gamma\)-point calculation on a supercell composed of \(N_{k}\) primitive cells in the geometry described by the \(k\)-point sampling. This allows us to directly observe the proposed speedup due to symmetry-adapting. To demonstrate the scaling of symmetry-adapted block encoding, we estimate quantum simulation resource requirements for the series of systems listed in Table 1. Range-separated density fitting [60] is used to construct integrals with Dunning type correlation-consistent basis sets [64] and the Goedecker-Teter-Hutter (GTH) family of pseudopotentials for Hartree-Fock [65]. For each Hamiltonian, cutoffs for the factorization are selected so that the Moller-Plesset second order perturbation theory (MP2) error in the total energy is below one milliHartree per cell or formula unit depending on the system. While prior works used coupled-cluster theory, MP2 is used here for computational efficiency. ## III Optimization of materials Hamiltonians Similar to fault tolerant resource estimates for molecular systems represented in second quantization [32; 36; 70; 71], we compare the number of logical qubits and number of Toffoli gates required to implement phase estimation on unitaries that use block encoding [72] and qubitization [33] to encode the Hamiltonian spectrum in a Szegedy walk operator [73] for various linear combination of unitaries (LCU) [74] representations of the Hamiltonian. All LCUs represent the Hamiltonian as \[H=\sum_{\ell=1}^{L}\omega_{\ell}U_{\ell} \tag{11}\] where \(\omega_{\ell}\in\mathbb{R}\), \(\omega_{\ell}\geq 0\), and \(U_{\ell}\) is a unitary operator. 
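As a toy illustration of the LCU form in Eq. (11), the sketch below (our own example, not a Hamiltonian from this work) decomposes a random two-qubit Hermitian matrix into Pauli strings, absorbs signs into the unitaries so that every weight \(\omega_{\ell}\) is nonnegative, and evaluates the LCU one-norm \(\lambda=\sum_{\ell}\omega_{\ell}\).

```python
import itertools
import numpy as np

# Single-qubit Paulis; their tensor products serve as the unitaries U_l.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_lcu(h):
    """Write a Hermitian matrix on n qubits as sum_l w_l U_l with w_l >= 0.

    Pauli coefficients of a Hermitian matrix are real, so any negative sign is
    absorbed into the (still unitary) operator, leaving w_l >= 0.
    """
    n = int(np.log2(h.shape[0]))
    terms = []
    for labels in itertools.product("IXYZ", repeat=n):
        u = PAULIS[labels[0]]
        for lab in labels[1:]:
            u = np.kron(u, PAULIS[lab])
        c = (np.trace(u @ h) / h.shape[0]).real
        if abs(c) > 1e-12:
            terms.append((abs(c), np.sign(c) * u))
    return terms

rng = np.random.default_rng(7)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h = (a + a.conj().T) / 2                  # a random two-qubit Hermitian "Hamiltonian"
terms = pauli_lcu(h)
lam = sum(w for w, _ in terms)            # the LCU one-norm lambda
h_rebuilt = sum(w * u for w, u in terms)
print(f"lambda = {lam:.3f}, reconstruction error = {np.max(np.abs(h - h_rebuilt)):.2e}")
```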
One can then construct the operators \[{\rm PREPARE}|0\rangle^{\otimes\log(L)} \mapsto\sum_{\ell=1}^{L}\sqrt{\frac{\omega_{l}}{\lambda}}|\ell \rangle\equiv|\mathcal{L}\rangle \tag{12}\] \[{\rm SELECT}|\ell\rangle|\psi\rangle \mapsto|\ell\rangle U_{\ell}|\psi\rangle \tag{13}\] \begin{table} \begin{tabular}{l l c c c c} \hline \hline System & Structure & Atoms in Cell & Lattice Parameters & spin-orbitals cc-pVDZ & spin-orbitals cc-pVTZ \\ \hline C & diamond & 2 & 3.567 [66] & 52 & 116 \\ Si & diamond & 2 & 5.43 [66] & 52 & 116 \\ BN & zinc blende & 2 & 3.616 [66] & 52 & 116 \\ LiCl & rocksalt & 2 & 5.106 [67] & 52 & 98 \\ AlN & wurzite & 4 & (a) 3.11 (c) 4.981 [66] & 104 & 220 \\ Li & bcc & 2 & 3.51 [68] & 52 & 80 \\ Al & fcc & 2 & 4.0479 [69] & 52 & 104 \\ \hline \hline \end{tabular} \end{table} Table 1: Crystal structures lattice parameters used for the systems studied in this work. The lattice parameters were chosen to be at or near their experimental equilibrium values. \[\lambda=\sum_{\ell=1}^{L}\omega_{\ell} \tag{14}\] where \(|\psi\rangle\) is the system register, and \(|\ell\rangle\) is an ancilla register used to index each term in the LCU. The walk operator constructed from select and a reflection operator built from prepare, \(R=2|\mathcal{L}\rangle\langle\mathcal{L}|\otimes\mathbb{1}-\mathbb{1}\), has eigenvalues proportional \(e^{\pm i\arccos E_{n}/\lambda}\) where \(E_{n}\) is an eigenvalue of the Hamiltonian in Eq. (11). It was shown in References [70] and [32] when ensuring that select is self-inverse, only the reflection operator \(R\) needs to be controlled on the ancilla for phase estimation and not select. Therefore, the Toffoli cost of phase estimating the walk operator scales as \[\left\lceil\frac{\pi\lambda}{2\epsilon_{\text{PEA}}}\right\rceil(C_{S}+C_{P}+ C_{P^{\dagger}}+\log(L)) \tag{15}\] where \(C_{S}\) is the cost for implementing the select oracle and \(C_{P}\) is the cost for implementing the prepare oracle, \(C_{P^{\dagger}}\) is the cost for the inverse prepare oracle, and \(\epsilon_{\text{PEA}}\) is the target precision for phase estimation. Thus the main costs for sampling from the eigenspectrum of a second quantized operator are the costs to implement select, prepare, and prepare\({}^{\dagger}\). These costs need to be multiplied by a factor proportional to \(\lambda/\epsilon_{\text{PEA}}\) for the number of walk steps needed for phase estimation. Note that when computing intensive quantities, such as the the energy per cell, the \(\lambda\) factor is scaled by \(1/N_{k}\). The particular choice of LCU changes all of these costs. Prior works have investigated the resource requirements for simulating molecules with four different LCUs. While all these methods can be used without modification in supercell calculations at the \(\Gamma\)-point, the construction of molecular select and prepare do not exploit any symmetries and are not applicable away from the \(\Gamma\)-point - _e.g._ at the Baldereschi point [75]. The leading costs in constructing select and prepare for second quantized Hamiltonians is the circuit primitive that functions similar to a read-only-memory (ROM) called QROM. The QROM primitive is a gadget that takes a memory address, potentially in superposition, and outputs data, also potentially in superposition. 
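A direct reading of Eq. (15) as a cost model is sketched below; the oracle costs and \(\lambda\) are placeholder numbers chosen only to exercise the formula, not resource estimates from this work.

```python
import math

def qubitized_pea_toffolis(lam, eps_pea, c_select, c_prepare, c_prepare_dag, num_terms):
    """Toffoli count from Eq. (15): number of walk steps times the per-step cost.

    lam         : LCU one-norm lambda (Hartree)
    eps_pea     : target phase-estimation precision (Hartree)
    c_*         : Toffoli costs of SELECT, PREPARE and PREPARE^dagger
    num_terms   : number of LCU terms L, entering through the log(L) reflection
    """
    walk_steps = math.ceil(math.pi * lam / (2 * eps_pea))
    per_step = c_select + c_prepare + c_prepare_dag + math.ceil(math.log2(num_terms))
    return walk_steps * per_step

# Placeholder inputs only; these are not resource numbers from this paper.
print(qubitized_pea_toffolis(lam=2500.0, eps_pea=1.6e-3, c_select=8_000,
                             c_prepare=12_000, c_prepare_dag=11_000, num_terms=10**7))
```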
There are currently two variations of QROM that have different costs; traditional QROM that has linear Toffoli complexity when outputting \(L\) items with any amount of data associated with each item, and advanced QROM (called QROAM) with reduced non-Clifford complexity [76]. It uses a select-swap circuit construction with Toffoli cost \[\left\lceil\frac{L}{k}\right\rceil+m(k-1) \tag{16}\] where \(k\) is a power of 2 for outputting \(L\) items of data where each item of data is \(m\) bits long. The notation \(k\) here for an integer should not be confused with \(\mathbf{k}\) for the crystal momentum vector. It needs \(m(k-1)\) ancillas, so increases the logical ancilla count in exchange for reduced Toffoli complexity. When \(L>m\) this function is minimized by selecting \(k\approx\sqrt{L/m}\) and thus the Toffoli and ancilla cost generically go as \(\mathcal{O}(\sqrt{Lm})\). It is also possible to adjust \(k\) to reduce the ancilla count while increasing the Toffoli count. Having QROAM output the minimal amount of information to represent the Hamiltonian is at the core of the \(\sqrt{N_{k}}\) improvements we derive in many of the block encodings. We will also demonstrate that for all LCUs the lowest scaling can be linear in the Bloch orbital basis size \(\mathcal{O}(N_{k}N)\) due to the requirement to perform unary iteration at least once over the entire basis. Another primitive that becomes the dominant cost in constructing symmetry-adapted select is the multiplexed-controlled swap between two registers. The controlled swap between two registers uses unary iteration [70] on \(L\) items to swap \(M\) elements between two registers at the cost of \(\mathcal{O}(LM)\) Toffolis. For simulating materials this primitive is commonly encountered when swapping all band indices with a particular irreducible representation label, or \(k\)-point, into a working register at a cost of \(\mathcal{O}(N_{k}N)\). The necessity of coherently moving data thus puts a limit on the total savings one can achieve by leveraging Abelian symmetries. The cost of moving data must be weighed against the benefits, which we describe in each section below. In Table 2 we summarize the space complexity, in terms of logical qubits, and time complexity, in terms of Toffolis of the four LCUs when considering translational symmetry on the primitive cell and without (denoted as SC for supercell). For the sparse LCU, exploiting primitive cell translational symmetry reduces the amount of symmetry unique information in the Hamiltonian by a factor of \(N_{k}\), which translates to a reduction of \(\sqrt{N_{k}}\) savings in Toffoli complexity and ancilla complexity. For sparse prepare, the square root savings originates from the QROAM cost of outputting "alt" and "keep" values for the coherent alias sampling component of the state preparation. For sparse select controlled application of all Pauli terms has linear cost in the basis size \(\mathcal{O}(N_{k}N)\) and is not the dominant cost. The supercell calculation does not exploit the \(k\)-point symmetry unique non-zero coefficients of the Hamiltonian and thus has worse scaling. The single factorization LCU leverages the fact that the Coulomb integral tensor is positive semidefinite and can be written in a quadratic form. For molecular systems without symmetry-_i.e._\(C1\) symmetry, the factorization results in a three-tensor where there are two orbital indices and one auxiliary index that scales as the number of orbitals in the system [77]. 
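The select-swap (QROAM) trade-off quoted above, a Toffoli cost of \(\lceil L/k\rceil+m(k-1)\) with \(k\) a power of two, can be minimized with a few lines. The helper below is a sketch under that cost model only; it also reports the \(m(k-1)\) ancilla overhead and compares against the \(\mathcal{O}(\sqrt{Lm})\) rule of thumb.

```python
import math

def qroam_cost(num_items, bits_per_item):
    """Minimize the select-swap QROAM Toffoli cost ceil(L/k) + m*(k-1) over k = 2^j.

    Returns (toffolis, k, ancillas), where ancillas = m*(k-1) counts the extra
    logical qubits traded for the reduced Toffoli count.
    """
    best = None
    k = 1
    while k <= 2 * num_items:
        toffolis = math.ceil(num_items / k) + bits_per_item * (k - 1)
        if best is None or toffolis < best[0]:
            best = (toffolis, k, bits_per_item * (k - 1))
        k *= 2
    return best

# Outputting 10^6 items of 20 bits each costs roughly 2*sqrt(L*m) Toffolis.
toffolis, k, ancillas = qroam_cost(10**6, 20)
print(toffolis, k, ancillas, 2 * math.isqrt(10**6 * 20))
```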
For simulations where orbitals now have point group symmetry labels, such as \(k\)-points, each three-tensor factor can now be arranged into a five-tensor; two symmetry labels (irreducible representation labels), two-band labels (orbital labels), and one auxiliary index which still scales with the number of bands due to density fitting of the cell periodic part of the density [78; 60]. Thus the origin of the \(\sqrt{N_{k}}\) improvement for the symmetry-adapted block encoding with a single-factorization LCU lies in the fact that the auxiliary index has \(N_{k}\) lower scaling in comparison to a supercell variation where the Cholesky factorization or density fitting is performed on the entire supercell two-electron integral tensor. The single factorization algorithm is also dominated by the QROM cost of prepare-of which there are two state preparations. The inner state preparation [32] for the \(k\)-point symmetry-adapted algorithm requires outputting \(\mathcal{O}(N_{k}^{2}N^{3})\) to be used in the state preparation leading to \(\mathcal{O}(N_{k}N^{3/2})\) Toffoli and qubit complexity. Contrasting this to the supercell calculation, we see a \(\sqrt{N_{k}}\) savings due to the fact that the inner state preparation requires only \(N_{k}^{2}\) information and not \(N_{k}^{3}\) information. We elaborate on this point further in Section III.2. select is implemented in a similar fashion to sparse, scaling as \(\mathcal{O}(N_{k}N)\), and is not a dominant cost. The double factorization LCU represents the Hamiltonian in a series of non-orthogonal bases and leverages that a linear combination of ladder operators can be constructed by a similarity transform of a single fermionic ladder operator, or Majorana operator, by a unitary generated by a quadratic fermionic Hamiltonian. In the molecular case, the dominant cost for these algorithms is the QROM to output the rotations for the similarity transform and implementing the basis rotations with the programmable gate array circuit primitive [32; 36] for select. When taking advantage of primitive cell symmetry we reduce the amount of data needed to be output by QROM by \(N_{k}\), which results in a \(\sqrt{N_{k}}\) savings in the Toffoli complexity. Because we are using advanced QROM, this output size advantage is also observed in the logical qubit requirements. As mentioned previously, computing the total number of Toffolis requires scaling the walk operator cost by a linear function of \(\lambda\). We find that using a canonical orbtial basis set, the total Toffoli cost is higher than the commensurate supercell total Toffoli cost because \(\lambda\) for the symmetry-adapted case increases. The origin of the increase is related to a reduced variational freedom when selecting non-orthogonal bases and is further discussed in Section III.3. Finally, in the THC LCU there is no asymptotic speedup because the molecular algorithm had the lowest possible scaling for second quantized algorithms. This stems from the fact that even iterating over the basis once with unary iteration to apply an operator indexed by basis element has a Toffoli cost of \(\mathcal{O}(N_{k}N)\). As we will discuss in Section III.4 our symmetry-adapted algorithm offers other benefits such as enabling the classical precomputation of the THC factors by exploiting symmetry and lowering the number of controlled rotations. 
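To see what the entries of Table 2 imply at a finite system size, the sketch below evaluates the leading powers only, with all prefactors, \(\lambda\) factors, and logarithmic terms dropped, and with the second-factorization rank \(\Xi\) taken as \(\sim N_{k}N\); the numbers are purely indicative and are not resource estimates from this work.

```python
import math

# Leading powers of the block-encoding scalings in Table 2 (symmetry-adapted
# versus supercell), with constants, lambda factors and log factors dropped.
SYMMETRY_ADAPTED = {
    "sparse": lambda n, nk: nk**1.5 * n**2,
    "SF":     lambda n, nk: nk * n**1.5,
    "DF":     lambda n, nk: math.sqrt(nk) * n * math.sqrt(nk * n),
    "THC":    lambda n, nk: nk * n,
}
SUPERCELL = {
    "sparse": lambda n, nk: nk**2 * n**2,
    "SF":     lambda n, nk: (nk * n)**1.5,
    "DF":     lambda n, nk: nk * n * math.sqrt(nk * n),
    "THC":    lambda n, nk: nk * n,
}

n, nk = 116, 27   # e.g. a cc-pVTZ primitive cell with a 3x3x3 k-mesh
for rep in SYMMETRY_ADAPTED:
    ratio = SUPERCELL[rep](n, nk) / SYMMETRY_ADAPTED[rep](n, nk)
    print(f"{rep:6s} supercell/symmetry-adapted ~ {ratio:4.2f} (sqrt(N_k) = {math.sqrt(nk):.2f})")
```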
We now describe the Hamiltonian factorization used in each LCU, the calculation of \(\lambda\) associated with each Hamiltonian factorization, and outline the construction of the subitization oracles. Detailed compilations are provided for each LCU in the Appendices. In each section we provide numerical evidence that the symmetry-adapted oracles have the reported scaling by plotting the Toffoli requirements to synthesize select + prepare + prepare\({}^{-1}\) compared against the number of \(k\)-points sampled (\(N_{k}\)). ### The sparse Hamiltonian representation In the "sparse" method the Hamiltonian described in Eq. (8) and Eq. (9) is directly translated to Pauli operators which form the LCU. Under the Jordan-Wigner transformation, we take \[a_{\text{p}\mathbf{k}\sigma}\mapsto\vec{Z}(X_{\text{p}\mathbf{k}\sigma}+iY_{ \text{p}\mathbf{k}\sigma})/2, \tag{17}\] \begin{table} \begin{tabular}{c c c c c} Representation & Qubits & Toffoli Complexity & SC Qubits & SC Toffoli \\ \hline sparse & \(\mathcal{\widetilde{O}}(N_{k}^{3/2}N^{2})\) & \(\mathcal{\widetilde{O}}(N_{k}^{3/2}N^{2}\lambda_{\text{sparse}}/\epsilon)\) & \(\mathcal{\widetilde{O}}(N_{k}^{2}N^{2})\) & \(\mathcal{\widetilde{O}}(N_{k}^{2}N^{2}\lambda_{\text{sparse,SC}}/\epsilon)\) \\ SF & \(\mathcal{\widetilde{O}}(N_{k}N^{3/2})\) & \(\mathcal{\widetilde{O}}(N_{k}N^{3/2}\lambda_{\text{SF}}/\epsilon)\) & \(\mathcal{\widetilde{O}}(N_{k}^{3/2}N^{3/2})\) & \(\mathcal{\widetilde{O}}(N_{k}^{3/2}N^{3/2})\) \\ DF & \(\mathcal{\widetilde{O}}(\sqrt{N_{k}}N\sqrt{\Xi})\) & \(\mathcal{\widetilde{O}}(\sqrt{N_{k}}N\sqrt{\Xi}\lambda_{\text{DF}}/\epsilon)\) & \(\mathcal{\widetilde{O}}(N_{k}N\sqrt{\Xi})\) & \(\mathcal{\widetilde{O}}(N_{k}N\sqrt{\Xi})\)DF,SC \\ THC & \(\mathcal{\widetilde{O}}(N_{k}N)\) & \(\mathcal{\widetilde{O}}(N_{k}N\chi_{\text{THC}}/\epsilon)\) & \(\mathcal{\widetilde{O}}(N_{k}N)\) & \(\mathcal{\widetilde{O}}(N_{k}N\chi_{\text{THC,SC}}/\epsilon)\) \\ \end{tabular} \end{table} Table 2: Generically, qubitized quantum walks scale as \(\mathcal{O}(\sqrt{\Gamma})\) in space and \(\mathcal{O}(\lambda\sqrt{\Gamma}/\epsilon)\) in time where \(\Gamma\) is the amount of information required to specify the Hamiltonian within a particular representation. For double factorization, \(\Xi\) is the sum of the average rank of the second factorization which is expected to scale as \(\mathcal{O}(N_{k}N)\), which is the number of orbitals in the primitive cell or bands. \(\Xi\) is the average rank of the second factorization in the supercell calculation and is also expected to scale as \(\mathcal{O}(N_{k}N)\). The tilde on \(\mathcal{O}\) is used to account for logarithmic factors, and can include variables not explicitly given in the scaling. \(\lambda\) for each LCU is different and is denoted as a subscript indicating the LCU type and if it \(\lambda\) for the supercell version. \[a^{\dagger}_{p\mathbf{k}\sigma}\mapsto\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p \mathbf{k}\sigma})/2, \tag{18}\] where the notation \(\vec{Z}\) is being used to indicate that there is a string of \(Z\) operators on qubits up to (not including) that on which \(X_{p\mathbf{k}\sigma}\) or \(Y_{p\mathbf{k}\sigma}\) acts upon. This requires a choice of ordering for the qubits indexed by \(p\), \(\mathbf{k}\), and \(\sigma\). We need only apply the string of \(Z\) operators for the same value of \(\sigma\), because we always have matching annihilation and creation operators for the same spin \(\sigma\) (so any \(Z\) gates on the other spin would cancel). 
We also adopt a convention that the ordering of qubits for the Jordan-Wigner transformation takes \(\mathbf{k}\) as the more significant bits, with qubits for all \(p\) with a given \(\mathbf{k}\) grouped together. For most of the discussion we will not need to explicitly consider this ordering. With the Jordan-Wigner transform the one-body component of the Hamiltonian takes on the form \[H_{1} =\frac{i}{4}\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{ \mathbf{k}}\sum_{p,q=1}\mathrm{Re}(h_{p\mathbf{k},q\mathbf{k}})\left\{\vec{Z} X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma} \vec{Z}X_{q\mathbf{k}\sigma}\right\}\] \[\quad+\frac{i}{4}\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{ \mathbf{k}}\sum_{p,q=1}\mathrm{Im}(h_{p\mathbf{k},q\mathbf{k}})\left\{\vec{Z} X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma} \vec{Z}Y_{q\mathbf{k}\sigma}\right\}+\sum_{\mathbf{k}}\sum_{p=1}h_{p\mathbf{ k},p\mathbf{k}}\openone. \tag{19}\] We provide the full derivation for this expression in Appendix A.1. To derive the two-body operator LCU we use only complex conjugation symmetry in contrast to the molecular derivation that used eight-fold symmetry. The two-body Hamiltonian can be written as \[H_{2} =\frac{1}{4}\sum_{\sigma,\tau\in\{\uparrow,\downarrow\}}\sum_{ \mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}}^{N_{k}}\sum_{p,q,\tau,s=1}^{N/2} \left[V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in \mathbf{Q}),s\mathbf{k}^{\prime}}a^{\dagger}_{q\mathbf{k}\sigma}a_{q(\mathbf{k} \in\mathbf{Q})\sigma}a^{\dagger}_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau}a _{s\mathbf{k}^{\prime}\tau}\right.\] \[\left.+V^{*}_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k} ^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}}a^{\dagger}_{q(\mathbf{k}\in \mathbf{Q})\sigma}a_{p\mathbf{k}\sigma}a^{\dagger}_{s\mathbf{k}^{\prime}\tau} a_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau}\right], \tag{20}\] where \(\ominus\) indicates modular subtraction as defined above. In the case where \(\mathbf{Q}\neq 0\) or \(p\neq q\) and \(r\neq s\), we can move the creation and annihilation operators using the fermionic anticommutation relations to give the term on the second line as \[V^{*}_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in \mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a^{\dagger}_{q(\mathbf{k} \in\mathbf{Q})\sigma}a_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau}a^{\dagger}_{s \mathbf{k}^{\prime}\tau}\,. \tag{21}\] The Jordan-Wigner representation then gives the expression in square brackets in Eq. (20) as \[\frac{1}{16}\left\{V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r( \mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}}[\vec{Z}(X_{p\mathbf{k} \sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q(\mathbf{k}\in\mathbf{Q})\sigma }+iY_{q(\mathbf{k}\in\mathbf{Q})\sigma})][\vec{Z}(X_{r(\mathbf{k}^{\prime}\in \mathbf{Q})\tau}-iY_{(\mathbf{k}^{\prime}\in\mathbf{Q})\tau})][\vec{Z}(X_{s \mathbf{k}^{\prime}\tau}+iY_{s\mathbf{k}^{\prime}\tau})]\right.\] \[\left.+V^{*}_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{ k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}}[\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p \mathbf{k}\sigma})][\vec{Z}(X_{q(\mathbf{k}\in\mathbf{Q})\sigma}-iY_{q( \mathbf{k}\in\mathbf{Q})\sigma})][\vec{Z}(X_{r(\mathbf{k}^{\prime}\in\mathbf{ Q})\tau}+iY_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau})][\vec{Z}(X_{s\mathbf{k}^{ \prime}\tau}-iY_{s\mathbf{k}^{\prime}\tau})]\right\}. \tag{22}\] Then we can separate Eq. 
(22) into real and imaginary components as \[\frac{1}{8}\left\{\mathrm{Re}(V_{p\mathbf{k},q(\mathbf{k}\in \mathbf{Q}),r(\mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}})[\vec{Z} X_{p\mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}+\vec{Z}Y_{p,\mathbf{k},\sigma}\vec{Z}Y_{q,\mathbf{k}\in\mathbf{Q},\sigma})][\vec{Z}X_{r( \mathbf{k}^{\prime}\in\mathbf{Q})\tau}\vec{Z}X_{s\mathbf{k}^{\prime}\tau}+\vec {Z}Y_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau}\vec{Z}Y_{s\mathbf{k}^{\prime}\tau}]\right.\] \[\left.-\mathrm{Re}(V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r( \mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}})[\vec{Z}Y_{p, \mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}-\vec{Z}X_{p \mathbf{k}\sigma}\vec{Z}Y_{q(\mathbf{k}\in\mathbf{Q})\sigma})[\vec{Z}Y_{r( \mathbf{k}^{\prime}\in\mathbf{Q})\tau}\vec{Z}X_{s\mathbf{k}^{\prime}\tau}- \vec{Z}X_{r(\mathbf{k}^{\prime}\in\mathbf{Q})\tau}\vec{Z}Y_{s\mathbf{k}^{ \prime}\tau}]\right.\] \[\left.+\mathrm{Im}(V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r( \mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}})[\vec{Z}Y_{p\mathbf{k} \sigma}\vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}-\vec{Z}X_{p\mathbf{k}\sigma} \vec{Z}Y_{q(\mathbf{k}\in\mathbf{Q})\sigma})][\vec{Z}X_{r(\mathbf{k}^{\prime} \in\mathbf{Q})\tau}\vec{Z}X_{s\mathbf{k}^{\prime}\tau}+\vec{Z}Y_{r(\mathbf{k}^{ \prime}\in\mathbf{Q})\tau}\vec{Z}Y_{s\mathbf{k}^{\prime}\tau}]\right.\] \[\left.+\mathrm{Im}(V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r( \mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}})[\vec{Z}X_{p\mathbf{ k}\sigma}\vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}+\vec{Z}Y_{p\mathbf{k} \sigma}\vec{Z}Y_{q(\mathbf{k}\in\mathbf{Q})\sigma})][\vec{Z}Y_{r(\mathbf{k}^{ \prime}\in\mathbf{Q})\tau}\vec{Z}X_{s\mathbf{k}^{\prime}\tau}-\vec{Z}X_{r( \mathbf{k}^{\prime}\in\mathbf{Q})\tau}\vec{Z}Y_{s\mathbf{k}^{\prime}\tau}]\right\}. \tag{23}\] In accounting for cases where \(\mathbf{Q}=0\) with \(p=q\) or \(r=s\), the same expression is obtained, but there are also one-body terms obtained. These result in a total one-body operator \[\tilde{H}_{1}=H_{1}+\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{\mathbf{k}} \sum_{p,q=1}^{N_{k}}\left(\sum_{r=1}^{N/2}\sum_{\mathbf{k}^{\prime}}V_{p \mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)a^{ \dagger}_{p\mathbf{k}\sigma}a_{q\mathbf{k}\sigma}\,. \tag{24}\] A full derivation of this expression can be found in Appendix A.2. Using the representation of the one-body and two-body operators as Pauli operators, we have a linear combination of unitaries form. The \(\lambda\) associated with this LCU is \[\lambda=\lambda_{\tilde{H}_{1}}+\lambda_{\tilde{H}_{2}} \tag{25}\] \[\lambda_{\tilde{H}_{1}} =\sum_{\mathbf{k}}\sum_{pq}\left\{\left|\mathrm{Re}[h_{p\mathbf{k},q \mathbf{k}}]+\mathrm{Re}\left[\sum_{\mathbf{k}^{\prime},r}V_{p\mathbf{k},q \mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right]\right|+\left| \mathrm{Im}[h_{p\mathbf{k},q\mathbf{k}}]+\mathrm{Im}\left[\sum_{\mathbf{k}^{ \prime},r}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime }}\right]\right|\right\} \tag{26}\] \[\lambda_{H_{2}} =\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q}}\sum_{pqrs} \left\{\left|\mathrm{Re}(V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{ k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}})\right|+\left|\mathrm{Im}(V_{p \mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in\mathbf{Q}),s \mathbf{k}^{\prime}})\right|\right\}. 
\tag{27}\] In determining \(\lambda_{\tilde{H}_{1}}\) there is a factor of 2 due to the summation over spin \(\sigma\) and then a factor of 2 accounting for the fact that each expression in braces in Eq. (19) is the sum of two different Pauli strings. As a result these factors have cancelled the original 1/4 prefactor. In the expression for \(\lambda_{\tilde{H}_{1}}\) we have also summed over the native one-body terms and the contributions from the two-body terms. For \(\lambda_{H_{2}}\) we had a factor of 1/8 in Eq. (23), which is multiplied by the factor of 1/4 in Eq. (20). The two sums over spin \(\sigma\) and \(\tau\) give a factor of 4. Then for each of the real and imaginary parts in Eq. (23) there were sums over 8 Pauli strings, giving a factor of 8. As a result these factors have also cancelled in the expression for \(\lambda_{H_{2}}\). Note that there is a factor of 2 between this expression and that in [32], even when we just consider \(V\) that is real. The reason is that in Ref. [32] there was eight-fold symmetry, where here we only have four-fold symmetry. That is, here we have symmetry when simultaneously swapping the pairs \(p,q\) and \(r,s\), whereas in Ref. [32] there are two symmetries from swapping \(p,q\) or \(r,s\) on their own. That meant it was possible to express the Hamiltonian as in Eq. (101) of that work, then in Eq. (101) of that work the Jordan-Wigner mapping was used in the form \[a_{p\sigma}^{\dagger}a_{q\sigma}+a_{q\sigma}^{\dagger}a_{p\sigma}\mapsto\frac {X_{p\sigma}\vec{Z}X_{q\sigma}+Y_{p\sigma}\vec{Z}Y_{q\sigma}}{2}. \tag{28}\] In this mapping there has been a cancellation of half the Pauli strings, which results in \(\lambda\) being reduced by a factor of 2. Here we only have four-fold symmetry, so the value of \(\lambda\) for the two-body term is a factor of 2 larger than that in [32]. In order to implement the Hamiltonian as a linear combination of unitaries, the first step is to perform a state preparation on \(\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime},p,q,r,s\). This state preparation corresponds to the sum, then we will perform controlled operations for each of the operators in Eq. (23). The state preparation is applied using coherent alias sampling as described in [71]. Because there are multiple variables that the state needs to be prepared over, it is convenient to use the QROM to output "ind" values as well as "alt" and "keep" values. Both "ind" and "alt" give values of all variables \(\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime},p,q,r,s\). Then an inequality test is performed between keep and an equal superposition state, and the result is used to control a swap between ind and alt. There are both real and imaginary values of \(V_{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}}\), so we also include a qubit to distinguish between these values in the state preparation. We also do not prepare all values of \(\{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in\mathbf{Q}),s \mathbf{k}^{\prime}\}\). There is symmetry in swapping \(p\mathbf{k},q(\mathbf{k}\in\mathbf{Q})\) with \(r(\mathbf{k}^{\prime}\in\mathbf{Q}),s\mathbf{k}^{\prime}\), or simultaneously \(p\mathbf{k}\) with \(q(\mathbf{k}\in\mathbf{Q})\) and \(r(\mathbf{k}^{\prime}\in\mathbf{Q})\) with \(s\mathbf{k}^{\prime}\). Only those values of \(\{p\mathbf{k},q(\mathbf{k}\in\mathbf{Q}),r(\mathbf{k}^{\prime}\in\mathbf{Q}),s \mathbf{k}^{\prime}\}\) that give unique values of \(V\) will be prepared. 
Then the full range can be obtained by using qubits to control these swaps. There is also a complex conjugate needed in the symmetry, which can be applied with a Clifford gate. The dominant complexity in the preparation comes from the QROM. The number of items of data is \(\mathcal{O}(N_{k}^{3}N^{4})\), and by using the advanced form of QROM the complexity can be made approximately the square root of the total amount of data (number of items of data times the size of each). The size of each item of data is logarithmic in \(N_{k}\) and \(N\) as well as the allowable error. Therefore, the scaling of the complexity can be given ignoring these logarithmic parts as \(\widetilde{\mathcal{O}}(N_{k}^{3/2}N^{2})\). To describe the controlled operations needed in order to implement the operation as in Eq. (23), we need to account for the fact that there are two lines for each of the real and imaginary components of \(V\). In addition, for each line in Eq. (23) there is a product of two factors, each of which is a sum of two terms. To describe the linear combination of unitaries we therefore introduce three more qubits. * The first is used to distinguish between the two lines for each of the real and imaginary parts in Eq. (23). * The second distinguishes between the two terms in the first set of square brackets. * The third distinguishes between the two terms in the second set of square brackets. When implementing the controlled operations, we perform four operations of the form of \(\vec{Z}X\) or \(\vec{Z}Y\), with \(X\) or \(Y\) being applied on target qubits indexed by \(p\mathbf{k}\), \(q(\mathbf{k}\in\mathbf{Q})\) and so forth. These Pauli strings are applied using the approach of [32], but in this case there is the additional complication that we need to select between \(X\) or \(Y\). This selection can be performed simply by performing the controlled Pauli string twice, once for \(X\) and once for \(Y\). The complexity is proportional to \(N_{k}N\), which is trivial compared to the complexity of the state preparation. The choice of whether \(X\) or \(Y\) is performed depends on the value of the three qubits selecting between the terms, as well as the qubit selecting between the real and imaginary parts. The processing of these qubits to determine the appropriate choice of \(X\) or \(Y\) can be performed with a trivial number of gates. The last part to consider is how the implementation of the one-body part of the Hamiltonian is integrated with the implementation of the two-body part. In the state preparation, amplitudes corresponding to the real and imaginary parts of \(h_{p\mathbf{k},q\mathbf{k}}\) will be produced, as well as a qubit selecting between the one- and two-body parts. That qubit will be used to also select between the choice of \(X\) and \(Y\). For the one-body part there is a product of only two of the Pauli strings, so the other two will not be applied at all for the one-body part. See Appendix A.3 for a more detailed description of the implementation. In Figure 1 we plot the Toffoli complexity to implement select + prepare + prepare\({}^{-1}\) for simulating the aforementioned sample systems using a symmetry-adapted select and prepare at different Monkhorst-Pack grids and different number of bands (cc-pVDZ and cc-pVTZ). We compare the symmetry-adapted calculations to supercell calculations using the same select and prepare. The supercell calculations do not explicitly take into account the symmetry of the primitive cell in the full simulation cell. 
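The scaling exponents discussed for Figure 1 can be read off of raw cost data with a log-log fit; the sketch below uses synthetic data generated to follow the expected \(N_{k}^{1.5}\) behavior (it is not the data behind Figure 1).

```python
import numpy as np

def fit_power_law(nk_values, costs):
    """Fit cost ~ a * N_k^b by least squares in log-log space; returns (a, b)."""
    x = np.log(np.asarray(nk_values, dtype=float))
    y = np.log(np.asarray(costs, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(intercept), slope

# Synthetic block-encoding costs following N_k^1.5 with 5% multiplicative noise.
rng = np.random.default_rng(0)
nk = np.array([1, 8, 27, 64])
costs = 3.0e4 * nk**1.5 * rng.normal(1.0, 0.05, size=nk.size)
a, b = fit_power_law(nk, costs)
print(f"fitted exponent b = {b:.2f} (expected ~1.5)")
```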
To demonstrate that the symmetry-adapted Hartree-Fock orbitals do not appreciably change the overall scaling with respect to a supercell calculation, we plot the total \(\lambda\) for the supercell calculation (which reruns Hartree-Fock on the supercell) and the symmetry-adapted version. Though Figure 1 indicates some computational advantage for the symmetry-adapted case, the expected \(\sqrt{N_{k}}\) improvement over the supercell case is not easily observed. The cost of the sparse method largely depends on the number of nonzero elements of \(V\), which is generically expected to go as \(\mathcal{O}(N_{k}^{3}N^{4})\) for the symmetry-adapted case and \(\mathcal{O}(N_{k}^{4}N^{4})\) for the supercell case. The scaling ultimately depends on the number of nonzero elements in each block of integrals (indexed by three momentum indices), which we expect to be independent of supercell size \(N_{k}\). For Diamond we plot this dependence in Figure 2 and demonstrate that convergence is slow and there is a strong \(N_{k}\) dependence in the number of nonzero elements in each two-electron integral block. This dependence makes observing the improvement in Toffoli cost for symmetry-adapted oracles difficult in the low-\(N_{k}\) regime. Figure 1: (a) Sparse Toffoli step complexity versus the number of \(k\)-points for systems in Table 1 using the cc-pVDZ and cc-pVTZ basis sets and \(\Gamma\)-centered Monkhorst-Pack grids of size [1, 1, 1] to [3, 3, 3]. Each point is a single system described at a particular basis set and \(k\)-mesh where the threshold for zeroing each nonzero two-electron integral coefficient is determined by MP2 as described earlier. The scaling for implementing the block encoding is shown in the legend. To isolate the \(N_{k}\) scaling behavior we divide the Toffoli step complexity by the square of the number of basis functions (\(N^{2}\)). For supercell we expect a scaling going as \(\mathcal{O}(N_{k}^{2})\) and for symmetry-adapted block encodings we expect a scaling going as \(\mathcal{O}(N_{k}^{1.5})\). The ideal symmetry-adapted scaling is not reached due to finite size effects which are further discussed in Figure 2. (b) Total \(\lambda\) for the symmetry-adapted version (denoted UC) and the supercell calculation without explicit primitive cell symmetry (denoted SC), demonstrating no deterioration of \(\lambda\) by symmetry-adapting. Similar to (a), all points are a particular system from the benchmark set in a fixed basis and \(k\)-mesh. ### The single-factorization Hamiltonian representation For the "single-factorization" method, the Cholesky decomposition of the 2-electron integral tensor can be applied iteratively or the factorized forms can be directly recovered from a density-fitted representation of the atomic orbital integral. The quadratic representation of the two-electron integral tensor is \[V_{p\mathbf{k}_{p},q\mathbf{k}_{q},r\mathbf{k}_{r},s\mathbf{k}_{s}}=\sum_{n}L_{p\mathbf{k}_{p}q\mathbf{k}_{q},n}L^{*}_{s\mathbf{k}_{s}r\mathbf{k}_{r},n} \tag{29}\] where \(\mathbf{k}_{p}+\mathbf{k}_{r}=\mathbf{k}_{q}+\mathbf{k}_{s}\) modulo a reciprocal lattice vector \(\mathbf{G}\), or \(\mathbf{k}_{p}-\mathbf{k}_{q}-(\mathbf{k}_{s}-\mathbf{k}_{r})=\mathbf{G}\). We can identify \(\mathbf{k}_{p}-\mathbf{k}_{q}=\mathbf{Q}+\mathbf{G}=\mathbf{k}_{s}-\mathbf{k}_{r}\).
Thus the two-body interaction operator can be written as \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{Q}}^{N_{k}}\sum_{n}^{M}\left( \sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/ 2}L_{p\mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}a^{\dagger}_{p\mathbf{k}\sigma}a _{q(\mathbf{k}\in\mathbf{Q})\sigma}\right)\left(\sum_{\tau\in\{\uparrow, \downarrow\}}\sum_{\mathbf{k}^{\prime}}^{N_{k}}\sum_{rs}^{N/2}L^{*}_{s \mathbf{k}^{\prime}r(\mathbf{k}^{\prime}\ominus\mathbf{Q}),n}a^{\dagger}_{r( \mathbf{k}^{\prime}\ominus\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}\right). \tag{30}\] Due to the reduced symmetry of the complex valued two-electron integral tensor we take additional steps to form Hermitian operators which can be expressed as Pauli operators under the Jordan-Wigner transform. We express each one-body operator in the product of particle-conserving one-body operators forming the two-electron operator as \[\hat{\rho}_{n}(\mathbf{Q})=\left(\sum_{\sigma\in\{\uparrow,\downarrow\}} \sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}L_{p\mathbf{k}q(\mathbf{k}\ominus \mathbf{Q}),n}a^{\dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\ominus\mathbf{Q}) \sigma}\right),\qquad\hat{\rho}_{n}^{\dagger}(\mathbf{Q})=\left(\sum_{\sigma \in\{\uparrow,\downarrow\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}L^{*}_{p \mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}a^{\dagger}_{q(\mathbf{k}\ominus \mathbf{Q})\sigma}a_{p\mathbf{k}\sigma}\right). \tag{31}\] We now take a linear combination of \(\hat{\rho}_{n}(\mathbf{Q})\) to form Hermitian operators and represent our two-electron integral operator as a sum of squares of Hermitian operators that are amenable to the approach for the quhitization of one-body sparse operators via a linear combination of unitaries. These operators are denoted \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\) and are defined as \[\hat{A}_{n}(\mathbf{Q}) =\frac{1}{2}(\hat{\rho}_{n}(\mathbf{Q})+\hat{\rho}_{n}^{\dagger}( \mathbf{Q})), \tag{32}\] \[\hat{B}_{n}(\mathbf{Q}) =\frac{i}{2}(\hat{\rho}_{n}(\mathbf{Q})-\hat{\rho}_{n}^{\dagger}( \mathbf{Q})), \tag{33}\] to give \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{Q}}^{N_{k}}\sum_{n}^{M}\left( \hat{A}_{n}^{2}(\mathbf{Q})+\hat{B}_{n}^{2}(\mathbf{Q})\right). \tag{34}\] Figure 2: \(N_{k}\) dependence on the number of non-zero elements in each two-electron integral block indexed by irrep. labels for Diamond in a single-zeta-valence basis with a \(k\)-mesh shifted to (1/8, 1/8, 1/8) of the simulation cell. In the thermodynamic limit this value should be independent of \(N_{k}\) and thus both supercell (SC) and symmetry-adapted (UC) should have no correlation with \(N_{k}\)–_i.e._ a slope of zero. The \(N_{k}\) dependence for small \(N_{k}\) makes it difficult to observe the asymptotic improvements from symmetry-adapting the sparse quhitization oracles. We have taken advantage of the translational symmetry by performing the sum over \(\mathbf{Q}\) outside the squares of \(\hat{A}\) and \(\hat{B}\), which reduces the amount of information needed in the representation. 
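The splitting of \(\hat{\rho}_{n}(\mathbf{Q})\) into the Hermitian combinations \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\) rests on operator identities that hold for any operator; the sketch below checks them numerically with a random matrix standing in for \(\hat{\rho}_{n}(\mathbf{Q})\) (a toy check of the algebra behind Eqs. (32)-(34), not code from this work).

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6
rho = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))  # stand-in for rho_n(Q)

a_op = 0.5 * (rho + rho.conj().T)    # Eq. (32)
b_op = 0.5j * (rho - rho.conj().T)   # Eq. (33)

# Both combinations are Hermitian and recover rho_n(Q) and its adjoint.
assert np.allclose(a_op, a_op.conj().T) and np.allclose(b_op, b_op.conj().T)
assert np.allclose(rho, a_op - 1j * b_op)
# The sum of squares equals the symmetrized product, the identity behind Eq. (34).
assert np.allclose(a_op @ a_op + b_op @ b_op,
                   0.5 * (rho @ rho.conj().T + rho.conj().T @ rho))
print("A_n(Q), B_n(Q) Hermitian; A^2 + B^2 = (rho rho^dag + rho^dag rho)/2")
```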
In the case \(\mathbf{Q}\neq 0\) we can write \(\hat{A}_{n}\) as \[\hat{A}_{n}(\mathbf{Q}\neq 0) =\frac{1}{2}\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{\mathbf{k} }^{N_{k}}\sum_{pq}^{N/2}\left(L_{p\mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}a^{ \dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\in\mathbf{Q})\sigma}+L^{*}_{p \mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}a^{\dagger}_{q(\mathbf{k}\in\mathbf{Q}) \sigma}a_{p\mathbf{k}\sigma}\right)\] \[=\frac{1}{2}\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{ \mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}\mathrm{Re}[L_{p\mathbf{k}q(\mathbf{k}\in \mathbf{Q}),n}]\left(a^{\dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\in\mathbf{ Q})\sigma}+a^{\dagger}_{q(\mathbf{k}\in\mathbf{Q})\sigma}a_{p\mathbf{k}\sigma}\right)\] \[\quad+\frac{i}{2}\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{ \mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}\mathrm{Im}[L_{p\mathbf{k}q(\mathbf{k}\in \mathbf{Q}),n}]\left(a^{\dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\in\mathbf{ Q})\sigma}-a^{\dagger}_{q(\mathbf{k}\in\mathbf{Q})\sigma}a_{p\mathbf{k}\sigma} \right). \tag{35}\] Applying the Jordan-Wigner representation then gives \[\hat{A}_{n}(\mathbf{Q}\neq 0)=\sum_{\sigma\in\{\uparrow,\downarrow\}} \sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}\left(\frac{i\mathrm{Re}[L_{p\mathbf{ k}q(\mathbf{k}\in\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q( \mathbf{k}\in\mathbf{Q})\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q( \mathbf{k}\in\mathbf{Q})\sigma}\right)\right.\] \[\qquad\qquad\qquad\qquad\qquad+\left.\frac{i\mathrm{Im}[L_{p \mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma} \vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z }Y_{q(\mathbf{k}\in\mathbf{Q})\sigma}\right)\right). \tag{36}\] The same reasoning can be performed for \(\hat{B}_{n}(\mathbf{Q}\neq 0)\), which gives the plus and minus signs between \(a^{\dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\in\mathbf{Q})\sigma}\) and \(a^{\dagger}_{q(\mathbf{k}\in\mathbf{Q})\sigma}a_{p\mathbf{k}}\) in Eq. (35) reversed, so the roles of the real and imaginary parts are reversed. As a result we obtain \[\hat{B}_{n}(\mathbf{Q}\neq 0)=\sum_{\sigma\in\{\uparrow,\downarrow\}} \sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}\left(\frac{i\mathrm{Im}[L_{p\mathbf{ k}q(\mathbf{k}\in\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q( \mathbf{k}\in\mathbf{Q})\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q( \mathbf{k}\in\mathbf{Q})\sigma}\right)\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.+\left.\frac{i\mathrm{Re}[L_{ p\mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k} \sigma}\vec{Z}X_{q(\mathbf{k}\in\mathbf{Q})\sigma}+\vec{Z}Y_{p\mathbf{k} \sigma}\vec{Z}Y_{q(\mathbf{k}\in\mathbf{Q})\sigma}\right)\right). \tag{37}\] Accounting for the cases with \(\mathbf{Q}=0\), we may use the same expressions with an extra identity, which yields a one-body correction when squaring. We show in Appendix B.1 that this results in the total one-body operator \[\tilde{H}_{1}=\sum_{\sigma\in\{\uparrow,\downarrow\}}\sum_{\mathbf{k}}^{N_{k}} \sum_{p,q=1}^{N/2}\left(h_{p\mathbf{k},q\mathbf{k}}+\sum_{r=1}^{N/2}\sum_{ \mathbf{k}^{\prime}}^{N_{k}}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)a^{\dagger}_{p\mathbf{k}\sigma}a_{q\mathbf{k}\sigma} \tag{38}\] as before. Therefore the associated \(\lambda\) is again \(\lambda_{\tilde{H}_{1}}\) as given in Eq. (25). 
The \(\lambda\) for the two-body term is then \[\lambda_{V}=\frac{1}{2}\sum_{\mathbf{Q}}\sum_{n}^{M}\left(\sum_{\mathbf{k},pq}(|\mathrm{Re}[L_{p\mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}]|+|\mathrm{Im}[L_{p\mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}]|)\right)^{2}. \tag{39}\] This expression can be obtained by first summing the absolute values of the weights in the linear combination of unitaries for \(\hat{A}\) and \(\hat{B}\) to give \[\sum_{\mathbf{k},pq}(|\mathrm{Re}[L_{p\mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}]|+|\mathrm{Im}[L_{p\mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}]|). \tag{40}\] This is obtained by noting that the sum over the spin gives a factor of \(2\), and there are two unitary operators for each of the real and imaginary parts; together these cancel the factor of \(4\). Then this expression is squared for each of \(\hat{A}\) and \(\hat{B}\), and there is a sum over \(\mathbf{Q}\) and \(n\) in Eq. (34). A further factor of \(1/2\) is obtained because we use amplitude amplification on each operator as described in [32], thus giving our expression for \(\lambda_{V}\). Next we describe the method to block encode the Hamiltonian in this single-factorized representation. The key idea is to perform a state preparation over \(\mathbf{Q}\) and \(n\), then block encode the squares of \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\) using a single step of oblivious amplitude amplification (which saves a factor of \(2\) for the value of \(\lambda\)). That is, we perform block encodings of \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\), reflect on an ancilla register, then apply the block encodings again. For the initial state preparation on \(\mathbf{Q},n\), the number of items of data is \(MN_{k}+1\), where the \(+1\) is for the one-body part of the Hamiltonian. This state preparation is via coherent alias sampling, so the dominant cost is from the QROM needed to output the keep and alt values. That has complexity scaling as \(\widetilde{\mathcal{O}}(\sqrt{MN_{k}})\), where the tilde accounts for the size of the items of data. For both \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\) we have weightings according to the real and imaginary parts of \(L_{p\mathbf{k}q(\mathbf{k}\ominus\mathbf{Q}),n}\), but the difference is in what operations are performed in the sum. Therefore, for an LCU block encoding, the state preparation step may be identical between \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\). For each value of \(\mathbf{Q},n\), the number of unique values of \(\mathbf{k},p,q\) to consider is \(N_{k}N^{2}/4\). Unlike the supercell case we cannot take advantage of symmetry between \(p\) and \(q\), because we have \(p\mathbf{k}\) and \(q(\mathbf{k}\ominus\mathbf{Q})\). The relation between \(\mathbf{k}\) and \(\mathbf{k}\ominus\mathbf{Q}\) is governed by the value of \(\mathbf{Q}\), which is given in the outer sum, and so we cannot exchange \(p\) and \(q\). There is a further factor of \(2\) for the number of items of data, because both real and imaginary parts are needed. Accounting for the values of \(\mathbf{Q},n\), the total number of items of data that must be output by the QROM used in the state preparation is \((MN_{k}+1)N_{k}N^{2}/2=\mathcal{O}(N_{k}^{2}N^{3})\), given that \(M\) scales as \(\mathcal{O}(N)\). Again, because the size of the items of data is logarithmic, this gives a complexity \(\widetilde{\mathcal{O}}(N_{k}N^{3/2})\).
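For reference, Eq. (39) translates directly into a few lines of NumPy. The tensor layout below is the same hypothetical `L[Q, k, p, q, n]` used in the earlier sketch, covering a single spin sector; it is an assumption made for illustration.

```python
import numpy as np

def lambda_sf_two_body(L):
    """Evaluate lambda_V of Eq. (39) from the Cholesky tensor
    L[Q, k, p, q, n] = L_{pk, q(k ⊖ Q), n} (single spin sector)."""
    n_q, _, _, _, n_aux = L.shape
    lam = 0.0
    for Q in range(n_q):
        for n in range(n_aux):
            block = L[Q, :, :, :, n]
            # Eq. (40): 1-norm of the real and imaginary LCU weights for this (Q, n)
            one_norm = np.abs(block.real).sum() + np.abs(block.imag).sum()
            lam += one_norm ** 2
    return 0.5 * lam
```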
In contrast, in the supercell calculation, each \(\hat{A}_{n}\) and \(\hat{B}_{n}\) would have \(\mathcal{O}(N_{k}^{2}N^{2})\) entries, and the rank would be \(\mathcal{O}(N_{k}N)\), for a total number of items of data \(\mathcal{O}(N_{k}^{3}N^{3})\). That would give a complexity \(\widetilde{\mathcal{O}}(N_{k}^{3/2}N^{3/2})\), so there is a factor of \(\sqrt{N_{k}}\) improvement obtained by taking advantage of the symmetry. In the state preparation we only prepare \(p,q\) for \(p\leq q\), and the full range of values should be produced using a swap controlled by an ancilla register. A further subtlety in the implementation as compared to prior work is that the complex conjugate is needed as well. This may be implemented using a sign flip on the qubit indicating the imaginary part, so it is just a Clifford gate. A major difference is in the selection of operations for \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\). We see that there are two steps where we need to apply an operation of the form \(\vec{Z}X\) or \(\vec{Z}Y\), and the choice of \(X\) or \(Y\). The selection of where the \(X\) or \(Y\) is applied (indicated by the subscript) can be implemented in the standard way. The choice of whether \(X\) or \(Y\) is applied depends on four qubits. 1. The qubit selecting between the one- and two-body parts. 2. A qubit selecting between \(A\) and \(B\), which can simply be prepared in an equal superposition using a Hadamard because there are equal weightings between these operators. 3. A qubit selecting between the real and imaginary parts of \(L_{p\mathbf{k}q(\mathbf{k}\in\mathbf{Q}),n}\), which was prepared in the state preparation. 4. A qubit selecting between the two terms shown above in each line of the expressions for \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\). This qubit can also be prepared using a Hadamard. Using a trivial number of operations on these qubits we can determine whether it is \(X\) or \(Y\) that needs to be performed. The cost of the controlled unitary is doubled because we apply a controlled \(\vec{Z}X\) and a controlled \(\vec{Z}Y\), but this cost is trivial compared to the state preparation cost so has little effect on the overall complexity. A further subtlety in the implementation is that in the second implementation of \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\), we simply use the qubit flagging the one-body part to control whether the Pauli string \(\vec{Z}X\) or \(\vec{Z}Y\) is applied at all. This ensures that the square is not obtained for the one-body part. For a more in-depth explanation of the implementation, see the circuit diagram in Figure 3 and the explanation in Appendix B.2. Figure 4 demonstrates the \(\sqrt{N_{k}}\) improvement in constructing the walk operator by symmetry-adapting. Even for small \(N_{k}\) there is a clear separation between the cost of supercell (SC) and symmetry-adapted oracles that agrees with the theoretical scalings of \(1.5\) and \(1.0\), respectively. ### The double-factorization Hamiltonian representation In the sparse and SF LCU approaches we have found that there is a factor of \(\sqrt{N_{k}}\) savings in Toffoli costs and logical qubit costs for symmetry-adapted block encoding constructions over their non-symmetry-adapted counterparts (supercell calculations). The double-factorization (DF) representation continues this trend, though the origin of the speedup is different. 
In the double-factorization circuits each unitary of the LCU is a rank-one one-body operator that can be thought of as the outer product of two vectors of ladder operators, where each vector of ladder operators is obtained by a Givens rotation with multiqubit control based on other indices. First notice that for SF there is \(\mathcal{O}(N_{k}^{2}N^{3})\) data to output to specify the Hamiltonian via the Cholesky factors. The factors come from two momentum indices \(\mathbf{k},\mathbf{Q}\), two band indices \(p,q\), and one auxiliary index \(n\). In this section we demonstrate that by using a workspace register to apply Givens rotations to pairs of band indices, \(\{\mathbf{k},\mathbf{k}\mathbf{\ominus}\mathbf{Q}\}\), the complexity of the DF LCU can also be improved by a factor of \(\sqrt{N_{k}}\) over supercell calculations. To construct the DF LCU, we will separate \(\hat{A}_{n}(\mathbf{Q})\) and \(\hat{B}_{n}(\mathbf{Q})\) out into sums over \(\mathbf{k}\). To express this, instead of having \(\rho_{n}(\mathbf{Q})\), we define \(\rho_{n}(\mathbf{Q},\mathbf{k})\) \[\hat{\rho}_{n}(\mathbf{Q},\mathbf{k})=\left(\sum_{\sigma\in\{\uparrow,\downarrow \}}\sum_{pq}^{N/2}L_{p\mathbf{k}q(\mathbf{k}\mathbf{\ominus}\mathbf{Q}),n}a^{ \dagger}_{p\mathbf{k}\sigma}a_{q(\mathbf{k}\mathbf{\ominus}\mathbf{Q})\sigma} \right),\qquad\hat{\rho}_{n}^{\dagger}(\mathbf{Q},\mathbf{k})=\left(\sum_{ \sigma\in\{\uparrow,\downarrow\}}\sum_{pq}^{N/2}L^{*}_{p\mathbf{k}q(\mathbf{ k}\mathbf{\ominus}\mathbf{Q}),n}a^{\dagger}_{q(\mathbf{k}\mathbf{\ominus} \mathbf{Q})\sigma}a_{p\mathbf{k}\sigma}\right) \tag{41}\] so then the Hermitian one-body operators that are squared to form the two body part of the Hamiltonian are \[\hat{A}_{n}(\mathbf{Q}) =\sum_{\mathbf{k}}\frac{1}{2}(\hat{\rho}_{n}(\mathbf{Q},\mathbf{ k})+\hat{\rho}_{n}^{\dagger}(\mathbf{Q},\mathbf{k}))\,, \tag{42}\] \[\hat{B}_{n}(\mathbf{Q}) =\sum_{\mathbf{k}}\frac{i}{2}(\hat{\rho}_{n}(\mathbf{Q},\mathbf{ k})-\hat{\rho}_{n}^{\dagger}(\mathbf{Q},\mathbf{k}))\,. \tag{43}\] Figure 3: The circuit for performing the state preparation and controlled operations for the single factorization approach. The register labelled \(\ell\) is a contiguous register for \(\mathbf{Q},n\), with \(\mathbf{Q}\) also output in the state preparation. The inner state preparation uses \(\ell\) as a control. Then the minus on \(\mathbf{Q}\) controlled by \(\mathbf{k}\) is to compute \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\), with that register reset to \(\mathbf{Q}\) later with a controlled addition. A qubit is used to swap \(p\) and \(q\) to generate that symmetry, and another is used to swap the spin up and spin down components of the system so the selection only need act on the spin down component. The qubits labelled \(\mathrm{Re}/\mathrm{Im}\), \(A/B\), and “term” are the qubits selecting the real versus imaginary parts, \(A\) versus \(B\), and the two terms in each line of the Hamiltonian. These correspond to \(b_{1},b_{2},b_{3}\) in Appendix B.2, and the Toffoli and CNOT gates are used so that the “term” qubit can be used to select whether the Paul string with \(X\) or \(Y\) is applied. The selection is performed twice, once for each of the Pauli strings and so is controlled by \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\) and \(q\) the first time, then \(\mathbf{k}\) and \(p\) the second time. A controlled phase between \(\mathrm{Re}/\mathrm{Im}\) and “term” (also controlled by the success flag qubits) is used to generate the correct sign for the term. 
The block encoding of \(A/B\) is performed twice, with the reflection on the ancilla qubits in the middle generating the step of oblivious amplitude amplification. The third register flags that we have the two-body part of the Hamiltonian, and is used to control the block encoding of \(A/B\) the second time to ensure it is not performed for the one-body part. Just as in the single factorization case, we have the two-body part of the Hamiltonian \[\hat{H}_{2}^{\prime}=\frac{1}{2}\sum_{\mathbf{Q}}^{N_{k}}\sum_{n}^{M}\left(\hat{ A}_{n}^{2}(\mathbf{Q})+\hat{B}_{n}^{2}(\mathbf{Q})\right). \tag{44}\] We can write \(\hat{A}_{n}(\mathbf{Q})\) as \[\hat{A}_{n}(\mathbf{Q})=\sum_{\mathbf{k}}\left[U_{n}^{A}(\mathbf{Q},\mathbf{k} )\left(\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},A}}f_{p}^{A}( \mathbf{Q},n,\mathbf{k})n_{pk\sigma}\right)U_{n}^{A}(\mathbf{Q},\mathbf{k})^{ \dagger}\right] \tag{45}\] where the basis rotation unitary \(U_{n}(\mathbf{Q},\mathbf{k})\) acts on orbitals indexed by \(\mathbf{k}\) and \(\mathbf{k}\)\(\ominus\)\(\mathbf{Q}\), \(\Xi_{\mathbf{Q},n,\mathbf{k},A}\) corresponds to a rank cutoff for \(A\), and \(f_{\mathbf{p}}^{A}(\mathbf{Q},n,\mathbf{k})\) is the eigenvalue of the one body operator that is diagonalized by \(U_{n}(\mathbf{Q},\mathbf{k})\). The expression for \(\hat{B}_{n}(\mathbf{Q})\) is similar, and we use \(\Xi_{\mathbf{Q},n,\mathbf{k},B}\) to denote the rank cutoff. In practice, for the implementation we would apply a different basis rotation for each individual value of \(p\). As explained by [32], when doing that the number of Givens rotations needed only corresponds to the number of orbitals it is acting upon, instead of the square. Here we have two momentum modes \(\{\mathbf{k},\mathbf{k}\)\(\ominus\)\(\mathbf{Q}\}\) with \(N\) orbitals for each, suggesting there should be \(2N\). However, there is no mixture between the different spin states indexed by \(\sigma\), so that gives the number of orbitals as \(N\). To quantify the amount of information needed to specify the rotations for the Hamiltonian, there is a \(\mathbf{Q}\) summation, \(n\) summation, \(\mathbf{k}\) summation, \(p\) summation, and we need to specify \(N\) Givens rotations for each. In turn, each Givens rotation needs two angles. The total data here therefore scales as \[\widetilde{\mathcal{O}}\left(N\sum_{\mathbf{Q},n,\mathbf{k}}(\Xi_{\mathbf{Q},n,k,A}+\Xi_{\mathbf{Q},n,k,B})\right), \tag{46}\] where a factor of \(N\) comes from the number of Givens rotations, and the tilde accounts for the bits of precision given for the rotations. By analogy with the supercell case, it is convenient to define an average rank \[\Xi:=\frac{1}{2N_{k}M}\sum_{\mathbf{Q},n,\mathbf{k}}(\Xi_{\mathbf{Q},n,k,A}+ \Xi_{\mathbf{Q},n,k,B}). \tag{47}\] Figure 4: (a) Number of \(k\)-points verses Toffoli cost to implement the block encoding for the single factorization LCU evaluated for the benchmark systems listed in Table 1 described using the cc-pVDZ and cc-pVTZ basis sets and \(\Gamma\)-centered Monkhorst-Pack grids of size [1, 1, 1] to [3, 3, 3]. Each point is a single system described at a particular basis set and \(k\)-mesh where the range of the auxiliary index of the Cholesky factorization is selected to produce two-electron integrals corresponding to an MP2 error of one 1 milliHartree per unit cell with respect to an untruncated auxiliary index range. We divide the Toffoli complexity for implementing select + prepare + prepare\({}^{-1}\) by \(N^{3/2}\), which is the shared scaling in the number of bands. 
The different scaling in number of \(k\)-points becomes clear: \(N_{k}\) for symmetry-adapted block encodings and \(N_{k}^{3/2}\) for supercell non-symmetry-adapted block encodings. We observe similar behavior for qubit count and for plotting oracle Toffoli complexity versus the number of bands. (b) The value of \(\lambda\) per unit cell (\(\lambda/N_{k}\)) as a function of the total system size \(NN_{k}\) for the same systems described with the same cutoffs used in (a). This has division by \(N_{k}\) for the \({\bf Q}\) sum and \(M\) for the \(n\) sum, but no division by a factor accounting for \({\bf k}\). That is, it is the average rank for each value of \({\bf Q}\) and \(n\), with the sum over \({\bf k}\) regarded as part of the rank. Then it is most closely analogous to the rank in the supercell case, and it is found that it similarly scales as \({\cal O}(N_{k}N)\). In terms of \(\Xi\), the scaling of the amount of data can be given as \(\widetilde{\cal O}(N_{k}N^{2}\Xi)\), using \(M={\cal O}(N)\). Next we describe in general terms how to perform the block encoding of the linear combination of unitaries, with the full explanation in Appendix C. As in the case of single factorisation, the general principle is to perform state preparation over \({\bf Q}\) and \(n\), then block encode the squares of \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\) using oblivious amplitude amplification. The difference is that \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\) are now block encoded in a factorized form. In more detail, the key parts are as follows. 1. Perform a state preparation over \({\bf Q}\) and \(n\), as well as a qubit distinguishing between \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\). Using the advanced QROM, the complexity of this state preparation scales approximately as the square root of the number of items of data, so as \(\widetilde{O}(\sqrt{N_{k}N})\). The tilde accounts for logarithmic factors from the size of the output. For convenience here we use a contiguous register for combined values of \({\bf Q}\) and \(n\). 2. Apply a QROM which outputs the value of \({\bf Q}\), as well as an offset needed for the contiguous register needed in the state preparation for \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\). 3. Perform the inner state preparation over \({\bf k}\) and \(p\). Here the number of items of data is \({\cal O}(N_{k}N\Xi)\), accounting for the sums over \({\bf Q},n,{\bf k},p\). The complexity via advanced QROM is approximately the square root of this quantity, \(\widetilde{\cal O}(\sqrt{N_{k}N\Xi})\). 4. Apply the QROM again to output the rotation angles for the Givens rotations needed for the basis rotation. This time the size of the output scales as \({\cal O}(N)\), so the complexity of the QROM scales as \(\widetilde{\cal O}(N\sqrt{N_{k}\Xi})\). This is the dominating term in the complexity. 5. Use control qubits to swap the system registers into the correct location. This is done first controlled by a qubit labelling the spin, \(\sigma\), which is similar to what was done in prior work. The new feature here is that registers containing \({\bf k}\) and \({\bf k}\ominus{\bf Q}\) are also used to swap system registers into \(N\) target qubits. 6. Apply the Givens rotations on these \(N\) target qubits. The complexity here only scales as \(\widetilde{\cal O}(N)\), so is smaller than in the other steps. 7. Apply a controlled \(Z\) for part of the number operator. 
This comes from representing the number operator as \((\openone-Z)/2\) and combining the identity with the one-body part of the Hamiltonian. 8. Invert the Givens rotations, controlled swaps, QROM for the Givens rotations, and state preparation over \({\bf k},p\). The complexities here are similar to those in the previous steps, but the complexities for QROM erasure are reduced. This completes the block encoding of \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\). 9. Perform a reflection on the ancilla qubits used for the state preparation on \({\bf k},p\). This is needed for the oblivious amplitude amplification. 10. Perform steps 3 to 8 again for a second block encoding of \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\). This together with the reflection gives a step of oblivious amplitude amplification, and therefore the squares of \(\hat{A}_{n}({\bf Q})\) and \(\hat{B}_{n}({\bf Q})\). 11. Invert the QROM from step 2. This has reduced complexity because it is an erasure. 12. Invert the state preparation from step 1. A quantum circuit for the procedure is shown in Figure 5. This is similar to Figure 16 in [32], except it is including the extra parts needed in order to account for the momentum \({\bf k}\) used here. In particular, \(\ell\) shown here is a contiguous register for \({\bf Q},n\), and \(p\) shown in the diagram is actually a contiguous register for \({\bf k},p\). The values of \({\bf Q}\) and \({\bf k}\) need to be output via QROM after the state preparations. Then \({\bf k}\ominus{\bf Q}\) is computed and \({\bf k}\) and \({\bf k}\ominus{\bf Q}\) are used to swap the required part of the system register into \(N\) target qubits where we apply the Givens rotations. The lambda value for the Hamiltonian can be calculated by determining the total L1-norm of the coefficients of the unitaries used to represent the Hamiltonian. To determine this norm, note first that the number operator is replaced with \((\openone-Z)/2\), and the identity is combined with the one-body part of the Hamiltonian. For \(\hat{A}_{n}({\bf Q})\), what is implemented therefore corresponds to \[-\frac{1}{2}\sum_{{\bf k}}\left[U_{n}^{A}({\bf Q},{\bf k})\left(\sum_{\sigma} \sum_{p}^{\Xi_{{\bf Q},n,{\bf k},4}}f_{p}^{A}({\bf Q},n,{\bf k})Z_{p{\bf k} \sigma}\right)U_{n}^{A}({\bf Q},{\bf k})^{\dagger}\right]. \tag{48}\] Figure 5: The circuit for performing the state preparation and controlled operations for the double factorization approach. The register labelled \(\ell\) is a contiguous register for preparing \(\mathbf{Q},n\), with \(\mathbf{Q}\) output next in the QROM. The register labelled \(p\) is actually a contiguous register for preparing both \(\mathbf{k}\) and \(p\). The value of \(\mathbf{k}\) is output in the next step together with the rotations. Then the minus on \(\mathbf{Q}\) controlled by \(\mathbf{k}\) is to compute \(\mathbf{k}\mathbf{\odot}\mathbf{Q}\). The “\(\times\)” on \(|\psi_{\lambda}\rangle\) controlled by the \(\mathbf{Q}\) and \(\mathbf{k}\) registers indicates that these registers are used to swap the required part of the system register into \(N\) target qubits that the Givens rotations \(R\) act upon. Summing the absolute values of coefficients here gives \[\sum_{\mathbf{k}}\sum_{p}^{\Xi_{\mathbf{Q}_{n},\mathbf{k},A}}|f_{p}^{A}(\mathbf{ Q},n,\mathbf{k})|, \tag{49}\] where the sum over the spin \(\sigma\) has given a factor of 2 which canceled the factor of 1/2. 
In implementing the square of \(\hat{A}_{n}(\mathbf{Q})\) we use oblivious amplitude amplification, which provides a factor of 1/2 to \(\lambda\). Combining this with the 1/2 in the definition of \(\hat{H}_{2}^{\prime}\) gives 1/4, and combining with the contribution from \(\hat{B}_{n}(\mathbf{Q})\) then gives \[\lambda_{\mathrm{DF},2}=\frac{1}{4}\sum_{\mathbf{Q},n}\left[\left(\sum_{ \mathbf{k},p}^{N_{k}\Xi_{\mathbf{Q}_{n},\mathbf{k},A}}|f_{n}^{A}(p,\mathbf{Q},\mathbf{k})|\right)^{2}+\left(\sum_{\mathbf{k},p}^{N_{k}\Xi_{\mathbf{Q}_{n}, \mathbf{k},B}}|f_{n}^{B}(p,\mathbf{Q},\mathbf{k})|\right)^{2}\right], \tag{50}\] where the superscript \(B\) on \(f\) indicates the corresponding quantity for \(\hat{B}_{n}(\mathbf{Q})\). The one-body Hamiltonian is adjusted by the one-body term arising from the identity in the representation of the number operator in the two-body Hamiltonian. This yields an effective one-body Hamiltonian (see Appendix C) \[H_{1}^{\prime}=\sum_{\mathbf{k},p,q,\sigma}\left(h_{p\mathbf{k},q\mathbf{k}}+ \sum_{\mathbf{k}^{\prime},r}V_{r\mathbf{k}^{\prime},r\mathbf{k}^{\prime},q \mathbf{k},p\mathbf{k}}\right)a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k} \sigma}. \tag{51}\] We can rewrite this as \[H_{1}^{\prime}=\sum_{\mathbf{k},\sigma}\left[U^{C}(\mathbf{k})\left(\sum_{p} ^{N/2}\lambda_{\mathbf{k},p}n_{p\mathbf{k}\sigma}\right)U^{C}(\mathbf{k})^{ \dagger}\right] \tag{52}\] where \(\lambda_{\mathbf{k},p}\) are eigenvalues of the matrix indexed by \(p,q\) in the the brackets in Eq. (51). Thus the L1-norm of \(H_{1}^{\prime}\) is the sum \[\lambda_{\mathrm{DF},1}=\sum_{\mathbf{k}}\sum_{p}|\lambda_{\mathbf{k},p}|. \tag{53}\] Figure 6: (a) The number of \(k\)-points verses Toffoli cost to implement the block encoding for the double factorization LCU evaluated for the benchmark systems listed in Table 1 described using the cc-pVDZ and cc-pVTZ basis sets and \(\Gamma\)-centered Monkhorst-Pack grids of size [1, 1, 1] to [3, 3, 3]. Each point is a single system described at a particular basis set and \(k\)-mesh where the threshold to keep eigenvalues and vectors of the second factorization is selected to produce two-electron integrals corresponding to an MP2 error of one 1 milliHartree with respect to an untruncated double factorization. On average this corresponds to a threshold value of \(1\times 10^{-4}\) for the benchmark systems. The expected \(\mathcal{O}(\sqrt{N_{k}})\) scaling improvement for symmetry-adapted walk operators is demonstrated. (b) The value of \(\lambda\) per unit cell as a function of the total system size \(NN_{k}\) for the same systems described with the same cutoffs used in (a). The reduced variational freedom in compression of the two-electron integral tensors for the symmetry-adapted walk operator construction translates to an increased value of \(\lambda\) at all system sizes. Figure 6 demonstrates the improved \(\sqrt{N_{k}}\) scaling of the block encodings coming from reducing the number of controlled rotations by \(N_{k}\). Unlike the SF case \(\lambda\) for DF has worse scaling in the symmetry-adapted setting compared to the supercell case. This is rationalized by the fact that there is a larger degree of variational freedom in the second factorization for supercell calculations (and thus more compression) compared to the symmetry-adapted case. The \(\lambda\) value is basis set dependent and can potentially be reduced by orbital optimization [79]. 
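To summarize the classical preprocessing behind Eqs. (45) and (50), the sketch below performs the second factorization for a single \((\mathbf{Q}\neq 0,n,\mathbf{k})\) block and accumulates the corresponding contribution to \(\lambda_{\mathrm{DF},2}\). Array layouts, the container holding the kept eigenvalues, and the eigenvalue threshold are illustrative assumptions.

```python
import numpy as np

def df_block_eigs(L, kpts_minus, Q, n, k, thresh=1e-4):
    """Second factorization of one term of Eq. (42): diagonalize the Hermitian
    block of (rho_n(Q,k) + rho_n^dag(Q,k))/2 coupling orbitals at k and k ⊖ Q,
    then discard eigenvalues below `thresh` (the rank cutoff Xi_{Q,n,k,A}).
    Assumes Q != 0 so k and k ⊖ Q label distinct momenta; the B-type block is
    analogous with the i/2 (rho - rho^dag) combination. L and kpts_minus use the
    same hypothetical layouts as in the earlier sketches."""
    n_orb = L.shape[2]
    kq = kpts_minus[k, Q]              # the lower half of the block lives at k ⊖ Q
    Lk = L[Q, k, :, :, n]
    M = np.zeros((2 * n_orb, 2 * n_orb), dtype=complex)
    M[:n_orb, n_orb:] = 0.5 * Lk
    M[n_orb:, :n_orb] = 0.5 * Lk.conj().T
    f, U = np.linalg.eigh(M)           # f_p^A(Q, n, k) and the diagonalizing rotation
    keep = np.abs(f) > thresh
    return f[keep], U[:, keep], kq

def lambda_df_two_body(f_A, f_B):
    """Eq. (50): f_A[(Q, n)] and f_B[(Q, n)] hold the kept eigenvalues for each
    (Q, n), already concatenated over k and p (hypothetical containers)."""
    lam = 0.0
    for key in f_A:
        lam += np.abs(f_A[key]).sum() ** 2 + np.abs(f_B[key]).sum() ** 2
    return 0.25 * lam
```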
### The tensor hypercontraction Hamiltonian representation In the tensor hypercontraction (THC) LCU representation the fact that the two-electron integrals can be represented in a symmetric Canonical Polyadic like decomposition is used to define a set of non-orthogonal basis function in which to represent the Hamiltonian, and we use a similar infrastructure to the DF algorithm to implement each term in the factorization (which is in a different non-orthogonal basis) sequentially. In the following section we describe the Bloch orbital version (symmetry-adapted) of the THC decomposition and the resulting LCU, \(\lambda\) calculation, and qubitization complexities. First we review the salient features of tensor hypercontraction for the molecular case before introducing symmetry labels. Recall that in the molecular THC approach we expand density like terms over a grid of \(M\) points (labeled \(\mu\)) and weight each grid point with a function \(\xi_{\mu}(r)\) \[\phi_{p}(\mathbf{r})\phi_{q}(\mathbf{r})\approx\sum_{\mu}\xi_{\mu}(\mathbf{r} )\phi_{p}(\mathbf{r}_{\mu})\phi_{q}(\mathbf{r}_{\mu}) \tag{54}\] which allows us to write the two-electron integral tensor as \[V_{pqrs}=\sum_{\mu\nu}\chi_{p}^{(\mu)}\chi_{q}^{(\mu)}\zeta_{\mu\nu}\chi_{r}^{ (\nu)}\chi_{s}^{(\nu)} \tag{55}\] where the central tensor is defined as \[\zeta_{\mu\nu}=\int d\mathbf{r}\,\int d\mathbf{r}^{\prime}\,\frac{\xi_{\mu}( \mathbf{r})\xi_{\nu}(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}. \tag{56}\] In order to incorporate translational symmetry into the THC factorization the decomposition of the density is performed on the _cell periodic_ part of the Bloch orbitals as [80; 81] \[u^{*}_{p\mathbf{k}_{p}}(\mathbf{r})u_{q\mathbf{k}_{q}}(\mathbf{r})\approx\sum _{\mu}\xi_{\mu}(\mathbf{r})u^{*}_{p\mathbf{k}_{p}}(\mathbf{r}_{\mu})u_{q \mathbf{k}_{q}}(\mathbf{r}_{\mu}), \tag{57}\] where \(u_{p\mathbf{k}_{p}}(\mathbf{r})=e^{-i\mathbf{k}_{p}\mathbf{r}}\phi_{p\mathbf{ k}_{p}}(\mathbf{r})\). Then the two-electron integral tensor has the form \[V_{p\mathbf{k}_{p},q\mathbf{k}_{q},r\mathbf{k}_{r},s\mathbf{k}_ {s}} = \int d\mathbf{r}\int d\mathbf{r}^{\prime}\phi^{*}_{p\mathbf{k}_{ p}}\phi_{q\mathbf{k}_{q}}V(\mathbf{r},\mathbf{r}^{\prime})\phi^{*}_{r\mathbf{k}_{s}} \phi_{s\mathbf{k}_{s}} \tag{58}\] \[= \sum_{\mu\nu}u^{*}_{p\mathbf{k}_{p}}(\mathbf{r}_{\mu})u_{q\mathbf{ k}_{q}}(\mathbf{r}_{\mu})\chi^{\mathbf{k}_{p},\mathbf{k}_{q},\mathbf{k}_{r}, \mathbf{k}_{s}}_{\mu\nu}u^{*}_{r\mathbf{k}_{r}}(\mathbf{r}_{\nu})u_{s\mathbf{ k}_{s}}(\mathbf{r}_{\nu})\] \[= \sum_{\mu\nu}\chi^{(\mu)*}_{p\mathbf{k}_{p}}\chi^{(\mu)}_{q\mathbf{ k}_{q}}\zeta^{(\mu)}_{\mu\nu}\chi^{\mathbf{k}_{p},\mathbf{k}_{q},\mathbf{k}_{r}, \mathbf{k}_{s}}_{\nu\nu}\chi^{(\nu)*}_{r\mathbf{k}_{r}}\chi^{(\nu)}_{s\mathbf{ k}_{s}}\] where \(\chi^{(\mu)}_{q\mathbf{k}_{q}}=u_{q\mathbf{k}_{q}}(\mathbf{r}_{\mu})\), \(V(\mathbf{r},\mathbf{r}^{\prime})=|\mathbf{r}-\mathbf{r}^{\prime}|^{-1}\), and \[\zeta^{\mathbf{k}_{p},\mathbf{k}_{q},\mathbf{k}_{r},\mathbf{k}_{s}}_{\mu\nu}= \int d\mathbf{r}\int d\mathbf{r}^{\prime}e^{-i(\mathbf{k}_{p}-\mathbf{k}_{q}) \cdot\mathbf{r}}\xi_{\mu}(\mathbf{r})V(\mathbf{r},\mathbf{r}^{\prime})\xi_{ \nu}(\mathbf{r}^{\prime})e^{i(\mathbf{k}_{s}-\mathbf{k}_{r})\cdot\mathbf{r}^{ \prime}}. \tag{59}\] Some care needs to be taken when bringing this into a form similar to Eq. (5). 
First recall that we have \(\mathbf{k}_{p}-\mathbf{k}_{q}+\mathbf{k}_{r}-\mathbf{k}_{s}=\mathbf{G}_{pqrs}\), where \(\mathbf{G}_{pqrs}\) is a reciprocal lattice vector, and we are working with a uniform \(\Gamma\)-point centered momentum grid with dimensions \(\mathbf{N}=[N_{x},N_{y},N_{z}]\) and \(N_{k}=N_{x}N_{y}N_{z}\). To eliminate one of the four momentum modes, we identify \(\mathbf{Q}=\mathbf{k}_{p}\ominus\mathbf{k}_{q}\) and \(\mathbf{Q}=\mathbf{k}_{s}\ominus\mathbf{k}_{r}\), and set \(\mathbf{k}_{p}=\mathbf{k}\), \(\mathbf{k}_{q}=\mathbf{k}\ominus\mathbf{Q}\), \(\mathbf{k}_{s}=\mathbf{k}^{\prime}\) and \(\mathbf{k}_{r}=\mathbf{k}^{\prime}\ominus\mathbf{Q}\). To evaluate the \(\zeta\) tensor we still need to know the value of \(\mathbf{k}_{p}-\mathbf{k}_{q}\) in absolute terms given a value for \(\mathbf{Q}\) and \(\mathbf{k}\). We note that mapping the difference \(\mathbf{k}_{p}-\mathbf{k}_{q}\) back into our \(k\)-point mesh amounts to adding a specific reciprocal lattice vector \(\mathbf{G}_{pq}^{\mathbf{Q}}=(\mathbf{k}_{p}-\mathbf{k}_{q})-\mathbf{Q}=(\mathbf{k}-(\mathbf{k}\ominus\mathbf{Q}))-\mathbf{Q}\equiv\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}\), with a similar expression for \(\mathbf{k}^{\prime}\) (the subtraction here is not modular). Thus, given a \(\mathbf{Q}\) and \(\mathbf{k}\) we can determine \(\mathbf{k}\ominus\mathbf{Q}\) and \(\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}\). With these replacements we can write \[V_{p\mathbf{k}_{p},q\mathbf{k}_{q},r\mathbf{k}_{r},s\mathbf{k}_{s}}\to V_{p\mathbf{k},q(\mathbf{k}\ominus\mathbf{Q}),r(\mathbf{k}^{\prime}\ominus\mathbf{Q}),s\mathbf{k}^{\prime}} =\sum_{\mu\nu}\chi_{p\mathbf{k}}^{(\mu)*}\chi_{q(\mathbf{k}\ominus\mathbf{Q})}^{(\mu)}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}}\chi_{r(\mathbf{k}^{\prime}\ominus\mathbf{Q})}^{(\nu)*}\chi_{s\mathbf{k}^{\prime}}^{(\nu)} =\sum_{\mu\nu}\chi_{p\mathbf{k}}^{(\mu)*}\chi_{q(\mathbf{k}\ominus\mathbf{Q})}^{(\mu)}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}},\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}-\mathbf{Q}}}\chi_{r(\mathbf{k}^{\prime}\ominus\mathbf{Q})}^{(\nu)*}\chi_{s\mathbf{k}^{\prime}}^{(\nu)}, \tag{60}\] where we have used \[\zeta_{\mu\nu}^{\mathbf{k}_{p},\mathbf{k}_{q},\mathbf{k}_{r},\mathbf{k}_{s}}\to\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}} =\int d\mathbf{r}\int d\mathbf{r}^{\prime}e^{-i(\mathbf{Q}+\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}})\cdot\mathbf{r}}\xi_{\mu}(\mathbf{r})V(\mathbf{r},\mathbf{r}^{\prime})\xi_{\nu}(\mathbf{r}^{\prime})e^{i(\mathbf{Q}+\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}-\mathbf{Q}})\cdot\mathbf{r}^{\prime}} =\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}},\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}-\mathbf{Q}}}. \tag{61}\] In practice there are at most 8 values of \(\mathbf{G}\), so we only need to classically determine at most \(8^{2}N_{k}\) values of \(\zeta\), as opposed to \(N_{k}^{3}\).
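The modular index arithmetic just described, and the reconstruction of a two-electron integral block from the symmetry-adapted THC factors in Eq. (60), can be sketched as follows. The index conventions (integer triples on a Γ-centered mesh) and the way the factors are passed in are assumptions made for illustration.

```python
import numpy as np

def k_minus_q(k_idx, q_idx, mesh):
    """Return the grid index of k ⊖ Q and the integer reciprocal-lattice shift
    G_{k,k-Q} = (k - (k ⊖ Q)) - Q, in units of the mesh, for integer index
    triples on a uniform Γ-centered Monkhorst-Pack grid of dimensions `mesh`."""
    k, q, mesh = (np.asarray(x) for x in (k_idx, q_idx, mesh))
    kq = (k - q) % mesh                # wrap k - Q back into the grid
    G = (k - kq - q) // mesh           # each component is 0 or -1, so at most 8 values
    return kq, G

def thc_eri_block(chi_k, chi_kq, chi_kpq, chi_kp, zeta_block):
    """Eq. (60): V_{p k, q(k ⊖ Q), r(k' ⊖ Q), s k'} from symmetry-adapted THC
    factors. Each chi_* is an (n_orb, M_thc) array of u_{p k}(r_mu) evaluated at
    the THC grid points, and zeta_block is the (M_thc, M_thc) central tensor
    zeta^{Q, G1, G2} of Eq. (61) for the relevant Q, G1, G2 (assumed layouts)."""
    return np.einsum('pm,qm,mn,rn,sn->pqrs',
                     chi_k.conj(), chi_kq, zeta_block,
                     chi_kpq.conj(), chi_kp, optimize=True)
```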
We can then write \[H_{2} =\frac{1}{2}\sum_{\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}}\sum _{pqrs}\sum_{\sigma\tau}V_{p\mathbf{k},q(\mathbf{k}\mathbf{\ominus}\mathbf{Q} ),r(\mathbf{k}^{\prime}\mathbf{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p \mathbf{k}\mathbf{\ominus}\mathbf{Q}}^{\dagger}a_{q(\mathbf{k}\mathbf{\ominus }\mathbf{Q})\sigma}a_{r(\mathbf{k}^{\prime}\mathbf{\ominus}\mathbf{Q})\tau}^{ \dagger}a_{s\mathbf{k}^{\prime}\tau}\] \[=\frac{1}{2}\sum_{\mathbf{Q},\mathbf{k},\mathbf{k}^{\prime}}\sum _{pqrs}\sum_{\sigma\tau}\sum_{\mu\nu}\chi_{p\mathbf{k},\mu}^{*}\chi_{q(\mathbf{ k}\mathbf{\ominus}\mathbf{Q}),\mu}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{ \mathbf{k},\mathbf{k}-\mathbf{Q}},\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{ \prime}-\mathbf{Q}}}\chi_{r(\mathbf{k}^{\prime}\mathbf{\ominus}\mathbf{Q}), \nu}^{*}\chi_{s\mathbf{k}^{\prime},\nu}a_{p\mathbf{k}\mathbf{\ominus}\mathbf{Q }}^{\dagger}a_{q(\mathbf{k}\mathbf{\ominus}\mathbf{Q})\sigma}a_{r(\mathbf{k}^{ \prime}\mathbf{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau}\] \[=\frac{1}{2}\sum_{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\sum_{ \mu\nu}\sum_{\sigma\tau}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\] \[\quad\times\left(\sum_{\mathbf{k}|\mathbf{G}_{\mathbf{k},\mathbf{k }-\mathbf{Q}}=\mathbf{G}_{1}}\sum_{pq}\chi_{p\mathbf{k},\mu}^{*}\chi_{q(\mathbf{ k}\mathbf{\ominus}\mathbf{Q}),\mu}a_{p\mathbf{k}\mathbf{\ominus}\mathbf{Q}}^{ \dagger}a_{q(\mathbf{k}\mathbf{\ominus}\mathbf{Q})\sigma}\right)\left(\sum_{ \mathbf{k}^{\prime}|\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}- \mathbf{Q}}=\mathbf{G}_{2}}\sum_{rs}\chi_{r(\mathbf{k}^{\prime}\mathbf{\ominus }\mathbf{Q}),\nu}^{*}\chi_{s\mathbf{k}^{\prime},\nu}a_{r(\mathbf{k}^{\prime} \mathbf{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau}\right), \tag{62}\] where in going from the second to the third line of Eq. (62) we have rewritten the sum over \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\) as a double sum over all \(8^{2}\) values of \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\), and a restricted sum on \(\mathbf{k}\) such that for a given \(\mathbf{G}_{1}\) and \(\mathbf{Q}\) we only sum over those \(\mathbf{k}\) which satisfy \(\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}=\mathbf{G}_{1}\). Here the notation \(\mathbf{G}_{\mathbf{k}_{p},\mathbf{k}_{q}}\) is used as equivalent to \(\mathbf{G}_{pq}\) above. The fourfold symmetry of the two-electron integrals carries over to analogous symmetries in \(\zeta\), which are listed in Appendix D. We will then define \(\tilde{\chi}\) which are individually normalized for each \(\mathbf{k}\) and \(\mu\) so \(\sum_{p}\tilde{\chi}_{p\mathbf{k},\mu}^{*}\tilde{\chi}_{p\mathbf{k},\mu}=1\) and \[\mathcal{N}_{\mathbf{k},\mu}\tilde{\chi}_{p\mathbf{k},\mu}=\chi_{p\mathbf{k},\mu} \tag{63}\] with \(\mathcal{N}_{\mathbf{k},\mu}:=\sqrt{\sum_{p}|\chi_{p\mathbf{k},\mu}|^{2}}\). We then use these normalized \(\tilde{\chi}\) to give transformed annihilation and creation operators \[c_{\mu\mathbf{k}\sigma}=\sum_{p}\tilde{\chi}_{p\mathbf{k},\mu}a_{p\mathbf{k} \sigma},\qquad c_{\mu\mathbf{k}\sigma}^{\dagger}=\sum_{p}\tilde{\chi}_{p\mathbf{k}, \mu}^{*}a_{p\mathbf{k}\sigma}^{\dagger}\,. 
\tag{64}\] We can then write the two-body Hamiltonian as \[\hat{H}_{2} =\frac{1}{2}\sum_{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\sum_{\mu,\nu}\sum_{\sigma\tau}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\] \[\quad\times\left(\sum_{\mathbf{k}|\mathbf{G}_{\mathbf{k},\mathbf{k}- \mathbf{Q}}=\mathbf{G}_{1}}\mathcal{N}_{\mathbf{k},\mu}\mathcal{N}_{\mathbf{k} \mathbf{\ominus}\mathbf{Q},\mu}c_{\mu\mathbf{k}\mathbf{\ominus}\mathbf{Q}}^{ \dagger}c_{\mu(\mathbf{k}\mathbf{\ominus}\mathbf{Q})\sigma}\right)\left(\sum_{ \mathbf{k}^{\prime}|\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}-\mathbf{Q}}= \mathbf{G}_{2}}\mathcal{N}_{\mathbf{k}^{\prime}\mathbf{\ominus}\mathbf{Q},\nu} \mathcal{N}_{\mathbf{k}^{\prime},\nu}c_{\nu(\mathbf{k}^{\prime}\mathbf{\ominus} \mathbf{Q})\tau}^{\dagger}c_{\nu\mathbf{k}^{\prime}\mathbf{\ominus}\mathbf{Q} ^{\prime}\tau}\right). \tag{65}\] A complication for the implementation is that we would like to be able to choose the relative weighting between \(\zeta\) and \(\chi\) such that \[\sum_{\mathbf{k}|\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}=\mathbf{G}} \mathcal{N}_{\mathbf{k},\mu}\mathcal{N}_{\mathbf{k}\mathbf{\ominus}\mathbf{Q},\mu}=1. \tag{66}\] The difficulty here is that the values of \(\mathcal{N}_{\mathbf{k},\mu}\) only depend on \(\mathbf{k},\mu\), because they are based on \(\chi_{p\mathbf{k},\mu}\). This sum is also dependent on \(\mathbf{Q}\) and \(\mathbf{G}\), so for this normalization condition to hold it would mean we need to have \(\chi_{p\mathbf{k},\mu}\) also dependent on \(\mathbf{Q}\) and \(\mathbf{G}\) in a multiplicative factor (so a non-\(\mu\)-dependent way). That will leave the normalized \(\bar{\chi}_{p\mathbf{k},\mu}\) unaffected, but means that the values of \(\mathcal{N}_{\mathbf{k},\mu}\) need to have dependence on \(\mathbf{Q},\mathbf{G}\), which will need to be taken account of in the state preparation. The form in Eq. (65) then gives us a recipe for block encoding the Hamiltonian as a linear combination of unitaries. 1. First prepare a superposition state proportional to \[\sum_{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2},\mu,\nu}\sqrt{|\zeta_{\mu\nu}^{ \mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}|}\,|\mathbf{Q},\mathbf{G}_{1}, \mathbf{G}_{2},\mu,\nu\rangle\,.\] (67) This state may be prepared via the coherent alias sampling approach with a complexity dominated by the complexity of the QROM. Accounting for symmetry the dimension is about \(32N_{k}M^{2}\) and the size of the QROM output is approximately the log of that plus the number of bits for the keep probability. That gives a Toffoli complexity scaling as \[\sqrt{32N_{k}M^{2}[\log(32N_{k}M^{2})+\aleph]}\,.\] (68) 2. For each of the two expressions in brackets in Eq. (65), a preparation over \(\mathbf{k}\) or \(\mathbf{k}^{\prime}\) is needed to give a state of the form \[\sum_{\mathbf{k}|\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}=\mathbf{G}}\sqrt {\mathcal{N}_{\mathbf{k},\mu}\mathcal{N}_{\mathbf{k}\in\mathbf{Q},\mu}}\,| \mathbf{k}\rangle\,.\] (69) As explained above, the values of \(\mathcal{N}_{\mathbf{k},\mu}\) need to be chosen with (implicit) dependence on \(\mathbf{Q},\mathbf{G}\) for this to be a normalised state. This means that the amplitudes here need to be indexed by \(\mathbf{k}\), \(\mathbf{Q}\), \(\mathbf{G}_{1}\) and \(\mu\). The restricted range of values in the sum over \(\mathbf{k}\) means that the indexing over \(\mathbf{k},\mathbf{Q},\mathbf{G}_{1}\) gives \(N_{k}^{2}\) items of data, which is multiplied by \(M\) for the indexing over \(\mu\). 
So there are \(N_{k}^{2}M\) items of data needed, which is smaller than that in the first step, because it is missing the factor of 32 and typically \(N_{k}<M\). Given that the output size is approximately \(\log(N_{k})+\aleph\), the Toffoli complexity is approximately \[\sqrt{N_{k}^{2}M[\log(N_{k})+\aleph]}.\] (70) This cost is incurred twice, once for each of the factors in brackets in Eq. (65). 3. For each of the \(c\) annihilation and creation operators we perform a rotation of the basis from \(a\). This is done in the following way. 1. First use the spin \(\sigma\) or \(\tau\) to control a swap of the system registers. This is done once and inverted for each of the two \(c^{\dagger}c\) factors. Each of these 4 swaps has cost \(N_{k}N/2\). 2. Then use \(\mathbf{k}\) or \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\) to control the swap of the registers we wish to act on into working registers. The value of \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\) is used for \(c_{\mu(\mathbf{k}\in\mathbf{Q})\sigma}\), and needs to be computed to use as a control. Each of these eight swaps may be done with a Toffoli complexity approximately as half the number of system registers \(N_{k}N/2\). 3. Next \(\mathbf{k}\) (or \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\)) and \(\mu\) (or \(\nu\)) are used as a control for a QROM to output the angles for Givens rotations. There are two angles for each of \(N/2\) Givens rotations, so if they have \(\underline{\rule{0.0pt}{0.0pt}}\) each the size of the output is \(N\underline{\rule{0.0pt}{0.0pt}}\). Then the QROM complexity is about \[\sqrt{N_{k}N^{2}\underline{\rule{0.0pt}{0.0pt}}}.\] (71) This must be done 4 times (and has a smaller erasure cost). 4. The sequence of \(N/2\) Givens rotations is performed, each with 4 individual rotations on \(\underline{\rule{0.0pt}{0.0pt}}\), for a cost of \(2N\underline{\rule{0.0pt}{0.0pt}}\). This cost is incurred 8 times, twice for each of the annihilation and creation operators. 4. After the rotation of the basis, we simply need to perform the linear combination of \(\vec{Z}X\) and \(\vec{Z}Y\) for \(c^{\dagger}\) and \(c\). The \(X\) or \(Y\) is applied in a fixed location, but the \(\vec{Z}\) needs to be applied on a range of qubits chosen by \(\mathbf{k}\) or \(\mathbf{k}\mathbf{\ominus}\mathbf{Q}\). We therefore have approximately \(N_{k}\) for the unary iteration for each \(\vec{Z}\) for a total cost of about \(4N_{k}\). Lastly we would perform reflections on control ancillas as usual to construct a qubitised quantum walk from the block encoding. This cost is trivial compared to that in the other steps. For a more detailed explanation, see the circuit diagram in Figure 7 and the discussion in Appendix D. Figure 7: The quantum circuit for the block encoding of the THC representation, split into two parts with the right half at the bottom. The top shows the portion of the circuit for the first part controlled by \(\mathbf{k}^{\prime}\) and \(\nu\), and the bottom shows the (right) part of the circuit where it is controlled by \(\mathbf{k}\) and \(\mu\). The dotted rectangles show the regions for implementing the \(c\) and \(c^{\dagger}\) operators together with the Givens rotations needed to change the basis. The swaps controlled by the \(\mathbf{k}^{\prime}\) and \(\mathbf{k}\) registers are to move the appropriate qubits into target registers in order to apply the Givens rotations. 
The \(c\) and \(c^{\dagger}\) are applied using a superposition of \(X\) and \(iY\) applied using an ancilla qubit (not shown for simplicity), together with a string of \(Z\) gates for the Jordan-Wigner representation. The preparation at the beginning includes an inequality test between \(\mu\) and \(\nu\) to give a qubit flagging whether the real or imaginary part is produced. To make the implementation self-inverse, the \(\mu,\nu\) and \(\mathbf{k},\mathbf{k}^{\prime}\) pairs of registers are swapped in the middle (the left of the lower half). Also, an \(X\) gate is applied to the qubit that controls the swaps at the beginning and end. Figure 8: Violin plot of absolute errors in the \(k\)-THC-MP2 energy per cell for the benchmark set in Table 1. Here we compare the MP2 errors as a function of the THC rank parameter \(c_{\rm THC}\) using ISDF or subsequent reoptimization to generate the THC factors. Figure 9: (a) The number of \(k\)-points verses Toffoli cost to implement the block encoding for the THC factorization LCU evaluated for the benchmark systems listed in Table 1 described using the cc-pVDZ and cc-pVTZ basis sets and \(\Gamma\)-centered Monkhorst- Pack grids of size [1, 1, 1] to [3, 3, 3]. Each point is a single system described at a particular basis set and k-mesh where the range of the auxiliary index of the THC factorization is selected to produce two-electron integrals corresponding to an MP2 error of one 1 milliHartree with respect to an untruncated auxiliary index range. This corresponds to an auxiliary index that is eight times the number of orbitals in the primitive cell for symmetry adapted THC, and eight times the total number of orbitals (\(N_{k}N\)) for supercell THC. We divide the Toffoli complexity for implementing SELECT + PREPARE + PREPARE\({}^{-1}\) by N, which is the shared scaling in the number of bands. While we observe a \(\sqrt{N_{k}}\) scaling improvement for symmetry-adapted walk operations, we believe this is a finite size effect and both methods should scale linear with \(N_{k}\) for sufficiently large \(N_{k}\) The value of \(\lambda\) as a function of the total system size \(NN_{k}\) for the same systems described with the same cutoffs used in (a). The reduced variational freedom in compression of the two-electron integral tensors for the symmetry-adapted walk operator construction translates to an increased value of \(\lambda\) at all system sizes. The \(\lambda_{\rm{THC}}\) value has a one-body component and two-body component. Unlike molecular THC where the two-body component is reduced because we evolve by number operators in the non-orthogonal basis, in this version of the THC algorithm we will evolve by ladder operators in a non-orthogonal basis, and thus there is no one-body part to remove. The one-body contribution to \(\lambda_{\rm{THC}}\), \(\lambda_{\rm{THC,1}}\), is computed in a similar way as for the double factorization algorithm but noting that the extra factor of \(1/2\) coming from the \(Z\) operator is no-longer present to cancel the factor of two from spin summing. The one-body contribution to \(\lambda_{\rm{THC}}\) is \[\lambda_{\rm{THC,1}}=2\sum_{\bf k}\sum_{p}|\lambda_{p,{\bf k}}|. \tag{72}\] The two-body contribution to \(\lambda_{\rm{THC}}\), \(\lambda_{\rm{THC,2}}\), is determined by summing over all unitaries in the LCU. 
This summation can be rewritten in the form \[\lambda_{\rm{THC,2}} = 2\sum_{\bf Q}\sum_{\mu,\nu}\sum_{{\bf G}_{1},{\bf G}_{2}}\left(| \mathrm{Re}[\zeta_{\mu\nu}^{{\bf Q},{\bf G}_{1},{\bf G}_{2}}]|+|\mathrm{Im}[ \zeta_{\mu\nu}^{{\bf Q},{\bf G}_{1},{\bf G}_{2}}]|\right) \tag{73}\] \[\times\left(\sum_{{\bf k}|{\bf G}_{\bf k},{\bf k}\in{\bf Q}={\bf G }_{1}}{\cal N}_{{\bf k},\mu}{\cal N}_{{\bf k}\in{\bf Q},\mu}\right)\left(\sum_ {{\bf k}^{\prime}|{\bf G}_{\bf k},{\bf k}^{\prime},{\bf k}^{\prime}-{\bf Q}={ \bf G}_{2}}{\cal N}_{{\bf k}^{\prime}\in{\bf Q},\nu}{\cal N}_{{\bf k}^{\prime},\nu}\right)\] using the expression for \(\zeta\) described in Eq. (61). To obtain resource estimates for THC with \(k\)-points we follow a similar procedure to previous molecular work [32] and first compress the rank of the THC factors (\(M=c_{\rm{THC}}N/2\), where \(c_{\rm{THC}}\) is the THC rank parameter). In particular, we use the interpolative separable density fitting (ISDF) approach [80; 82; 83] as a starting point before subsequently reoptimizing these factors in order to compress the THC rank while regularizing \(\lambda\)[32; 37], which we will call \(k\)-THC. Further details of this procedure are provided in Appendix G. In Fig. 8 we demonstrate that a \(c_{\rm{THC}}=8\) is sufficient to obtain MP2 correlation energies within approximately 0.1 mHa/Cell for a subset of the systems considered in the benchmark set. We note that the equivalent ISDF rank may be on the order of 10-15 for comparable accuracy, which would correspond to a much larger value for \(\lambda\). Fig. 9 (a) demonstrates a \(\sqrt{N_{k}}\) scaling improvement of the block encodings in the symmetry adapted case. Note that this \(\sqrt{N_{k}}\) speedup for the block encodings is partially a finite size effect. In Fig. 10 we plot the Toffoli complexity per step as a function of \(N_{k}\) using artificially generated data to explore the large \(N_{k}\) behavior. We see that depending on the fitting range employed the extracted asymptotic scaling trends toward linear. While ultimately both the symmetry-adapted and supercell encodings should scale linearly with the system size due to the cost of unary iteration over all basis states, there are several factors that yield a \(\sqrt{N_{k}}\) saving in the symmetry-adapted case, and the relative size of the prefactors becomes important. Similar to DF, we find from Fig. 9 (b) that \(\lambda\) in the symmetry-adapted setting exhibits slightly worse scaling than for supercell calculations. This worsening of \(\lambda\) in the symmetry-adapted Figure 10: Synthetic data for the number of Toffolis required to implement the qubitization oracles with the \(k\)-THC factorization demonstrating the challenge of extracting the correct asymptotic scaling with limited finite size data. To generate the data we used the system parameters of carbon diamond in the cc-pVDZ basis set (\(N=52\), \(M=208\)) case can be understood again as a reduction in variational freedom in the symmetry adapated case, leading to smaller compression. Note that while Eq. (73) nominally scales cubicly with \(N_{k}\), we expect each individual matrix element to decay like \(N_{k}^{-1}\), which yields the expected quadratic dependence of \(\lambda\), or a linear dependence of \(\lambda\) when targeting the total energy per cell. In the supercell case, there are simply \(M^{2}=(N_{k}N)^{2}\) elements in the central tensor, which in turn controls the scaling of \(\lambda\). From Table 2 and Fig. 
9 we can conclude that there is asymptotically no advantage to incorporating symmetry in the THC factorization for the Toffoli complexity, with both the supercell and symmetry-adapted methods exhibiting approximately quadratic scaling with system size for a fixed target accuracy of the total energy per cell. ## IV Scaling comparison and runtimes for diamond We now compare runtimes and estimate total physical requirements to simulate Diamond as a representative material. In Figure 11 we plot the total Toffoli complexity for the sparse, SF, DF, and THC LCUs using symmetry-adapted block encodings and supercell calculations for Diamond with cc-pVDZ and cc-pUTZ basis sets at various Monkhorst-Pack samplings. In sparse and SF there is a clear asymptotic separation between supercell and symmetry-adapted Toffoli counts. This is expected from the fact that both block encoding constructions are asymptotically improved and \(\lambda\) does not increase. For the DF case, total Toffoli complexity for supercell and symmetry-adapted cases is similar due to the larger \(\lambda\) for the symmetry-adapted algorithm. For THC, the total Toffoli complexity is similar in the supercell and symmetry adapted case, but the asymptotic scaling is identical for the supercell and symmetry-adapted algorithms. This is due to the increase in \(\lambda\) for the symmetry-adapted algorithm. In Table 3 we tabulate the quantum resource requirements and estimated runtimes after compiling into a surface code using physical qubits with error rates of 0.01% and a 1 us cycle time. We assume four Toffoli factories similar to References [32] and [37] and observe that for systems with 52-1404 spin-orbitals the quantum resource estimates are roughly in line with extrapolated estimates from the molecular algorithms. It is important to note that while the THC resource requirements look competitive for these small systems, in its current form it is not a practical way to simulate materials at scale. This is due to the prohibitive cost of reoptimizing the THC factors which significantly limits the system sizes that can be simulated. Moreover, as discussed in Section III.4, we caution that the THC trend lines are only valid within the fitting range, and we expect that asymptotic THC Toffoli count will trend more towards \(\mathcal{O}(N_{k}^{2})\) in the thermodynamic limit. Figure 11: (a) Total Toffoli requirements for Diamond in a cc-pVDZ basis at various Monkhorst-Pack samplings of the Brillouin zone with \(\Gamma\)-point centered grids of size [1,1,1] to [3, 3, 3]. Dashed lines are fits to the supercell data that is not plotted. Solid lines are fits to the symmetry-adapted data shown as data points. (b) Total logical qubits for symmetry-adapted oracles and supercell (dotted lines). All values are estimated from 0.1 mHa per unit cell thresholds on the MP2 energy. In the case of THC we only plot the symmetry adapted data due limited THC data arising from difficulty in optimizing the supercell THC factors. ## V Classical and quantum simulations of LNO In this section, we compare modern classical computational methods with quantum resource estimates in the context of a challenging problem of industrial interest: the ground state of LiNiO\({}_{2}\). ### LNO background Layered oxides have been the most popular cathode active materials for Li-ion batteries since their commercialization in the early '90s. 
While LiCoO\({}_{2}\) is still the material of choice in the electronics industry, the increasing human, environmental and financial cost of cobalt spells out the need for cobalt-free cathode active materials, especially for automotive applications[84, 85]. The isostructural compound LiNiO\({}_{2}\) (LNO) had been identified as an ideal replacement for LiCoO\({}_{2}\) already in the '90s, due to its comparably high theoretical capacity at a lower cost [86, 87]. Despite its numerous drawbacks, LNO still serves as the perfect model system for many derivative compounds such as lithium nickel-cobalt-manganese (NCM) and lithium nickel-cobalt-aluminum oxides (NCA) that are nowadays the gold standard in the automotive industry [38]. Moreover, the constant demand for better performing materials pushes the amount of substituted Ni to the dilute regime and the research trend is approaching the asymptotic LiNiO\({}_{2}\) limit, making LiNiO\({}_{2}\) a system of interest in battery research [38]. Even the nature of the ground state of LNO is still under debate. The universally observed rhombohedral R\(\bar{3}\)m symmetry [38], with Ni being octahedrally coordinated to six oxygen atoms through six equivalent Ni-O bonds conflicts with the renowned Jahn-Teller (JT) activity of low-spin trivalent Ni, which has been experimentally proven on a local scale [38]. In a recent DFT study [39], we argued that this apparent discrepancy might be resolved by the dynamics and low spatial correlation of Jahn-Teller distortions. In that work, the energy distance between Jahn-Teller distorted and non-distorted candidates (Figure 12) compared to zero-point vibrational energies makes a strong argument in favor of the dynamic Jahn-Teller effect. A non-JT distorted structure resulting from the disproportionation of Ni\({}^{3+}\) has also been reported as a ground state candidate [40] despite the 1:1 ratio between long and short Ni-O bonds, which conflicts with the experimentally determined 2:1 ratio. In the original study, the stability of this structure has been found to depend heavily on the value of the on-site Hubbard correction applied to the PBE functional. With the SCAN-rVV10 functional (with and without on-site Hubbard correction) [39], this candidate is consistently less stable than the JT-distorted models; it is also worth mentioning that the on-site Hubbard correction considerably increases the stability of the JT-distorted models. The dependence of Jahn-Teller stabilization energies on the functional had already been observed by Radin [88] and is ascribed to the difficulty to adequately describe the doubly degenerate high-symmetry, undistorted state. In light of previous studies, we will focus on four candidate structures for the LNO ground state. These structures are shown in Figure 12. We will furthermore focus only on the energetics of the problem. The goal is to compute the relative energies of these different crystal structures without the uncertainty of DFT. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline LCU & \(k\)-mesh & Toffolis & Logical Qubits & Physical Qubits[M] & Surface Code Runtime [days] \\ \hline sparse & \([1,1,1]\) & \(4.84\times 10^{9}\) & 2478 & 2.20 & \(9.10\times 10^{-1}\) \\ & \([2,2,2]\) & \(2.66\times 10^{12}\) & 75287 & 90.57 & \(5.77\times 10^{2}\) \\ & \([3,3,3]\) & \(1.06\times 10^{14}\) & 374274 & 543.76 & \(2.61\times 10^{4}\) \\ SF & \([1,1,1]\) & \(3.20\times 10^{9}\) & 2283 & 2.05 & \(6.02\times 10^{-1}\) \\ & \([2,2,2]\) & \(3.27\times 10^{12}\) & 20567 & 24.91 & \(7.11\times 10^{2}\) \\ & \([3,3,3]\) & \(1.13\times 10^{15}\) & 47665 & 69.52 & \(3.10\times 10^{5}\) \\ DF & \([1,1,1]\) & \(9.61\times 10^{8}\) & 2396 & 1.55 & \(1.81\times 10^{-1}\) \\ & \([2,2,2]\) & \(6.74\times 10^{10}\) & 18693 & 18.47 & \(1.27\times 10^{1}\) \\ & \([3,3,3]\) & \(1.09\times 10^{12}\) & 68470 & 82.39 & \(2.37\times 10^{2}\) \\ THC & \([1,1,1]\) & \(1.67\times 10^{10}\) & 18095 & 14.20 & 3.14 \\ & \([2,2,2]\) & \(4.85\times 10^{11}\) & 36393 & 35.60 & \(1.05\times 10^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Diamond represented in a cc-pVDZ basis (52 spin-orbitals in the primitive cell) at various \(k\)-mesh sizes and the associated quantum resource requirements to compute the total energy per cell to within 1 kcal / mol. The surface code runtime is estimated using four T-factories, a physical error rate of 0.01%, and a cycle time of 1 μs. The physical qubit count is given in millions. ### Correlated \(k\)-point calculations Local-basis quantum chemistry methods for electron correlation have been increasingly applied to periodic systems as an alternative to DFT with more controllable accuracy. Here we apply two such methods, second order Moller-Plesset perturbation theory (MP2)[90, 89] and coupled cluster singles and doubles (CCSD)[91, 92], to the three distorted structures of LNO (Figure 12). Local basis methods like these can be directly compared to quantum algorithms described in this work, since both are formulated within the same framework of a crystalline Gaussian one-particle basis. While these methods cannot be easily applied to the symmetric structure, which is metallic at the mean-field level, they should provide accurate results for the distorted structures provided that the finite-size and finite-basis errors can be controlled. All mean field, MP2 and CCSD calculations were performed with the PySCF program package [93, 94]. QMCPACK [95, 96] was used to perform the ph-AFQMC calculations, where we used at least 600 walkers and a timestep of 0.005 Ha\({}^{-1}\). The population control bias was found to be negligible. In all calculations, we use separable, norm-conserving GTH pseudopotentials [97, 65] that have been recently optimized for Hartree-Fock [98]. In all calculations on LNO we use the GTH basis sets [99, 41] (GTH-SZV and GTH-DZVP specifically) that are distributed with the CP2K [42] and PySCF [94] packages. In Figure 13 we show the convergence of the minimal-basis MP2 energy as a function of effective cell size for increasingly large \(k\)-point calculations. This demonstrates the essential difficulty in converging to the bulk limit for correlated calculations: the finite-size error will converge with \(n_{k}^{-1/3}\). Shifting the \(k\)-point grid to (1/8, 1/8, 1/8) and/or twist averaging (TA) does not change the asymptotic behavior of the energy. In all other LNO calculations, we use \(\Gamma\)-centered \(k\)-point grids. 
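For orientation, a periodic MP2/CCSD calculation of the kind described here can be set up in a few lines of PySCF. The snippet below uses diamond with the stock GTH-PADE pseudopotentials and a small k-mesh purely as an illustration; it is not the exact input (basis, pseudopotential, or k-mesh) used for the LNO results.

```python
import numpy as np
from pyscf.pbc import gto, scf, mp, cc

cell = gto.Cell()
cell.atom = '''C 0.0000 0.0000 0.0000
               C 0.8917 0.8917 0.8917'''
cell.a = np.array([[0.0, 1.7834, 1.7834],
                   [1.7834, 0.0, 1.7834],
                   [1.7834, 1.7834, 0.0]])   # primitive diamond cell, Angstrom
cell.basis = 'gth-dzvp'
cell.pseudo = 'gth-pade'   # stand-in for the HF-optimized GTH pseudopotentials
cell.build()

kpts = cell.make_kpts([2, 2, 2])             # Gamma-centered 2x2x2 mesh
kmf = scf.KRHF(cell, kpts).density_fit()     # k-point restricted Hartree-Fock
kmf.kernel()

kmp2 = mp.KMP2(kmf)                          # k-point MP2
kmp2.kernel()

kccsd = cc.KRCCSD(kmf)                       # k-point CCSD
kccsd.kernel()
print('E(HF) =', kmf.e_tot,
      'E_corr(MP2) =', kmp2.e_corr,
      'E_corr(CCSD) =', kccsd.e_corr)
```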
In all calculations, the number of \(k\)-points along each reciprocal lattice vector was chosen so that the density of \(k\)-points is as close to constant as possible. While a minimal basis is useful for a qualitative understanding of the finite-size error, it does not provide sufficient accuracy to resolve the different LNO structures examined in this work. The double-zeta basis set (GTH-DZVP) is large enough to provide qualitative accuracy, but converging the result to the bulk limit is prohibitively expensive for the systems considered here. We can nonetheless provide some estimates of the ground-state CCSD and MP2 (DZVP) energies as shown in Tables 4 and 5. Since there is no evidence of particularly "strong correlation" in any of these systems (see Appendix E for a more detailed discussion), MP2 and CCSD should provide qualitatively correct estimates of the ground state energy. The unusually large MP2 correlation energy for the P2/c structure suggests it may not be as reliable for this structure, and this suspicion is confirmed by the CCSD and ph-AFQMC calculations. For CCSD and ph-AFQMC, the P2\({}_{1}\)/c structure is lowest in energy, which agrees qualitatively with the DFT calculations in Ref. [39]. However, this prediction carries with it a great deal of uncertainty due to the small simulation size, small one-particle basis set, and error in the MP2/CCSD/ph-AFQMC approximations. Figure 12: The four known LiNiO\({}_{2}\) polymorphs: high-symmetry R\(\bar{3}\)m, collinear JT-distorted C2/m, zig-zag JT-distorted P2\({}_{1}\)/c, and disproportionated P2/c. Green spheres represent Li, gray polyhedra are NiO\({}_{6}\) octahedra, and elongated Ni-O bonds are depicted as bold blue arrows. ### Single shot density matrix embedding theory Another way to apply high-level correlated methods to periodic solids is through quantum embedding methods in which a local _impurity_ is treated with a high-level method and the remainder of the system, the _bath_, is treated at a lower level of theory. 
For periodic solids, dynamical mean-field theory (DMFT) is perhaps the most widely successful such method [100, 101, 102]. 
### Quantum resource estimates for LNO Quantum resource estimates for LNO using the SF and DF LCUs are reported in Table 6. THC is not reported due to the difficulty of re-optimizing the THC tensors to have low L1-norm as discussed in References [32; 37; 79]. For the sparse LCU, a threshold of \(1\times 10^{-4}\) was determined by averaging the thresholds for the systems in Table 1 required to achieve 1 m\(E_{\text{h}}\) per unit cell. For the SF LCU, the truncation of the auxiliary basis was set to eight times the number of molecular orbitals, which was determined by requiring the error in the MP2 energy for the smallest C2/m system to be less than 1 m\(E_{\text{h}}\) per formula unit. For DF, the same requirement was used to determine a cutoff for the second factorization of \(1\times 10^{-3}\). The trends are consistent with what was observed in Section IV: DF is consistently more efficient than either the sparse or SF LCUs. For the smaller systems, these calculations are anticipated to be useful for benchmarking faster classical methods. For the larger systems, the estimated run times are daunting, but we are optimistic that further algorithmic improvements can make calculations like these feasible in the future. ## VI Conclusion In this work we developed the theory of symmetry-adapted block encodings for extended-system simulation using four different representations of the Hamiltonian as LCUs, in order to improve quantum resource costs for reaching the thermodynamic limit when simulating solids. In order to realize an asymptotic speedup due to symmetry, we substantially modify the block encodings compared with their molecular counterparts. To demonstrate these asymptotic improvements we compiled constant factors for all four LCUs and compared their performance on a suite of benchmark systems and a realistic problem in materials simulation. We find that despite a clear asymptotic speedup for walk operator construction there are competing factors (such as lower compression in Hamiltonian tensor factorizations) that make it difficult to observe a large speedup using symmetry. It was recently shown that variationally constructing tensor compressions for Hamiltonian simulation can improve quantum resource requirements [37; 79; 107], and thus we believe the compressions can be improved to ultimately demonstrate a speedup for these types of simulations. For the sparse and SF LCUs we derive an \(\mathcal{O}(\sqrt{N_{k}})\) speedup in constructing select and prepare by ensuring that only the minimal amount of symmetry-unique information is accessed by the quantum circuit through QROM. In both cases a speedup is observable, though it is much clearer in the SF case. Observing the sparse LCU speedup is more challenging due to the difficulty of converging the \(N_{k}\) and \(N\) dependence of the two-electron integrals. Compared with the molecular case, where sparse was competitive with the DF and THC algorithms [71; 32] largely due to the simplicity of select, we find that sparse is not viable for converging to the thermodynamic limit of solids. The DF and THC tensor factorizations yield LCUs as unitaries in non-orthogonal bases and lead to much higher compression than the sparse and SF LCUs. In the DF case we derive an asymptotic \(\mathcal{O}(\sqrt{N_{k}})\) improvement in Toffoli complexity and qubit cost when constructing the qubitization walk operator. Unfortunately, \(\lambda\) is increased in these cases. 
The increase is attributed to the lower variational freedom in constructing non-orthogonal bases when representing the two-electron integral tensor in factorized form, compared with the non-symmetry-adapted setting. Figure 14: Convergence of the total DMET energy for a 4 formula unit (16 atom) impurity with respect to the effective size of the mean-field calculation for the different distorted structures. For the THC case, no asymptotic speedup is formally possible. This stems from the linear cost of unary iteration over all basis states. Nevertheless, due to competing prefactors between unary iteration and state preparation, we do observe a \(\sqrt{N_{k}}\) scaling improvement in the Toffoli per step and logical qubit cost for the range of systems studied. This is likely a finite-size effect, but may be practically important when considering which algorithm to choose in the future. Thus, improving the \(\lambda\) value of THC through more sophisticated and affordable means is worth further investigation. Reaching the thermodynamic and complete basis set limit is very challenging, even for classical wavefunction methods like CCSD and ph-AFQMC. Previous ph-AFQMC results for simple insulating solids with two-atom unit cells suggest that at least a \(3\times 3\times 3\) and \(4\times 4\times 4\) sampling of the Brillouin zone is required to extrapolate correlation energies to the thermodynamic limit [108]. Similarly, it has been found that quadruple-zeta quality basis sets are required to converge the cohesive energy to less than 0.1 eV / atom, while a triple-zeta quality basis is likely sufficient for quantities such as the lattice constant and bulk modulus [109]. Similar system sizes and basis sets were found to be required for CCSD simulations of metallic systems [18]. Although the theory of finite-size corrections [110, 111, 112, 113] is still an area of active research [114, 115], the simulation of bulk systems even with these corrections typically requires on the order of 50 atoms, which in turn corresponds to hundreds of electrons and thousands of orbitals. For excited-state properties, particularly those concerning charged excitations, even larger system sizes may be required without the use of sophisticated finite-size correction schemes [116]. Thus, we suspect that simulating large system sizes will continue to be necessary in order to obtain high accuracy for condensed phase simulations. It is important to note that high-accuracy classical wavefunction methods are often considered too expensive for practical materials simulation, and DFT is still the workhorse of the field. Appendix F shows that simulating even simple solids with coarse \(k\)-meshes can take on the order of hours, which would otherwise take seconds for a modern DFT code. From the quantum resource estimates it is clear that several orders of magnitude of improvement are necessary before practical materials simulation is possible. Despite this, the fairly low scaling of phase estimation as a function of system size serves as encouragement to pursue quantum simulation for materials further. The aforementioned convergence difficulties are demonstrated in our classical calculations on the LNO system when attempting to resolve the discrepancy between band-theory predictions and experimental observations of the ground state geometry. 
Furthermore, the variance in energy between CCSD, MP2, ph-AFQMC, and DMET (and their expenses) make it difficult to select an efficient method for determining Hamiltonian parameter cutoffs to \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline System & LCU & \(k\)-mesh & \(\lambda\) & Num. Spin-Orbs. & Toffolis & Logical Qubits & Physical Qubits [M] & run time [days] \\ \hline R3m & Sparse & \([2,2,2]\) & 120382.037 & 116 & 6.16\(\times 10^{13}\) & 166946 & 242.72 & 1.51\(\times 10^{4}\) \\ & & \([3,3,3]\) & 718377.133 & 116 & 3.57\(\times 10^{15}\) & 1625295 & 2808.82 & 9.82\(\times 10^{5}\) \\ & SF & \([2,2,2]\) & 183778.821 & 116 & 7.86\(\times 10^{13}\) & 89162 & 129.77 & 1.93\(\times 10^{4}\) \\ & & \([3,3,3]\) & 2966279.293 & 116 & 4.60\(\times 10^{15}\) & 404723 & 699.68 & 1.27\(\times 10^{6}\) \\ & DF & \([2,2,2]\) & 10730.422 & 116 & 4.97\(\times 10^{12}\) & 149939 & 180.16 & 1.08\(\times 10^{3}\) \\ & & \([3,3,3]\) & 44794.803 & 116 & 7.28\(\times 10^{13}\) & 598286 & 869.02 & 1.79\(\times 10^{4}\) \\ C2/m & Sparse & \([2,2,1]\) & 58422.522 & 116 & 1.03\(\times 10^{13}\) & 83532 & 100.47 & 2.53\(\times 10^{3}\) \\ & & \([4,4,2]\) & 89333.394 & 116 & 5.37\(\times 10^{15}\) & 3051285 & 5272.93 & 1.48\(\times 10^{6}\) \\ & SF & \([2,2,1]\) & 95803.204 & 116 & 2.05\(\times 10^{13}\) & 44657 & 53.90 & 5.05\(\times 10^{3}\) \\ & & \([4,4,2]\) & 2899609.300 & 116 & 5.23\(\times 10^{15}\) & 405310 & 700.69 & 1.44\(\times 10^{6}\) \\ & DF & \([2,2,1]\) & 4873.648 & 116 & 1.18\(\times 10^{12}\) & 75178 & 90.44 & 2.56\(\times 10^{2}\) \\ & & \([4,4,2]\) & 51416.281 & 116 & 9.82\(\times 10^{13}\) & 598736 & 869.68 & 2.41\(\times 10^{4}\) \\ P2/c & Sparse & \([1,1,1]\) & 84977.359 & 464 & 2.06\(\times 10^{13}\) & 99918 & 120.21 & 5.07\(\times 10^{3}\) \\ & & \([2,2,2]\) & 1627121.892 & 464 & 1.67\(\times 10^{16}\) & 3182362 & 6454.14 & 4.59\(\times 10^{6}\) \\ & SF & \([1,1,1]\) & 201894.726 & 464 & 8.74\(\times 10^{13}\) & 92786 & 135.04 & 2.15\(\times 10^{4}\) \\ & & \([2,2,2]\) & 5666363.179 & 464 & 2.07\(\times 10^{16}\) & 839487 & 1450.95 & 5.68\(\times 10^{6}\) \\ & DF & \([1,1,1]\) & 2753.901 & 464 & 9.72\(\times 10^{11}\) & 75834 & 91.23 & 2.11\(\times 10^{2}\) \\ & & \([2,2,2]\) & 40788.113 & 464 & 1.40\(\times 10^{14}\) & 1192900 & 1732.40 & 3.44\(\times 10^{4}\) \\ P2\({}_{1}\)/c & Sparse & \([1,2,1]\) & 105584.297 & 232 & 3.39\(\times 10^{13}\) & 182864 & 265.83 & 8.34\(\times 10^{3}\) \\ & & \([2,4,2]\) & 1714723.913 & 232 & 1.50\(\times 10^{16}\) & 3116825 & 6321.24 & 4.12\(\times 10^{6}\) \\ & SF & \([1,2,1]\) & 271178.934 & 232 & 8.92\(\times 10^{13}\) & 96882 & 140.98 & 2.19\(\times 10^{4}\) \\ & & \([2,4,2]\) & 7798992.981 & 232 & 2.13\(\times 10^{16}\) & 438080 & 757.32 & 5.85\(\times 10^{6}\) \\ & DF & \([1,2,1]\) & 3958.111 & 232 & 1.27\(\times 10^{12}\) & 75383 & 90.69 & 2.76\(\times 10^{2}\) \\ & & \([2,4,2]\) & 46189.645 & 232 & 1.23\(\times 10^{14}\) & 1192758 & 1732.20 & 3.02\(\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Quantum Resource estimates for all four LNO structures normalized by the number of formula units represented in each simulation cell. R3m and C2/m are both one formula unit while P2/c is four formula units and P2\({}_{1}\)/c is two formula units. The sparse threshold is selected to be \(1.0\times 10^{-4}\), the SF the auxiliary index is truncated at eight times the number of molecular orbitals, and the DF the second factorization is truncated at \(1.0\times 10^{-4}\). use in quantum resource estimation. 
If anything, this highlights the need for high-accuracy classical computation when performing quantum resource estimates and ultimately picking an algorithm for quantum simulation. The quantum resource estimates for LNO simulations are exorbitantly expensive even at small \(k\)-mesh sizes, with estimated run times of \(\mathcal{O}(10^{2})-\mathcal{O}(10^{3})\) days using the DF LCU. Just as resource estimates for chemistry fell drastically with algorithmic developments, further algorithmic improvements are clearly needed to make an LNO-sized problem feasible on a quantum computer. Qubitization is a general tool for Hamiltonian simulation and there may be other simulation scenarios where the improved walk operators yield faster simulations. There are also areas to further improve the quantum algorithms by taking advantage of space group symmetry along with translational symmetry. In classical calculations this can lead to substantial computational savings even at the mean-field level. Just as in the case of quantum algorithms for molecular simulations, we expect the quantum resource costs to fall with further exploration of algorithmic improvements. ## Acknowledgements The authors thank Yuan Su for helpful conversations. FDM thanks Miguel Morales for helpful discussions on the form of the \(k\)-point THC factorization. DWB worked on this project under a sponsored research agreement with Google Quantum AI. DWB is also supported by Australian Research Council Discovery Projects DP190102633 and DP210101367.
2308.10601
Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer
Deep neural networks are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on clean inputs. Although many attack methods can achieve high success rates in the white-box setting, they also exhibit weak transferability in the black-box setting. Recently, various methods have been proposed to improve adversarial transferability, in which the input transformation is one of the most effective methods. In this work, we notice that existing input transformation-based works mainly adopt the transformed data in the same domain for augmentation. Inspired by domain generalization, we aim to further improve the transferability using the data augmented from different domains. Specifically, a style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans. Hence, we propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains. To avoid inconsistent semantic information of stylized images for the classification network, we fine-tune the style transfer network and mix up the generated images added by random noise with the original images to maintain semantic consistency and boost input diversity. Extensive experimental results on the ImageNet-compatible dataset show that our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models than state-of-the-art input transformation-based attacks. Code is available at: https://github.com/Zhijin-Ge/STM.
Zhijin Ge, Fanhua Shang, Hongying Liu, Yuanyuan Liu, Liang Wan, Wei Feng, Xiaosen Wang
2023-08-21T09:58:13Z
http://arxiv.org/abs/2308.10601v1
# Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer ###### Abstract. Deep neural networks are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on clean inputs. Although many attack methods can achieve high success rates in the white-box setting, they also exhibit weak transferability in the black-box setting. Recently, various methods have been proposed to improve adversarial transferability, in which the input transformation is one of the most effective methods. In this work, we notice that existing input transformation-based works mainly adopt the transformed data in the same domain for augmentation. Inspired by domain generalization, we aim to further improve the transferability using the data augmented from different domains. Specifically, a style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans. Hence, we propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains. To avoid inconsistent semantic information of stylized images for the classification network, we fine-tune the style transfer network and mix up the generated images added by random noise with the original images to maintain semantic consistency and boost input diversity. Extensive experimental results on the ImageNet-compatible dataset show that our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models than state-of-the-art input transformation-based attacks. Code is available at: [https://github.com/Zhijin-Ge/STM](https://github.com/Zhijin-Ge/STM). Adversarial attack, Adversarial transferability, Black-box attack ## 1. Introduction Existing input transformation-based attacks typically diversify the inputs by, _e.g._, translating, scaling, and mixing up different images. By analyzing these methods, we notice that they mainly adopt the transformed data in the _same source domain_ for augmentation, which might limit the adversarial transferability. Lin _et al._(Lin et al., 2017) treated the process of generating adversarial examples on the white-box model as a standard neural network training process, in which the adversarial transferability is equivalent to the model generalization. As is well known, _domain bias_(Liu et al., 2017) degrades model generalization, _i.e._, a model trained on a specific data domain cannot generalize well to datasets from other domains. Similarly, there is domain bias between different models due to various architectures and randomness during training. Recently, several studies (Liu et al., 2017; Wang et al., 2018) have addressed the _domain bias_ issue by training the models using data from different domains to reduce the risk of overfitting to the source domain and improve the model generalization ability. This analogy between adversarial transferability and model generalization inspires us to utilize data from different domains to improve adversarial transferability. In practice, it is usually expensive to obtain data from different domains, let alone images from different domains with the same semantic label. Thanks to the success of DNNs, style transfer has made great progress: a style transfer network can alter the distribution of low-level visual features in an image whilst preserving semantic contents for humans (Liu et al., 2017; Wang et al., 2018). 
Recently, image style transfer acted as an effective data augmentation technique to boost the generalization across different domains (Wang et al., 2018; Zhang et al., 2018). This inspires us to transform the data using a style transfer network and propose a new Style Transfer Method (STM) to boost adversarial transferability. Specifically, we introduce a new arbitrary style transfer network to transfer the style of a clean input image, which introduces data from different domains for augmentation. Since the stylized images may mislead the surrogate model, this can result in imprecise gradients during the iterative optimization process. To address this issue, we first fine-tune our style transfer network so that the generated images can be correctly classified by multiple models. We also mix up the original image with its style-transformed images to further avoid such imprecise gradients. For a more stable gradient update, we adopt the average gradient of multiple transformed images with random noise to update the perturbation. We illustrate some transformed images by various input transformation-based attacks in Figure 1. As we can see, there is no significant visual difference between the clean images and transformed images by DIM (Wang et al., 2018), TIM (Wang et al., 2018), SIM (Lin et al., 2017) and Admix (Admix et al., 2018). As \(S^{2}\)IM (Liu et al., 2017) transforms the image in the frequency domain, it significantly changes the images and introduces semantic shift. Conversely, STM changes the style but maintains the semantic content, which changes the low-level statistical features (_e.g._, texture, contrast) of the clean image and makes the transformed image deviate from the source domain. It significantly enhances the diversity of the input images for gradient calculation and results in better transferability. We summarize our contributions as follows: * We find that existing input transformation-based methods mainly adopt the transformed data in the same domain, which might limit the adversarial transferability. To address this limitation, we propose a novel attack method, which introduces data from different domains to enhance the adversarial transferability. * We propose a new input transformation by mixing up the input images with the transformed image generated by our fine-tuned style transfer network and adding random noise for diverse images from various domains. * Empirical evaluations show that our STM can significantly boost transferability on either normally or adversarially trained models. In particular, STM outperforms state-of-the-art methods by a margin of 7.45% on average for adversarially trained models. ## 2. Related Work This section provides a brief overview of the adversarial attacks and the improved domain generalization with style transfer networks. ### Adversarial Attacks After Szegedy _et al._(Szegedy et al., 2017) identified adversarial examples, many attacks have been developed to generate adversarial examples. Goodfellow _et al._(Gool et al., 2016) proposed the Fast Gradient Sign Method (FGSM) to generate adversarial examples with one step of gradient update. Kurakin _et al._(Kurakin et al., 2017) further extend FGSM to an iterative version with a smaller step size \(\alpha\), denoted as I-FGSM, which exhibits superior attack success rates in the white-box setting. On the other hand, black-box attacks are more practical since they only access limited or no information about the target model. 
Query-based attacks (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018) often take hundreds or even thousands of queries to generate adversarial examples, making it inefficient. By contrast, transfer-based attacks (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) generate the adversarial on the surrogate model without accessing the target model, leading to great practical applicability and attracting increasing attention. Unfortunately, although I-FGSM has exhibited great effectiveness in the white-box setting, it has low transferability when attacking black-box models. To boost the adversarial transferability, Liu _et al._(Liu et al., 2017) proposed an ensemble-model attack that attacks multiple models simultaneously. Dong _et al._(Dong et al., 2016) integrated momentum into I-FGSM (called MI-FGSM) to stabilize the update direction. Lin _et al._(Lin et al., 2017) adopted Nesterov's accelerated gradient to further enhance the transferability. Wang _et al._(Wang et al., 2018) considered the gradient variance of the previous iteration to tune the current gradient. Wang _et al._(Wang et al., 2018) proposed enhanced momentum by accumulating the gradient of several data points in the direction of the previous gradient for better transferability. Figure 1. Illustration of two transformed images by various input transformation-based attacks. Our STM generates some images in different domains by using our style augmentation module, leading to different styles from clean images but maintaining the semantic contents compared with existing transformation-based attacks. Inspired by the data augmentation strategies (Srivastava et al., 2017; Wang et al., 2018), various input transformation methods have been proposed to effectively boost adversarial transferability. Xie _et al._(Xie et al., 2019) proposed to adopt diverse input patterns by randomly resizing and padding to generate transferable adversarial examples. Dong _et al._(Dong et al., 2019) used a set of translated images to optimize the adversarial perturbations, and approximated such process by convolving the gradient at untranslated images with a kernel matrix for high efficiency. Lin _et al._(Lin et al., 2019) leveraged the scale-invariant property of DNNs and thus averaged the gradients _w.r.t._ different scaled images to update adversarial examples. Wang _et al._(Wang et al., 2019) mixed up a set of images randomly sampled from other categories while maintaining the original label of the input to craft more transferable adversaries. Long _et al._(Long et al., 2019) proposed a novel spectrum simulation attack to craft more transferable adversarial examples by transforming the input image in the frequency domain. ### Domain Generalization with Style Transfer The success of DNNs heavily relies on the _i.i.d._ assumption, _i.e._, training and testing datasets are drawn from the same distribution. When such an assumption is violated, DNNs usually suffer from severe performance degradation (Zhou et al., 2019). A typical solution to _domain bias_ is transfer learning, in which a network is pre-trained on a related task with a large dataset and then fine-tuned on a new dataset (Zhou et al., 2019; Wang et al., 2019). However, transfer learning needs to reuse the same architecture as that of the pre-trained network and careful application of layer freezing to prevent the prior knowledge from being forgotten during fine-tuning. 
Domain adaptation is another way to address domain bias, which encompasses a variety of techniques for adapting a model post-training to improve its accuracy on a specific target domain. It is often implemented by minimizing the distance between the source and the target feature domains in some manner (Wang et al., 2019; Wang et al., 2019). Though domain adaptation is usually effective, its functionality is limited since it can only help a model generalize to a specific target domain. Domain generalization (Wang et al., 2019; Wang et al., 2019) aims to address the issue by learning data from diverse domains to boost the model generalization on other unseen domains. Recently, various style augmentation methods have been proposed to explore domain generalization. Wang _et al._(Wang et al., 2019) proposed a style-complement module that generates stylized images to enhance the generalization on unseen target domains. Jackson _et al._(Jackson et al., 2017) proposed data augmentation-based random style transfer to improve the robustness of the convolutional neural networks, which can also enhance the robustness to domain shift. Inspired by the above works, we postulate that introducing data from different domains to craft adversarial examples can also improve adversarial transferability. ## 3. Methodology In this section, we first provide details of several input transformation-based attacks. Then we explain our motivation and propose a new attack method named Style Transfer Method (STM) to generate adversarial examples. Finally, we will discuss the differences with other input transformation-based methods. ### Preliminaries **Notation**. Given a classifier \(f(\mathbf{x};\theta)\) with parameters \(\theta\), and a benign input image \(\mathbf{x}\) with ground-truth label \(y\). Let \(J(\mathbf{x},y;\theta)\) be the loss function (_e.g._, the cross-entropy loss). Specifically, there are two categories of adversarial attacks, _i.e._, non-targeted and targeted attacks. Non-targeted attack searches an adversarial example \(\mathbf{x}^{adv}\) that satisfies \(\|\mathbf{x}-\mathbf{x}^{adv}\|_{p}\leq\epsilon\) but misleads the classifier \((f(\mathbf{x}^{adv};\theta)\neq y)\). Targeted attack fools the classifier into outputting a specific label \((f(\mathbf{x}^{adv};\theta)=y^{*})\). Here \(\epsilon\) is the maximum magnitude of perturbation, \(y^{*}\) is the target label, and \(p\) could be \(0,2\), or \(\infty\). To align with other works, we set \(p=\infty\) in this work. Input transformation-based attacks are typical methods for boosting the adversarial transferability, _e.g._, Diverse Input Method (DIM) (Xie et al., 2019), Translation-Invariant Method (TIM) (Dong et al., 2019) and Scale-Invariant Method (SIM) (Lin et al., 2019). Besides these methods, several more powerful methods such as Admix (Wang et al., 2019) and S\({}^{2}\)IM (Long et al., 2019) were proposed, and we will introduce these two methods in this subsection in detail. **Admix Attack Method.** Admix (Wang et al., 2019) proposes to mix a set of images randomly sampled from other categories while retaining the original label of the input to craft more transferable adversaries. 
This method can be integrated with the SIM method, and its average gradient can be expressed as follows: \[\bar{\mathbf{g}}_{t}=\frac{1}{m_{1}\cdot m_{2}}\sum_{\mathbf{x}^{\prime}\in\mathbf{X}^{\prime}}\sum_{i=0}^{m_{1}-1}\nabla_{\mathbf{x}^{adv}_{t}}J\left(\left(\mathbf{x}^{adv}_{t}+\eta\cdot\mathbf{x}^{\prime}\right)/2^{i},y;\theta\right), \tag{1}\] where \(\eta\) controls the strength of the admixed images, \(m_{1}\) is the number of admixed images for each \(\mathbf{x}^{\prime}\), and \(\mathbf{X}^{\prime}\) denotes the set of \(m_{2}\) randomly sampled images from other categories. **S\({}^{2}\)IM Attack Method.** \(S^{2}\)IM (Long et al., 2019) applies a spectrum transformation to the input data, thus performing model augmentation in the frequency domain; the transformation is based on the discrete cosine transform (\(\mathcal{D}\)) and inverse discrete cosine transform (\(\mathcal{D}_{\mathcal{I}}\)) to diversify the input images: \[\bar{\mathbf{g}}_{t}=\nabla_{\mathbf{x}^{adv}_{t}}J(\mathcal{T}(\mathbf{x}^{adv}_{t}),y;\theta), \tag{2}\] where \(\mathcal{T}(\mathbf{x})=\mathcal{D}_{\mathcal{I}}(\mathcal{D}(\mathbf{x}+\xi)\odot\mathbf{M})\), \(\odot\) is the Hadamard product, and \(\xi\sim\mathcal{N}(0,\sigma^{2})\) and each element of \(\mathbf{M}\sim\mathcal{U}(1-\rho,1+\rho)\) are random variables sampled from the Gaussian and uniform distributions, respectively. Generally, the above input transformation-based methods are integrated into MI-FGSM (Dong et al., 2019): \[\mathbf{g}_{t+1}=\mu\cdot\mathbf{g}_{t}+\frac{\bar{\mathbf{g}}_{t}}{\|\bar{\mathbf{g}}_{t}\|_{1}}, \tag{3}\] where \(\mathbf{g}_{0}=0\), and the adversarial examples are updated by \(\mathbf{x}^{adv}_{t+1}=\mathbf{x}^{adv}_{t}+\alpha\cdot\text{sign}(\mathbf{g}_{t+1})\) with step size \(\alpha\). S\({}^{2}\)IM performs data augmentation in the frequency domain, which inspires us to explore data from different domains to improve the attack transferability. Admix mixes in images from other categories to obtain diverse inputs, and also motivates us to retain the semantic labels of stylized images by mixing up the original image content. The differences between our method and these two methods are discussed in Section 3.4. ### Motivation Lin _et al._(Lin et al., 2019) analogized the generation of adversarial examples with standard neural network training and considered that the transferability of adversarial examples is related to the generalization of normally trained models. Therefore, existing methods to improve attack transferability mainly work from the perspective of optimization [(8; 23; 42)] or data augmentation [(23; 26; 44)]. In this paper, we notice that the _domain bias_ issue also affects the generalization ability of normally trained models. For example, a model trained on a specific data domain cannot generalize well to datasets from other domains. Besides, even the same network architecture trained on two datasets from different domains will result in two different sets of model parameters, which causes a bias between models. In the black-box setting, this domain bias issue also exists due to the structural differences between the black-box models and the source model, which limits the transferability of the adversarial examples. In the domain generalization field, several studies [(4; 58)] address the _domain bias_ issue by training models on data from different domains to improve the model generalization ability. 
In contrast, we find that previous input transformation-based methods mainly apply data augmentation in the same source domain, which might limit the adversarial transferability. This inspires us to utilize data from different domains to improve adversarial transferability. However, data from different domains are often expensive and difficult to obtain. Thanks to the development of style transfer models and of domain generalization with style transfer studies [(10; 16; 17; 36; 41)], we can quickly obtain a large amount of data from different domains. Based on the above analysis, we propose a new attack method named Style Transfer Method (STM) to boost adversarial transferability, which transforms the data into different domains by using an arbitrary style transfer network. It is worth noting that we do not directly use the stylized images for the attack; rather, we mainly use the gradient information of the stylized images during the iterations to obtain more effective adversarial perturbations. ### Style Transfer Method The overall framework of our Style Transfer Method (STM) is shown in Figure 2, which we will cover in detail below. Figure 2. The overall framework of our proposed style transfer attack method. (a): We fine-tune the pre-trained style transfer network such that the generated stylized images keep the original semantic labels as much as possible for the classification networks. (b): Combine the original image with the generated stylized images by the mixing ratio \(\gamma\) to retain semantic consistency, and add random noise for image augmentation. (c): Add the generated perturbations to the original image to generate the adversarial example. **Arbitrary style transfer.** An important component of our method is the style transfer network \(ST(\cdot)\), which can replace the style of an input image \(\mathbf{x}\) with that of an arbitrary style image \(\mathbf{s}\). To apply a specific style, the network must observe the chosen style image. This is accomplished through a style predictor network \(P(\cdot)\), which maps a style image \(\mathbf{s}\) to a style embedding \(\mathbf{z}=P(\mathbf{s})\), where \(\mathbf{z}\) is a feature vector of \(\mathbf{s}\). The style embedding influences the action of the style transfer network via conditional instance normalization [(10)], and the style transfer network \(ST(\mathbf{x},P(\mathbf{s}))\), a typical encoder/decoder architecture, generates the corresponding stylized image from the normalization parameters of each style. In general, the style predictor network predicts the feature maps of a style image \(\mathbf{s}\) and embeds them into the style transfer network to generate a specific style of image. Jackson _et al._[(17)] suggested that rather than providing randomly selected style images through a style predictor to generate random style embeddings, it would be more computationally efficient to simulate this process by sampling them directly from a probability distribution, which was trained on about 79,433 artistic images. From this strategy, the arbitrary style images can be obtained as follows: \[\mathbf{x}_{s}=ST(\mathbf{x},\mathbf{z}),\ \ \mathbf{z}=(1-v)\cdot\mathcal{N}(\mu_{\text{s}},\Sigma)+v\cdot P(\mathbf{x}), \tag{4}\] where \(v\) is the style embedding interpolation parameter, and \(\mu_{\text{s}}\) and \(\Sigma\) are the empirical mean and covariance matrix of the style image embeddings \(P(\mathbf{s})\). 
Here, \(\mu_{\text{s}}=\mathbb{E}_{\mathbf{s}}[P(\mathbf{s})]\) and \(\Sigma_{i,j}=\mathrm{Cov}[P(\mathbf{s})_{i},P(\mathbf{s})_{j}]\). In this work, we simplify the process of image style transfer to adapt it to the process of generating adversarial examples. Specifically, the style predictor network is not necessary for us, since our inputs are clean images; to obtain an arbitrary style image, we replace the style embedding vector \(\mathbf{z}\) with a vector drawn from a standard normal distribution. Thus, the arbitrary style images of the adversarial examples at the \(t\)-th iteration can be obtained as: \[\mathbf{x}_{s}=ST(\mathbf{x}_{t}^{adv},\mathbf{z}),\ \ \mathbf{z}\sim\mathcal{N}(0,1). \tag{5}\] This is feasible since a standard normal distribution can be transformed by a series of affine transformations to obtain an arbitrary style embedding vector (Golovolov et al., 2012; Golovolov et al., 2012). **Preserving semantic consistency of stylized images.** Although style transfer networks can generate stylized images while preserving semantic content for humans, they change low-level features of the original images (_e.g._, texture, color, and contrast), which might change the semantic labels of the stylized images. Since the stylized images may mislead the surrogate model, this will lead to imprecise gradient information during the iterations and affect the success rate of adversarial attacks. To address this issue, as shown in Figure 2 (a), we first fine-tune our style transfer network by using a classifier that integrates several classification models. Specifically, we expect that the images generated by the style transfer network can still be correctly classified by this classifier. We define the fine-tuning loss function as follows: \[L(\mathbf{x})=\sum_{k=1}^{K}w_{k}J_{k}(ST(\mathbf{x},\mathbf{z}),y;\theta), \tag{6}\] where \(J_{k}(\cdot)\) is the cross-entropy loss of the \(k\)-th model, and the ensemble weights satisfy \(\sum_{k=1}^{K}w_{k}=1\). For each input image, we keep the parameters of the classification models fixed and use gradient descent on \(L(\mathbf{x})\) to update the parameters of the style transfer network \(ST(\cdot)\) during fine-tuning. In detail, we integrate the Inception-v3, Inception-v4 (Srivastava et al., 2015), ResNet-101, and ResNet-152 (He et al., 2016) classification models and average the cross-entropy losses of these models. We randomly select \(1,000\) images from the ImageNet-1000 (Krizhevsky et al., 2014) validation dataset as our fine-tuning dataset, each of which can be correctly classified by these networks. Then we fine-tune the style augmentation module on them using the Adam optimizer for 30 epochs with a learning rate of \(1\times 10^{-4}\). Although fine-tuning the style transfer network can partially recover the original semantic labels of the generated data, the recognition accuracy of the classification networks on data from different domains is still limited due to the influence of _domain bias_. Inconsistent semantic labels may produce imprecise gradient directions during gradient calculation, thus limiting the transferability of the adversarial examples. To address this limitation, we mix up the generated stylized images with the original image, as shown in Figure 2 (b), which can be expressed as follows: \[\tilde{\mathbf{x}}=\gamma\cdot\mathbf{x}+(1-\gamma)\cdot\mathbf{x}_{s}, \tag{7}\] where \(\gamma\) is a mixing ratio, and \(\gamma\in[0,1]\). 
Mixing up the original image with its style-transformed images allows the augmented image to introduce features from different domains while preserving the original semantic labels, avoiding imprecise gradient information during the iterative optimization process. Lastly, we add stochastic noise \(\mathbf{r}\) to the mixed images to obtain diverse images from different domains and enhance the transferability of the adversarial examples, where \(\mathbf{r}\sim\mathcal{U}[-(\beta\cdot\epsilon)^{d},(\beta\cdot\epsilon)^{d}]\) and \(\beta\) is a given parameter. Based on the above analysis, we propose a novel style transfer-based attack method to improve attack transferability, and the proposed algorithm is outlined in Algorithm 1 (a minimal code sketch of this procedure is given below, just before the experimental settings).

```
Input: A clean image \(\mathbf{x}\) with ground-truth label \(y\), a surrogate classifier with parameters \(\theta\), and the loss function \(J\).
Parameters: the magnitude of perturbation \(\epsilon\); maximum iterations \(T\); decay factor \(\mu\); the upper bound \(\beta\) of the neighborhood for \(\mathbf{r}\); the mixing ratio \(\gamma\); the number of randomly generated examples \(N\).
Output: An adversarial example \(\mathbf{x}^{adv}\).
 1: \(\alpha=\epsilon/T\);
 2: \(\mathbf{g}_{0}=0\), \(\mathbf{x}_{0}^{adv}=\mathbf{x}\);
 3: for \(t=0,1,\cdots,T-1\) do
 4:   for \(i=0,1,\cdots,N-1\) do
 5:     Obtain a random stylized image \(\mathbf{x}_{s}=ST(\mathbf{x}_{t}^{adv},\mathbf{z})\);
 6:     Mix with the original image: \(\tilde{\mathbf{x}}=\gamma\cdot\mathbf{x}+(1-\gamma)\cdot\mathbf{x}_{s}+\mathbf{r}\);
 7:     Calculate the gradient \(\tilde{\mathbf{g}}_{i}=\nabla_{\tilde{\mathbf{x}}}J(\tilde{\mathbf{x}},y;\theta)\);
 8:   end for
 9:   Get the average gradient \(\tilde{\mathbf{g}}=\frac{1}{N}\sum_{i=1}^{N}\tilde{\mathbf{g}}_{i}\);
10:   \(\mathbf{g}_{t+1}=\mu\cdot\mathbf{g}_{t}+\frac{\tilde{\mathbf{g}}}{\|\tilde{\mathbf{g}}\|_{1}}\);
11:   Update \(\mathbf{x}_{t+1}^{adv}=\text{Clip}_{\mathbf{x}}^{\epsilon}\{\mathbf{x}_{t}^{adv}+\alpha\cdot\text{sign}(\mathbf{g}_{t+1})\}\);
12: end for
13: return \(\mathbf{x}^{adv}=\mathbf{x}_{T}^{adv}\).
```
**Algorithm 1** Style Transfer attack Method (STM) ### Differences with Other Methods * As shown in Figure 1, compared with DIM, TIM, SIM, Admix, and S\({}^{2}\)IM, our STM introduces generated data from different domains to enhance the transferability of adversarial examples, while these existing methods mainly adopt data in the same domain for augmentation. * Our STM preserves some original information by mixing in the content of the original images, while the Admix method obtains image diversity by mixing in images from different categories. * S\({}^{2}\)IM transforms images from the spatial domain into the frequency domain for augmentation, while STM introduces images from different domains based on statistical differences in the low-level features of the dataset. ## 4. Experiments In this section, we conduct extensive experiments on the ImageNet-compatible dataset. We first provide the experimental setup. Then we compare the results of the proposed method with existing methods on both normally trained models and adversarially trained models. Finally, we conduct ablation studies to examine the effectiveness of the key components of our STM. All experiments were run multiple times and averaged to ensure the results are reliable. 
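To connect Algorithm 1 to an implementation, the following is a minimal PyTorch-style sketch of the attack loop, not the authors' released code (see the linked repository for that). The `style_net(x, z)` interface, the 100-dimensional style embedding, and the \([0,1]\) pixel range with \(\epsilon=16/255\) are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def stm_attack(x, y, model, style_net, eps=16/255, T=10, mu=1.0,
               gamma=0.5, beta=2.0, N=20):
    """Illustrative sketch of the STM loop in Algorithm 1 (not the released code).

    Assumptions: `model` maps images to logits, `style_net(images, z)` stands in
    for the fine-tuned arbitrary style transfer network ST(., z), and images
    live in [0, 1].
    """
    alpha = eps / T                                    # step size (Alg. 1, line 1)
    g = torch.zeros_like(x)                            # momentum accumulator (line 2)
    x_adv = x.clone().detach()

    for _ in range(T):
        grad_sum = torch.zeros_like(x)
        for _ in range(N):
            z = torch.randn(x.size(0), 100, device=x.device)        # random style embedding (assumed dim)
            x_s = style_net(x_adv, z).detach()                       # stylized copy (line 5)
            r = torch.empty_like(x).uniform_(-beta * eps, beta * eps)
            x_mix = (gamma * x + (1.0 - gamma) * x_s + r).requires_grad_(True)  # line 6
            loss = F.cross_entropy(model(x_mix), y)
            grad_sum += torch.autograd.grad(loss, x_mix)[0]          # line 7
        g_bar = grad_sum / N                                         # line 9
        l1 = g_bar.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12    # per-sample L1 norm
        g = mu * g + g_bar / l1                                      # line 10
        x_adv = x_adv + alpha * g.sign()                             # line 11
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0).detach()
    return x_adv
```

In practice, `model` could be any differentiable surrogate classifier (for example, a pretrained Inception-v3 with its input normalization folded in), and `style_net` would be the fine-tuned arbitrary style transfer network described in Section 3.3.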
### Experimental Settings **Dataset.** We adopt the ImageNet-compatible dataset for our experiments, which is widely used in other works (Dosovosov et al., 2017; Zhang et al., 2018). It contains 1,000 images with size of \(299\times 299\times 3\), ground-truth labels, and target labels for targeted attacks. **Models.** To validate the effectiveness of our methods, we test the attack performance on several popular pre-trained models, _i.e._, Inception-v3 (Inc-v3) (Zhou et al., 2017), ResNet-50 (Res-50), ResNet-152 (Res-152), Resnet-v2-101 (Res-101) (He et al., 2017), Inception-v4 (Inc-v4) and Inception-Resnet-v2 (IncRes-v2) (Zhou et al., 2017). We also consider adversarially trained models _i.e._, Inc-v3\({}_{ens3}\), Inc-v3\({}_{ens4}\) and IncRes-v2\({}_{ens}\)(Zhou et al., 2017). **Baselines.** We take five popular input transformation-based state-of-the-art attacks as our baselines, _i.e._, DIM (Dosov et al., 2017), TIM (Zhou et al., 2017), SIM (Zhou et al., 2017), Admix (Zhou et al., 2017), S\({}^{2}\)IM (Zhang et al., 2018). All these methods are integrated into MI-FGSM (Dosov et al., 2017). **Hyper-parameters.** In this work, we set the maximum perturbation \(\epsilon=16\), the number of iterations \(T=10\), step size \(\alpha=1.6\) and decay factor \(\mu=1.0\). We set the transformation probability \(p=0.5\) in DIM and the kernel length \(k=7\) in TIM. For SIM and Admix, we use the number of copies \(m_{1}=5\), the number of mixed samples \(m_{2}=3\), and the admix ratio \(\eta=0.2\). For S\({}^{2}\)IM, we adopt the tuning factor \(\rho=0.5\), the standard deviation \(\sigma=16\) of \(\bar{\xi}\), and the number of spectrum transformations \(N=20\). For our proposed STM, we set the mixing ratio \(\gamma=0.5\), the noise upper bound \(\beta=2.0\), and the number of style transfer images \(N=20\). ### Attack a Single Model To validate the effectiveness of our STM, we first compare STM with various input transformation-based attacks, including DIM, TIM, SIM, Admix, and S\({}^{2}\)IM. All these methods are integrated with the MI-FGSM (Dosov et al., 2017). The adversarial examples are generated on Inc-v3, Inc-v4, IncRes-v2, and Res-101, respectively. We report the attack success rates, _i.e._, the misclassification rates on the crafted adversarial examples in Table 1. We observe that STM can effectively improve the attack success rate on black-box models. For example, DIM, Admix, and S\({}^{2}\)IM achieve the attack success rates of 72.4%, 78.6% and 87.5%, respectively on Inc-v4 when generating adversarial examples on Inc-v3. In contrast, STM can achieve the attack success rate of 90.8%, which \begin{table} \begin{tabular}{|c|c|c c c c c c c|c|} \hline Model & Attack & Inc-v3 & Inc-v4 & IncRes-v2 & Res-101 & Inc-v3\({}_{ens3}\) & Inc-v3\({}_{ens4}\) & IncRes-v2\({}_{ens}\) & Avg. 
\\ \hline \hline \multirow{8}{*}{Inc-v3} & DIM & 99.7\({}^{*}\) & 72.4 & 66.7 & 62.8 & 32.0 & 30.7 & 16.6 & 54.41 \\ & TIM & **100.0\({}^{*}\)** & 51.0 & 46.8 & 47.8 & 30.0 & 30.9 & 21.5 & 46.85 \\ & SIM & **100.0\({}^{*}\)** & 69.7 & 68.2 & 63.8 & 37.8 & 37.9 & 21.8 & 57.03 \\ & Admix & **100.0\({}^{*}\)** & 78.6 & 75.1 & 69.5 & 40.9 & 41.7 & 23.1 & 61.27 \\ & S\({}^{2}\)IM & 99.7 & 87.5 & 86.7 & 77.7 & 58.2 & 56.2 & 34.9 & 71.55 \\ & **STM** & **99.9\({}^{*}\)** & **90.8** & **90.1** & **82.4** & **68.3** & **68.1** & **46.3** & **77.98** \\ \hline \multirow{8}{*}{Inc-v4} & DIM & 75.0 & 99.2\({}^{*}\) & 68.8 & 71.7 & 29.0 & 26.2 & 16.6 & 55.21 \\ & TIM & 58.7 & 99.8\({}^{*}\) & 47.7 & 58.9 & 28.0 & 28.2 & 20.8 & 48.87 \\ & SIM & 82.4 & **99.9\({}^{*}\)** & 74.0 & 80.7 & 46.6 & 44.2 & 30.8 & 65.51 \\ & Admix & 85.7 & **99.9\({}^{*}\)** & 76.5 & 81.6 & 47.7 & 44.8 & 29.3 & 66.50 \\ & S\({}^{2}\)IM & **90.4** & 99.6\({}^{*}\) & 86.3 & 85.9 & 58.5 & 55.4 & 37.2 & 73.33 \\ & **STM** & 90.0 & 99.0\({}^{*}\) & **86.4** & **86.1** & **61.0** & **59.2** & **41.2** & **74.70** \\ \hline \multirow{8}{*}{IncRes-v2} & DIM & 72.3 & 70.7 & 97.3\({}^{*}\) & 72.2 & 32.5 & 30.2 & 20.9 & 56.59 \\ & TIM & 62.9 & 57.2 & 98.9\({}^{*}\) & 63.3 & 32.9 & 31.8 & 26.4 & 53.34 \\ & SIM & 85.8 & 81.8 & **99.4\({}^{*}\)** & 82.3 & 61.1 & 54.4 & 46.6 & 73.06 \\ & Admix & 85.9 & 82.0 & 99.3\({}^{*}\) & 82.1 & 61.6 & 52.5 & 45.6 & 72.71 \\ & S\({}^{2}\)IM & 90.1 & 88.6 & 98.1\({}^{*}\) & 85.8 & 67.7 & 63.3 & 55.9 & 78.50 \\ & **STM** & **91.8** & **91.3** & 98.5\({}^{*}\) & **87.6** & **76.3** & **71.5** & **64.5** & **83.07** \\ \hline \multirow{8}{*}{Res-101} & DIM & 78.1 & 75.8 & 67.2 & 99.8\({}^{*}\) & 30.8 & 28.6 & 17.8 & 56.90 \\ & TIM & 61.3 & 53.6 & 44.1 & 99.5\({}^{*}\) & 30.3 & 31.9 & 22.5 & 49.10 \\ \cline{1-1} & SIM & 70.9 & 60.1 & 53.8 & 99.7\({}^{*}\) & 26.7 & 26.2 & 15.9 & 50.51 \\ \cline{1-1} & Admix & 72.3 & 65.1 & 58.1 & 99.6\({}^{*}\) & 26.4 & 27.1 & 16.3 & 52.19 \\ \cline{1-1} & S\({}^{2}\)IM & 88.4 & **85.7** & 80.2 & 99.7\({}^{*}\) & 49.6 & 46.0 & 33.5 & 69.06 \\ \cline{1-1} \cline{2-11} & **STM** & **89.3** & 85.5 & **80.8** & **99.9\({}^{*}\)** & **56.5** & **56.1** & **36.9** & **72.16** \\ \hline \end{tabular} \end{table} Table 1. Untargeted attack success rates (%) of various input transformation-based attacks in the single model setting. The adversarial examples are crafted on Inc-v3, Inc-v4, IncRes-v2, and Res-101 by DIM, TIM, SIM, Admix, S\({}^{2}\)IM, and our STM attack methods, respectively. \({}^{*}\) indicates the white-box model. outperforms S\({}^{2}\)IM by a margin of 3.3%. On the adversarially trained models, STM consistently exhibits better performance than other input transformation-based methods and improves the average attack success rate by at least 7.45% than other methods. This confirms our motivation that introducing data from different distribution domains for augmentation can enhance the transferability of the adversarial attack, especially for adversarially trained models. ### Attack an Ensemble of Models Liu _et al._ (Liu et al., 2017) have shown that attacking multiple models simultaneously can improve the transferability of the generated adversarial examples. To further demonstrate the efficacy of our proposed STM, we used the ensemble model attack in (Liu et al., 2017), which fuses the logit outputs of various models. The adversaries are generated by integrating three normally trained models, including Inc-v3, Inc-v4, and IncRes-v2. 
All the ensemble models are assigned equal weights and we test the performance of transferability on two normally trained models and three adversarially trained models. As shown in Table 2, our proposed STM always achieves the highest attack success rates in the black-box setting. Compared with previous input transformation-based attack methods, STM achieves an average success rate of 90.3% on five black-box models, which outperforms S\({}^{2}\)IM by an average of 4.92%. We also notice that our method has a success rate of over 80% for attacks on all the adversarially training models. This also validates that our method combined with the ensemble model can obtain adversarial examples with higher transferability. ### Combined with Input Transformation-based Attacks Existing input transformation-based attacks have shown great compatibility with each other. Similarly, our method can also be combined with other input transformation-based methods to improve the transferability of adversarial examples. To further demonstrate the efficacy of the proposed STM, we compare the attack success rates of S\({}^{2}\)IM (known as the best method) and our STM when combined by DIM, TIM, and SIM, respectively. We generate adversarial examples on the Inc-v3 model and test the transferability of adversarial examples on six black-box models. As shown in Table 3, under the same setting, STM performs much better than S\({}^{2}\)IM when combined with various input transformation-based attacks. On average, STM outperforms S\({}^{2}\)IM by a clear margin of 3.24%, 2.6% and 3.87% when combined with DIM, TIM and SIM, respectively. Especially, our STM tends to achieve much better results on the ensemble adversarially trained models, which have shown great effectiveness in blocking the transferable adversarial examples. Such consistent and remarkable improvement supports its high compatibility with existing input transformation-based attacks and further validates its superiority in boosting adversarial transferability. ### Attack Defense Models In this subsection, besides normally trained models and adversarially trained models, we further validate the effectiveness of our methods on other defenses, including Bit-Red (Zhu et al., 2017), ComDefend (Liu et al., 2017), JPEG (Liu et al., 2017), HGD (Zhu et al., 2017), R&P (Zhu et al., 2017), NIPS-r3 (Zhu et al., 2017), RS (Dong et al., 2018) and NPR (Zhu et al., 2017). The adversarial examples are generated in the same setting as in Section 4.3. More specifically, adversarial examples are generated on an ensemble of Inc-v3, Inc-v4, and IncRes-v2, and the weight for each model is 1/3. The experimental results are shown in Table 4. In the setting of ensemble models, we can observe that our algorithm can significantly boost existing attacks. For example, Admix and S\({}^{2}\)IM only attain an average success rate of 66.9% and 77.5% on the six defense models, respectively, while our STM can achieve an average rate of 84.7%, which is 17.8% and 7.2% higher than them, respectively. This demonstrates the remarkable effectiveness of our proposed method against both adversarially trained models and other defense models and brings a greater threat to advanced defense models. ### Ablation Studies In this subsection, we conduct a series of ablation experiments to study the impact of fine-tuning the style transfer network and mixing the original image. To simplify the analysis, we only consider the transferability of adversarial examples crafted on Inc-v3 by the STM method. 
Since the stylized images may mislead the surrogate model, this will lead to imprecise gradient information during iterations and affect the success rate of adversarial attacks. To address this issue, we first fine-tune the style transfer network so that the generated images can be correctly classified by multiple models. We also mix up the original image with its style-transformed images to further avoid such imprecise gradients. a) We first test the classification accuracy of the stylized images by adding these two strategies to the classification network and evaluate the performance of the black-box attacks with these strategies. As shown in Table 5, when we transform the original images into stylized images, the prediction accuracy of stylized images decreases. This indicates that stylized images might not maintain semantic consistency for classification networks. The results also show that fine-tuning the model and mixing up the original image content can effectively maintain the semantic labels of the stylized images. We can further observe that mixing up the content of the original image can significantly maintain semantic consistency, which avoids generating imprecise gradient information during the iterative optimization process. b) As shown in Table 6, we find that all of these strategies are beneficial to improve the transferability of adversarial examples, especially for mixing up the original image content. And fine-tuning the style transfer network can significantly improve the transferability, \begin{table} \begin{tabular}{c|c c c c c|c} \hline Attack & Res-101 & Res-152 & Inc-v3 & Inc-v3 & \(\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}\text{\small{}}
## 5. Conclusion Inspired by the fact that the _domain bias_ issue affects the generalization ability of normally trained models, we postulate that it might also impact the transferability of adversarial examples. However, we find that existing input transformation-based methods mainly adopt transformed data from the same source domain, which might limit the adversarial transferability. Based on this finding, we propose a novel attack method named Style Transfer Method (STM), which transforms the data into different domains using an arbitrary style transfer network to enhance the adversarial transferability. To maintain semantic consistency and avoid stylized images misleading the surrogate model and producing imprecise gradients during the iterative process, we fine-tune the style transfer network and mix up the content of the original image with its style-transformed images. Our method can be well integrated with existing input transformation-based methods to further improve adversarial transferability. 
Empirical results on the ImageNet-compatible dataset demonstrated that our STM can achieve higher attack success rates on both normally trained and adversarially trained models, with better performance on both untargeted and targeted attacks than state-of-the-art input transformation-based attacks. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Nos. 62072334, 61976164, and 6227071567), and the National Science Basic Research Plan in Shaanxi Province of China (No. 2022GY-061). \begin{table} \begin{tabular}{|c c c|c c c c c c|} \hline Style transfer & Fine-tuning & Mixing up & Inc-v4 & IncRes-v2 & Res-101 & Inc-v3\({}_{ens3}\) & Inc-v3\({}_{ens4}\) & IncRes-v2\({}_{ens}\) \\ \hline \hline \(X\) & \(X\) & \(X\) & 49.7 & 47.1 & 61.9 & 22.3 & 23.4 & 10.9 \\ \hline _✓_ & \(X\) & \(X\) & 57.8 & 55.3 & 64.7 & 36.5 & 34.4 & 18.8 \\ _✓_ & _✓_ & \(X\) & 71.9 & 70.3 & 72.3 & 46.4 & 45.3 & 26.0 \\ _✓_ & _✓_ & _✓_ & **90.8** & **90.1** & **82.4** & **68.3** & **68.1** & **46.3** \\ \hline \end{tabular} \end{table} Table 6. Untargeted attack success rates (%) on black-box models when applying different strategies, such as image style transformation, fine-tuning the model, and mixing up the original image content. Each strategy adds random noise by default. The adversarial examples are crafted on Inc-v3. \begin{table} \begin{tabular}{|c|c c c c c c c|c|} \hline Attack & Inc-v3 & Inc-v4 & IncRes-v2 & Res-101 & Inc-v3\({}_{ens3}\) & Inc-v3\({}_{ens4}\) & IncRes-v2\({}_{ens}\) & Avg. \\ \hline \hline \(\text{S}^{2}\text{I}\)-DIM & 99.3\({}^{*}\) & 92.9 & 91.5 & 91.2 & 69.5 & 67.7 & 47.8 & 79.99 \\ \hline **STM-DIM** & **99.9\({}^{*}\)** & **93.9** & **93.2** & **92.3** & **75.1** & **75.2** & **53.0** & **83.23** \\ \hline \(\text{S}^{2}\text{I}\)-TIM & 99.3\({}^{*}\) & 88.7 & **87.5** & **82.8** & 74.5 & 74.2 & 59.7 & 80.96 \\ \hline **STM-TIM** & **99.9\({}^{*}\)** & **89.4** & 86.1 & **82.8** & **80.9** & **79.7** & **66.1** & **83.56** \\ \hline \(\text{S}^{2}\text{I}\)-SIM & **99.8\({}^{*}\)** & 91.3 & 90.9 & 91.2 & 71.2 & 70.5 & 48.6 & 80.50 \\ \hline **STM-SIM** & **99.8\({}^{*}\)** & **93.4** & **92.9** & **92.0** & **78.9** & **76.3** & **57.3** & **84.37** \\ \hline \end{tabular} \end{table} Table 3. Untargeted attack success rates (%) of \(\text{S}^{2}\text{IM}\) and our STM when integrated with DIM, TIM, and SIM, respectively. The adversarial examples are generated on Inc-v3. \({}^{*}\) indicates the white-box model. \begin{table} \begin{tabular}{|c|c c c c c c c c c|} \hline Attack & HGD & R\&P & NIPS-r3 & Bit-Red & JPEG & ComDefend & RS & NPR & AVG. \\ \hline \hline Admix & 64.0 & 58.1 & 67.8 & 46.7 & 81.7 & 82.9 & 42.3 & 49.1 & 61.6 \\ \(\text{S}^{2}\text{IM}\) & 74.2 & 74.2 & 81.0 & 58.6 & 88.4 & 88.7 & 55.2 & 58.9 & 72.4 \\ \hline **STM** & **80.7** & **82.5** & **87.6** & **72.1** & **91.5** & **93.7** & **70.6** & **77.6** & **82.0** \\ \hline \end{tabular} \end{table} Table 4. Untargeted attack success rates (%) on eight defense models. The adversarial examples are crafted on the ensemble models, _i.e.,_ Inc-v3, Inc-v4 and IncRes-v2. \begin{table} \begin{tabular}{|c c c|c c c c c c c|} \hline Style transfer & Fine-tuning & Mixing up & Inc-v3 & Inc-v4 & IncRes-v2 & Res-50 & Res-101 & Res-152 & AVG. 
\\ \hline \hline \(X\) & \(X\) & \(X\) & 95.1 & 97.6 & 100.0 & 83.3 & 85.4 & 87.3 & 91.45 \\ \hline _✓_ & \(X\) & \(X\) & 38.2 & 47.2 & 49.7 & 11.6 & 17.8 & 18.6 & 30.52 \\ _✓_ & _✓_ & \(X\) & 61.0 & 67.0 & 72.5 & 27.7 & 37.6 & 37.5 & 50.55 \\ _✓_ & _✓_ & _✓_ & **88.2** & **92.2** & **95.1** & **68.1** & **70.2** & **75.6** & **81.57** \\ \hline \end{tabular} \end{table} Table 5. Classification accuracy (%) on the ImageNet-compatible dataset for the original images and for images transformed by the different strategies (image style transformation, fine-tuning the model, and mixing up the original image content).
2303.12364
ExBEHRT: Extended Transformer for Electronic Health Records to Predict Disease Subtypes & Progressions
In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT applied to electronic health records), and apply different algorithms to interpret its results. While BEHRT considers only diagnoses and patient age, we extend the feature space to several multimodal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications, and laboratory tests, by applying a novel method to unify the frequencies and temporal dimensions of the different features. We show that additional features significantly improve model performance for various downstream tasks in different diseases. To ensure robustness, we interpret model predictions using an adaptation of expected gradients, which has not been previously applied to transformers with EHR data and provides more granular interpretations than previous approaches such as feature and token importances. Furthermore, by clustering the model representations of oncology patients, we show that the model has an implicit understanding of the disease and is able to classify patients with the same cancer type into different risk groups. Given the additional features and interpretability, ExBEHRT can help make informed decisions about disease trajectories, diagnoses, and risk factors of various diseases.
Maurice Rupp, Oriane Peter, Thirupathi Pattipaka
2023-03-22T08:03:27Z
http://arxiv.org/abs/2303.12364v3
ExBEHRT: Extended Transformer for Electronic Health Records to Predict Disease Subtypes & Progressions ###### Abstract In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT applied to electronic health records), and apply different algorithms to interpret its results. While BEHRT considers only diagnoses and patient age, we extend the feature space to several multimodal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications, and laboratory tests, by applying a novel method to unify the frequencies and temporal dimensions of the different features. We show that additional features significantly improve model performance for various downstream tasks in different diseases. To ensure robustness, we interpret model predictions using an adaptation of expected gradients, which has not been previously applied to transformers with EHR data and provides more granular interpretations than previous approaches such as feature and token importances. Furthermore, by clustering the model representations of oncology patients, we show that the model has an implicit understanding of the disease and is able to classify patients with the same cancer type into different risk groups. Given the additional features and interpretability, ExBEHRT can help make informed decisions about disease trajectories, diagnoses, and risk factors of various diseases. ## 1 Introduction Over the past decade, electronic health records (EHRs) have become extremely popular for documenting a patient's medical history, with many existing records combining heterogeneous temporal information about diagnoses, procedures, laboratory tests, observations and demographic data from a variety of sources (primary care, hospital visits, etc.). In general, a sequence of medical events of a single patient is referred to as a _patient journey_. Given the immense amount of data available (datasets range up to over 100M patients) and its level of detail, there is incredible potential for the use of machine learning to provide new insights into disease pattern recognition, early detection of rare diseases, and personalised risk prediction and treatment planning. Embedding algorithms derived from natural language processing (NLP) have shown remarkable performance when trained to represent patients' medical histories. Due to the chronological structure of EHRs, such algorithms can provide various insights into disease trajectories and clinical phenotypes. Recent advances in NLP have also shown that transformer-based methods such as BERT (Devlin et al. (2018)), GPT-3 (Brown et al. (2020)) and their variations are significantly superior to other approaches, as they are able to model complex temporal dependencies over a long period of time. In this paper, we present a novel approach to incorporate multimodal features into Transformer models by adding medical concepts separately and vertically, rather than chaining all concepts horizontally. We show that these features are important in various downstream applications such as mortality prediction, patient subtyping and disease progression prediction. #### Generalizable Insights about Machine Learning in the Context of Healthcare The main contributions from this work can be summarized as follows: 1. A novel form of incorporating any sort of multi-modal EHR features into BERT (or any other Transformer-based model) without having to extend the resources needed to train the model due to consistent, fixed patient journey sequences. 
2. The addition of patient information that, to our knowledge, was not included in any previous work (BMI, smoking status, laboratory values) and that improves model performance for several downstream tasks. These additional features provide a more comprehensive and complex understanding of patients, leading to deeper and more robust insights for clinicians when interpreting model results. In combination with the expected gradients model explainability, we can gain new insights into the different pieces of information and their impact on the outcome. 3. An exploration of unsupervised clustering of cancer patients using the patient representation of ExBEHRT, identifying groups of cancer types and subgroups within one cancer type with diverse information about their characteristics for recognizing risk subtypes and treatment patterns. ## 2 Related Work Recent studies have adapted transformers to structured EHR data and shown their superiority in various benchmarks compared to other similar algorithms (Kalyan et al. (2022)). Since most publications in this area are a derivative of BERT (Devlin et al. (2018)), in this section we will focus exclusively on BERT-based approaches applied to EHR data. The first adaptation of BERT to EHR data, called BEHRT (Li et al. (2020)), incorporated diagnosis codes and ages from EHRs and added additional embeddings to separate individual visits (segment embedding) and a position embedding for the visit number. To separate visits, the authors added **SEP** tokens1 between visits, analogous to the **SEP** token between sentences in BERT, and a **CLS** token as an artificial start token. The model was pre-trained using the Masked Language Modelling (MLM) objective on diagnosis concepts. Med-BERT (Rasmy et al. (2021)) introduced a code serialisation embedding in addition to diagnosis and position embeddings, indicating the order of diagnoses within a visit. Med-BERT was pre-trained with MLM and a binary classification target of whether a patient had at least one hospital stay of more than one week (_prolonged length of stay in hospital_ or PLOS). CEHR-BERT (Pang et al. (2021)) and BRLTM (Meng et al. (2021)) contain many more measures than the other two approaches. Instead of a separate diagnosis embedding, these studies combined all medical concepts (i.e. conditions, procedures and medications) of a patient into a single vector. This method results in considerable overhead when training a model, as the maximum length of the patient journey is significantly higher than if only the diagnosis codes were included. Adding more features (e.g. observations) would increase the resources required due to the increased length of the vector. In addition, there are a variety of models that either combine the BERT architecture with other machine learning models (Shang et al. (2019), Poulain et al. (2022), Li et al. (2021)) or focus exclusively on specific use cases (Azhir et al. (2022), Prakash et al. (2021), Rao et al. (2022)). All the aforementioned approaches either lack generalizability to different domains due to specific pre-training (a key advantage of transfer learning using transformers), do not incorporate enough variety in patient information to generate informed decisions, or are limited in the amount of data of a single patient they can process. 
## 3 ExBEHRT for EHR Representation Learning ExBEHRT is an extension of BEHRT where medical concepts are not concatenated into one long vector (as in Figure 2 for the example patient shown in Figure 1), but grouped into separate, learnable embeddings per concept type. In this way, we avoid exploding input lengths when adding new medical features and give the model the opportunity to learn which concepts it should focus on. From a clinical perspective, it is also sensible to separate diagnoses, procedures, drugs, etc., as they have different clinical value for downstream applications. Figure 1: An example of the procedures and diagnoses of a patient with three visits. Figure 2: An example of how models like CEHR-BERT and BRLTM represent the patient from figure 1 by horizontally stacking all features into a 1D representation. Note that each additional measure potentially increases the maximal sentence length \(m\). We take the number of diagnoses in a visit as an indicator of how many "horizontal slots" are available for other concepts in that visit (e.g. two for the first visit in Figure 3). Therefore, the maximum length of the patient journey is defined by the number of diagnosis codes of a patient, regardless of the number of other concepts added to the model. As shown with the procedures in Figure 3, but carried out in the same way for lab tests, there are three possible cases when adding a new concept to a visit: 1. The number of procedures is equal to the number of horizontal slots available in the visit (visit 1 - two each). The procedures can therefore be represented as a 1D vector. 2. The number of procedures exceeds the number of slots available in the visit (visit 2 - one diagnosis, two procedures). Here, the procedures fill up the horizontal slots line by line until there are no more procedures left, resulting in a 2D vector of dimensions \(\#slots\times\lceil\frac{\#procedures}{\#slots}\rceil\). 3. The number of procedures is smaller than the number of slots available (visit 3 - one diagnosis, no procedures). The procedures are represented as a 1D vector and then padded to the number of horizontal slots available. A code sketch of this arrangement is given below. The padding token **PAD** can be understood as an indicator to the model of which parts of a patient journey can be neglected, as they don't contain information. It is added at the end of a sentence to ensure the same length \(m\) for each patient. After the reshaping described above, all procedures of all patients are padded to the same number of rows \(n\) to enable batch processing. \(n\) is set to the 95th percentile over all representations of visits of all patients before training. Figure 3: An example of how ExBEHRT represents the patient from figure 1. As the features are stacked vertically, additional concepts (such as labs, as shown in figure 4) will not increase the sentence length \(m\). Since BMI, smoking status and gender naturally don't fluctuate within one visit, they do not need to be rearranged in a complex way. Therefore, the value recorded within a visit is copied to all horizontal slots of the corresponding visit. In addition, the embeddings of diagnoses, age and segment are the same as described in the original BEHRT publication. Before the inputs are passed to the model, each token is embedded in a 288-dimensional vector and all tokens are summed vertically. A visualisation of the complete input can be found in Figure 4. Figure 4: A sample input of ExBEHRT. Each of the concepts has its own embedding, where each of the tokens is mapped to a 288-dimensional vector, which is learned during model training. After embedding, all concepts are summed vertically element-wise to create a single \(288\times m\) dimensional vector as input for the model. 
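The slot arrangement described in the three cases above reduces to reshaping each visit's procedure (or lab) codes into rows whose width equals the number of diagnoses in that visit. The sketch below is a minimal illustration of that logic; the function names and the `"PAD"` string are assumptions chosen for readability, not the authors' code.

```python
import math

PAD = "PAD"  # padding token (assumed name)

def arrange_concepts(concepts, n_slots):
    """Arrange one visit's procedure/lab codes into rows of width n_slots
    (the number of diagnoses in that visit), padding the last row."""
    if not concepts:                                  # no concepts: a single padded row
        return [[PAD] * n_slots]
    n_rows = math.ceil(len(concepts) / n_slots)       # fill slots line by line
    padded = concepts + [PAD] * (n_rows * n_slots - len(concepts))
    return [padded[r * n_slots:(r + 1) * n_slots] for r in range(n_rows)]

def pad_rows(rows, n_max_rows, n_slots):
    """Pad every visit to the same number of rows n (e.g. the 95th percentile
    over all visits) so that patients can be batched."""
    return rows + [[PAD] * n_slots for _ in range(n_max_rows - len(rows))]

# Example: visit 2 of Figure 1 has one diagnosis (one slot) and two procedures.
print(arrange_concepts(["proc_A", "proc_B"], n_slots=1))  # [['proc_A'], ['proc_B']]
print(arrange_concepts([], n_slots=1))                    # [['PAD']]
```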
### Pre-training with Masked Language Modelling For pre-training the BERT-based models, we applied the standard MLM procedure described in the original BERT paper (Devlin et al. (2018)) to diagnosis code prediction, using their BertAdam optimizer with cross-entropy loss. We followed the vast majority of subsequent papers, where in each iteration 15% of the diagnosis codes of a patient are selected randomly and either masked (80% of the time), replaced with another diagnosis code (10% of the time) or kept the same (10% of the time). ### Fine-tuning on Disease-Specific Event Prediction We validated our model on several disease-specific binary classification tasks from different domains. One oncology-specific task (Lu et al. (2022)) commonly found in the literature is the prediction of cancer patient mortality within six and within twelve months. The observation window (the information provided to the model) is the entire patient journey up to the first cancer diagnosis (including the visit with the first cancer diagnosis). The third task, which was part of the CEHR-BERT paper, is to predict the readmission of a patient with heart failure to the hospital within 30 days after the heart failure. The observation window includes all visits within one year before the (first) heart failure. To account for the strong class imbalance between positive and negative outcomes, we also included focal loss (Lin et al. (2017)) in our hyperparameter search space. Focal loss reduces the relative loss for well-classified examples and puts more emphasis on difficult, misclassified examples. ### Cancer Patient Clustering with ExBEHRT Embeddings To generate patient clusters and visualize them in a meaningful way, we applied a combination of the dimensionality reduction technique UMAP (McInnes et al. (2018)) and the clustering algorithm HDBSCAN (Campello et al. (2013)). As ExBEHRT is not specialized for a specific disease, we conducted another pass of pre-training, where we initialized the model weights with the pre-trained ExBEHRT weights and applied MLM on cancer diagnosis codes only. This non-finetuned model was then used for generating the patient embeddings. After training, we conducted the following steps for the unsupervised clustering (see the sketch after this list): 1. Generate the ExBEHRT embedding (vector of size 288) for each patient (stemming from the CLS token at the beginning of each patient journey). 2. Reduce the embeddings from step 1 from 288 to 10 dimensions using UMAP, to obtain representations for clustering and avoid the curse of dimensionality. 3. Cluster the 10-dimensional vectors using HDBSCAN. 4. Reduce the embeddings from step 1 from 288 to 2 dimensions using UMAP, to obtain 2D coordinates for each patient. 5. Visualize the clusters from step 3 in the 2D space of the embedding from step 4. 
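These five steps translate almost directly into code with the publicly available `umap-learn` and `hdbscan` packages. The sketch below assumes the CLS embeddings have already been exported as a NumPy array; the file name and the UMAP/HDBSCAN hyperparameters shown are illustrative defaults, not the values used in the paper (those are reported in the appendix).

```python
import numpy as np
import umap
import hdbscan
import matplotlib.pyplot as plt

# Step 1: (n_patients, 288) array of CLS representations from ExBEHRT (assumed export)
embeddings = np.load("exbehrt_cls_embeddings.npy")

# Step 2: reduce 288 -> 10 dimensions for clustering
reducer_10d = umap.UMAP(n_components=10, n_neighbors=30, min_dist=0.0, random_state=0)
emb_10d = reducer_10d.fit_transform(embeddings)

# Step 3: cluster the 10-dimensional representations (-1 marks unassigned patients)
clusterer = hdbscan.HDBSCAN(min_cluster_size=1000)
labels = clusterer.fit_predict(emb_10d)

# Step 4: reduce 288 -> 2 dimensions for visualization
reducer_2d = umap.UMAP(n_components=2, random_state=0)
emb_2d = reducer_2d.fit_transform(embeddings)

# Step 5: plot the clusters found in step 3 on the 2D coordinates from step 4
plt.scatter(emb_2d[:, 0], emb_2d[:, 1], c=labels, s=1, cmap="tab20")
plt.show()
```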
## 4 Cohort In this study, we used the Optum® de-identified EHR database. It is derived from healthcare provider organizations in the United States, which include more than 57 contributing sources and 111,000 sites of care, including hospital-based medical services networks comprising academic, private, and community hospitals treating more than 106 million patients. Optum® data elements also include demographics, medications prescribed and administered, immunizations, allergies, lab results (including microbiology), vital signs and other observable measurements, clinical and hospitalisation administrative data, and coded diagnoses and procedures. The population in Optum® EHR is geographically diverse, spanning all 50 US states. ### Pre-Training Cohort We selected only data points collected during hospitalisations to ensure the quality and consistency of the data2. Each patient must have at least five visits with valid ICD-9 or ICD-10 diagnosis codes to ensure sufficient temporal context. Our cohort is selected with the same criteria as in BEHRT, resulting in 5.4M individuals. In order to prevent any sort of data leakage during pre-training and fine-tuning, the data is split into three datasets before training: training (80%), validation (10%) and testing (10%). \begin{table} \begin{tabular}{l l} \hline **Feature** & **Metric** \\ \hline Birth year & 1973\(\pm\)25, min: 1932, max: 2021 \\ Gender & 41.49\% male, 58.51\% female \\ Distribution by race & 68\% Cau., 22\% Afr. Am., 1\% As., 9\% other \\ No. of diagnosis codes per patient & 14\(\pm\)11.1, min: 5, max: 121 \\ No. of visits per patient & 9\(\pm\)6.6, min: 5, max: 63 \\ \% of patients without labs & 14.33\% \\ \% of patients without procedures & 1.64\% \\ \% of patients without BMI & 21.74\% \\ \% of patients without smoking status & 27.11\% \\ \% of deceased patients & 14.52\% \\ \hline \end{tabular} \end{table} Table 1: Statistics of the pre-training cohort. Footnote 2: This includes emergency patients, inpatients, observation patients, nursing facility patients, hospice patients and inpatient rehabilitation patients. ### Fine-Tuning Cohorts To validate the model's performance on cancer-specific tasks, we limited patients to have at least five diagnoses, regardless of the number of visits, in order to incorporate enough information for valid predictions. At least one of these diagnoses must be a cancer code (ICD-10 C[0-99]). This cohort consists of 437,903 cancer patients (31.67% deceased within 6 months and 38.45% within 12 months of the first cancer diagnosis), split into three datasets (training (80%), validation (10%), test (10%)), with each patient who is also part of the pre-training cohort described in section 4.1 being assigned to the same data split as in the other cohort to avoid data leakage. We constructed the heart failure readmission cohort similarly, but did not restrict patients to a specific number of visits or diagnoses as long as one code was a heart failure code (ICD-10 I50). Again, we felt this was an appropriate use-case for clinicians. This resulted in a cohort of 503,161 heart failure patients (28.24% readmitted within 30 days), split into three data sets. The detailed statistics of these two cohorts can be found in appendix 6. ### Data Processing For the diagnoses, we mapped all ICD-9 codes to ICD-10 codes according to the general equivalence mappings provided by the National Bureau of Economic Research3. Furthermore, only primary diagnoses are considered, as we wanted to focus on the most important diagnostics. Similar to Meng et al. (2021) and Choi et al. (2015), we limited the diagnosis codes to three characters to maintain a reasonable amount of relevant detail. Per-visit diagnoses were de-duplicated to avoid biasing the model towards recurring codes during long visits. 
After de-duplication, patients with more than 128 diagnoses were discarded, as only 0.625% of all patients had more than 128 diagnosis codes. In addition, we included procedures4 and laboratory types5 in the model. As with diagnoses, procedure codes and laboratory types were de-duplicated per visit. Footnote 3: [https://www.nber.org/research/data/icd-9-cm-and-icd-10-cm-and-icd-10-pcs-crosswalk-or-general-equivalence-mappings](https://www.nber.org/research/data/icd-9-cm-and-icd-10-cm-and-icd-10-pcs-crosswalk-or-general-equivalence-mappings)
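A condensed sketch of this diagnosis preprocessing is shown below. The data-frame layout (`patient_id`, `visit_id`, `code`, `code_system` columns) and the GEM lookup dictionary are assumptions made purely for illustration, not the paper's actual data schema.

```python
import pandas as pd

# diag: one row per recorded primary diagnosis; columns: patient_id, visit_id, code, code_system
# icd9_to_icd10: dict built from the NBER general equivalence mappings (assumed layout)
def preprocess_diagnoses(diag: pd.DataFrame, icd9_to_icd10: dict) -> pd.DataFrame:
    df = diag.copy()
    # 1) map ICD-9 codes to ICD-10 where necessary
    is_icd9 = df["code_system"] == "ICD9"
    df.loc[is_icd9, "code"] = df.loc[is_icd9, "code"].map(icd9_to_icd10)
    # 2) keep only the first three characters of each ICD-10 code
    df["code"] = df["code"].str[:3]
    # 3) de-duplicate codes within a visit
    df = df.drop_duplicates(subset=["patient_id", "visit_id", "code"])
    # 4) discard patients with more than 128 remaining diagnosis codes
    counts = df.groupby("patient_id")["code"].size()
    keep = counts[counts <= 128].index
    return df[df["patient_id"].isin(keep)]
```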
## 5 Results For the baselines BEHRT and Med-BERT, we used the source code of the corresponding publications6. All models were trained for 40 epochs on a Tesla T4 GPU with 16GB memory, where, as in the original publication, the epoch with the highest micro-averaged MLM precision score7 was selected. We further report the balanced accuracy to get a better sense of the overall performance of the models. Footnote 6: The source code can be found here: [https://github.com/deepmedicine/BEHRT](https://github.com/deepmedicine/BEHRT) and [https://github.com/ZhiGroup/Med-BERT](https://github.com/ZhiGroup/Med-BERT) Footnote 7: By default, the precision score is evaluated at a 0.5 threshold. Certainly, one could apply additional model calibration, but we hypothesized that this could introduce bias before fine-tuning. As presented in table 2, adding additional features to BEHRT significantly increases the pre-training performance in diagnosis code prediction. Adding a second pre-training objective slightly harms the MLM performance, but could nevertheless lead to improved fine-tuning performance due to additional context. ### Fine-Tuning on Event Prediction Results For this set of tasks we report the metrics commonly used to evaluate algorithms that perform binary predictions: area under the receiver operating characteristic curve (AUROC), average precision score (APS) and the precision at the 0.5 threshold. We used their micro-averaged implementations to follow the line of previous work and to ensure a more robust assessment of the overall performance. More details on the hyperparameter optimization process can be found in appendix 6. We denote the tasks of predicting death within N months after the first cancer diagnosis as _Death in 6M_ and _Death in 12M_, and the task of predicting readmission of patients within 30 days of their first heart failure as _HF readmit_. 
In addition to comparing the performance of our algorithm with that of BEHRT, we also benchmarked against two of the best performing "conventional" machine learning algorithms for tabular data, XGBoost (XGB, Chen and Guestrin (2016)) and Logistic Regression (LR). Details on the preprocessing of the data and the tuning of the hyperparameters can be found in appendix 6. \begin{table} \begin{tabular}{l l l l l} \hline \hline & **BEHRT** & **Med-BERT** & **ExBEHRT** & **ExBEHRT+P** \\ \hline Precision & 54.6\% & 56.2\% & **64.2\%** & 63.9\% \\ Balanced Accuracy & 8.86\% & 8.03\% & **16.58\%** & 15.52\% \\ \hline \hline \end{tabular} \end{table} Table 2: Pre-Training results of various models. \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline **Task** & **Metric** & **LR** & **XGB** & **BEHRT** & **Med-BERT** & **ExBEHRT** & **ExBEHRT+P** \\ \hline \multirow{3}{*}{Death in 6M} & APS & 42.8\(\pm\)0.0\% & 45.5\(\pm\)0.1\% & 47.7\(\pm\)0.4\% & 46.2\(\pm\)0.4\% & **53.1\(\pm\)0.3\%** & 52.6\(\pm\)0.3\% \\ & AUROC & 63.5\(\pm\)0.0\% & 66.4\(\pm\)0.1\% & 66.7\(\pm\)0.6\% & 65.3\(\pm\)0.3\% & **71.5\(\pm\)0.5\%** & 70.9\(\pm\)0.5\% \\ & Precision & 73.0\(\pm\)0.1\% & 74.3\(\pm\)0.1\% & 75.2\(\pm\)0.2\% & 74.5\(\pm\)0.1\% & **78.1\(\pm\)0.1\%** & 77.9\(\pm\)0.1\% \\ \hline \multirow{3}{*}{Death in 12M} & APS & 51.6\(\pm\)0.0\% & 45.5\(\pm\)0.1\% & 55.5\(\pm\)0.1\% & 54.4\(\pm\)0.2\% & **59.8\(\pm\)0.2\%** & 59.6\(\pm\)0.2\% \\ & AUROC & 66.7\(\pm\)0.0\% & 66.3\(\pm\)0.1\% & 70.1\(\pm\)0.2\% & 68.9\(\pm\)0.3\% & **74.3\(\pm\)0.4\%** & 73.8\(\pm\)0.4\% \\ & Precision & 70.4\(\pm\)0.1\% & 74.4\(\pm\)0.1\% & 73.2\(\pm\)0.1\% & 72.4\(\pm\)0.1\% & **76.4\(\pm\)0.1\%** & 76.3\(\pm\)0.1\% \\ \hline \multirow{3}{*}{HF readmit} & APS & 29.8\(\pm\)0.0\% & **31.3\(\pm\)0.1\%** & 19.9\(\pm\)0.1\% & 19.8\(\pm\)0.1\% & 30.0\(\pm\)1.6\% & 25.1\(\pm\)0.1\% \\ & AUROC & 51.9\(\pm\)0.1\% & 53.6\(\pm\)0.1\% & 51.2\(\pm\)0.1\% & 51.0\(\pm\)0.1\% & 56.7\(\pm\)1.7\% & **56.8\(\pm\)0.2\%** \\ \cline{1-1} & Precision & **72.0\(\pm\)0.0\%** & **72.3\(\pm\)0.1\%** & **81.0\(\pm\)0.1\%** & 81.0\(\pm\)0.0\% & 78.7\(\pm\)0.2\% & **81.6\(\pm\)0.1\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Average fine-tuning results of various models and their standard deviations. As shown in table 3, the variants of ExBEHRT outperform the four baselines we created on all tasks. We also found that the addition of the second pre-training target PLOS can lead to slightly better performance in some scenarios, but is not superior overall. Nonetheless, XGBoost provides a higher APS than the transformer-based models on HF readmit, but performs worse on the other metrics. For the two cancer mortality tasks, we further performed a cancer-specific evaluation of the ten most common cancers, as the different cancer types differ drastically in their expected outcomes. This evaluation can be found in appendix 6. For all tasks presented here, we also examined the effects of omitting certain concepts to measure the impact of the new features. These ablations can be found in appendix 6. In addition, we examined the effect of the positional variance of concepts within a visit. Since the model was trained to predict diagnosis codes at specific time points and not within a visit, there could be a possible bias that the model performs worse when the different features are mixed within a visit. 
The model should perform similarly regardless of which slot procedures and laboratory values are added to, as there is no temporal order within a visit. This ablation can be found in appendix 6. ### Interpretability on Event Prediction Results For all interpretability experiments, we used the ExBEHRT model fine-tuned on the task _Death in 6M_, i.e. predicting whether a cancer patient will die within six months after their first cancer diagnosis. We visualize the interpretability for individual patients only, as both interpretability approaches presented here are example-based and not model-agnostic. #### 5.4.1 Self-Attention Visualization Analogous to previous work (Li et al. (2020), Rasmy et al. (2021), Meng et al. (2021)), we visualised the attention of the last network layer using BertViz (Vig (2019)). However, since in all of these models the embeddings are summed before being passed through the network, self-attention has no way of attributing individual input features to the outcome. Nevertheless, we can draw conclusions about how the different slots interact with each other and which connections the model considers important. Figure 5 shows the self-attention of a single patient in the last layer of ExBEHRT. The journey represents a 69-year-old woman who never smoked and died a year after being diagnosed with lung cancer. The left figure shows the attention of all 12 attention heads in this layer, while the right figure shows the attention of one single head. As expected, the model focuses heavily on the slots within a visit, as these slots are highly interconnected by definition. Although the model was not specifically trained on cancer codes, it pays close attention to slot 7 (the slot containing the cancer diagnosis), suggesting that it has learned some correlation between the cancer diagnosis and the predicted outcome. Interestingly, slot 7 receives a lot of attention on the first and second visits, but not on the other two previous visits, suggesting that the model is able to learn causality over long periods of time. Table 15 in appendix 6 contains information about all diagnoses, procedures and labs for each slot of the patient from figure 5. #### 5.4.2 Expected Gradients Interpretability Due to the limitations of self-attention visualisation, we explored the technique Expected Gradients (Erion et al. (2020)) for more detailed interpretability. With this algorithm, we can infer the importance of individual input tokens, which is not possible with self-attention. Since each token (diagnosis code, procedure code, age, etc.) is mapped to a 288-dimensional embedding before being passed to the model, we first calculated the expected gradients for the embedding and then summed the absolute values to obtain a single gradient value for each token. In this way, each individual token has an associated gradient that is linked to the output of the model and provides detailed insights into which medical concept has what impact on the prediction of the model. Our example patient is a 58-year-old woman who was a regular smoker. She died at the age of 65, three months after her blood cancer diagnosis. In figure 6, we summed all expected gradients for each of the input features. This way, we can evaluate the feature importances on the output for a specific patient. For this patient, diagnoses and procedures (treatments & medications) were by far the most important features. With this visualization, we can further evaluate basic biases. For example, gender was not considered to be an important feature, indicating that predictions would be similar for a person of another gender. 
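The attribution procedure described here can be sketched as an expected-gradients loop over the embedded input: sample a reference patient and an interpolation coefficient, accumulate the gradient of the model output at the interpolated embedding, and finally collapse the 288 embedding dimensions into one absolute score per token. The function signature, tensor shapes and the choice of reference set below are assumptions for illustration, not the exact adaptation used for ExBEHRT.

```python
import torch

def expected_gradients(embeds, baseline_embeds, model_fn, n_samples=50):
    """Expected gradients for one patient's token embeddings.

    embeds:          (seq_len, 288) embedded input of the patient
    baseline_embeds: (n_refs, seq_len, 288) embeddings of reference patients
    model_fn:        maps an embedding tensor to the scalar model output
                     (e.g. the probability of death within 6 months)
    Returns one attribution score per token (absolute sum over the 288 dims).
    """
    attributions = torch.zeros_like(embeds)
    for _ in range(n_samples):
        ref = baseline_embeds[torch.randint(len(baseline_embeds), (1,)).item()]
        alpha = torch.rand(1).item()                              # interpolation coefficient
        point = (ref + alpha * (embeds - ref)).detach().requires_grad_(True)
        grad = torch.autograd.grad(model_fn(point), point)[0]
        attributions += ((embeds - ref) * grad).detach()          # one Monte-Carlo sample
    attributions /= n_samples
    return attributions.abs().sum(dim=-1)                         # one score per token
```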
Figure 5: Left: The self-attention of all 12 attention heads of the last layer of ExBEHRT. Higher opacity corresponds to higher attention. Right: The self-attention of one attention head of the last layer. In figure 7, we visualized the absolute expected gradients for each of the features and summed them at each time slot. This way, we can evaluate the different feature importances over time to get a notion of where the model puts its emphasis. Interestingly, the model put more importance on the kind of medications & treatments the patient received in the first two visits, whereas in the last visit (the visit in which the patient was diagnosed with blood cancer), it put more importance on diagnoses and labs. Generally, slot 5, where the cancer was diagnosed, was attributed the highest importance. Figure 8 displays the absolute sums of gradients of each individual input token, providing a detailed interpretation of which medical concept has had what impact on the model's prediction. Unsurprisingly, the cancer code C81 has had the biggest impact on the outcome. However, earlier codes like J40 or 71020 also contribute to the model's prediction, indicating that the model includes information from the whole patient journey. Figure 6: The absolute sums of the expected gradients summed by input feature. Figure 7: The absolute sums of the expected gradients summed by input feature and time slot. The dotted lines indicate the next visit. ### Patient Clustering Of the 260'645 cancer patients from the general cohort, HDBSCAN was able to cluster 90% (234'575) into 24 clusters (mean: 9'774, min: 1'102, max: 47'722). As shown in figure 9, the clusters are clearly separated spatially, indicating a distinct separation of the different cancer types. We labelled each cluster with the most frequently occurring diagnosis code within this cluster, regardless of the type of code. Interestingly, similar concepts (e.g. cancer of the female reproductive organs (clusters 14-16), different types of leukaemia (clusters 6-8), or cancer of the digestive organs (clusters 2, 4, 5, 18, 19)) lie in areas close to each other, indicating a spatial logic within the disease types. On average, the most common cancer diagnosis within a cluster was present in 84% of patients assigned to that cluster, indicating a strong internal focus on cancer codes within the model. Of the 23 clusters, 22 had a unique cancer code as the most common diagnosis and included, on average, 85% of all patients diagnosed with the corresponding cancer code. These two metrics indicate strong cross-cluster purity and homogeneity within the clusters. For a more detailed description of all clusters as well as the hyperparameters used in the different clustering steps, see appendix 6 in table 14 and figures 13 and 14. #### 5.5.1 Disease Subtyping To draw conclusions about the internal clustering of HDBSCAN, we examined the most frequently occurring diagnoses, procedures and labs for each cluster. We focused only on concepts that occurred at least 5% more frequently within the cluster than in the entire cohort. In this way, we ensured that very common diagnoses such as pelvic pain were not included in our cluster analysis. Figure 8: A visualization of the absolute sums of the expected gradients of diagnoses, labs and procedures on a concept level. 
Darker colours represent higher values and the SEP tokens indicate the separation between two visits. A closer look at clusters 7 and 8 shows the potential of ExBEHRT to form subgroups of the same cancer type (Figures 10 & 11). Although almost all patients in both clusters have lymphocytic leukaemia, their diagnoses, procedures and applied laboratory tests differ considerably. Figure 9: The unsupervised cluster assignments from HDBSCAN, visualized with a 2-dimensional UMAP projection. The gray points are patients not assigned to any cluster (10%). The labels indicate the most frequent diagnosis code of each cluster. Besides cluster 10, all labels are neoplasms. Examination of the different patient characteristics of these two clusters (table 4) shows that the model has indeed learned to distinguish between chronic lymphocytic leukaemia (CLL, cluster 7) and acute lymphocytic leukaemia (ALL, cluster 8) without having explicit information on these subtypes. As we limited the ICD-10 codes to three digits, only the general lymphocytic leukemia code C91 is given to the model, without the subtypes C91.0 for ALL and C91.1 for CLL. In the table, _% of journey with cancer_ indicates the ratio of the time between the first and last cancer diagnosis compared to the duration of the whole patient journey. _Cancer-free_ refers to the percentage of patients within a cluster who have records of at least two visits after the last visit with a cancer diagnosis. The _average death rate_ comes directly from the Optum® EHR database and unfortunately does not indicate the cause of death. \begin{table} \begin{tabular}{l l l} \hline \hline **Metric** & **Cluster 7 (CLL)** & **Cluster 8 (ALL)** \\ \hline Median age & 70 & 5 \\ Median birth year & 1946 & 2009 \\ Median BMI & 26 & 17 \\ \% of men & 60.5\% & 55.2\% \\ Average death rate & 54.7\% & 6.6\% \\ \% of journey with cancer & 29.9\% & 45.5\% \\ Cancer-free & 47.3\% & 48.1\% \\ \hline \hline \end{tabular} \end{table} Table 4: Statistics of the two lymphoblastic leukemia clusters indicating a clear separation between CLL and ALL. Figure 10: The three most common procedures, labs and diagnoses for the CLL cluster. Figure 11: The three most common procedures, labs and diagnoses for the ALL cluster. Another example, the pancreatic cancer cluster 4, shows that with a second pass of HDBSCAN on this cluster only, we can identify risk subgroups of pancreatic cancer. In all three identified clusters, more than 90% of the patients actually do have pancreatic cancer and all share similar general characteristics. However, as displayed in table 5, ExBEHRT identified one subgroup with a significantly higher chance of recovering from cancer and a lower probability of death, even though this information was not provided to the model at any point. Figure 12: The three identified patient subclusters with pancreatic cancer, visualized with a kernel density estimate plot for visual clarity. Even though the three clusters generally share the same characteristics in diagnoses, age, BMI, etc., patients belonging to the smaller purple cluster died less frequently and recovered nearly twice as often from cancer compared to the other two clusters. 
\begin{table} \begin{tabular}{l l l l} \hline \hline **Metric** & **Gray** & **Blue** & **Purple** \\ \hline Median age & 67 & 68 & 68 \\ Median birth year & 1950 & 1947 & 1944 \\ Median BMI & 25 & 25 & 26 \\ \% of men & 52.3\% & 50.9\% & 60.0\% \\ Average death rate & 76.5\% & 75.9\% & **70.0\%** \\ \% of journey with cancer & 27.0\% & 24.0\% & **18.3\%** \\ Cancer-free & 34.0\% & 36.9\% & **62.7\%** \\ \hline \hline \end{tabular} \end{table} Table 5: Statistics of the three pancreatic cancer clusters indicating a clear differentiation between higher risk (gray, blue) and lower risk patients (purple). ## 6 Discussion In this study, we presented a novel method for adding patient features to BEHRT that significantly increases the predictive power for multiple downstream tasks in different disease domains. The novel method of stacking features vertically led to improvements in hardware requirements and benchmarks, and facilitates the possible extension to new concepts in the future. Given the large number and heterogeneity of patients with which the model was pre-trained, we are confident that ExBEHRT will generalise well to new data, patients and tasks. Combined with interpretability, the model offers more detailed insights into disease trajectories and subtypes of different patients than previous approaches, which could help clinicians form more detailed assessments of their patients' course and health. Furthermore, with a personalised understanding of patient groups, it is possible to identify unmet needs and improve patient outcomes. **Limitations and Future Work** It is worth noting that the pre-training precision reported in BEHRT's original paper is higher than the one we were able to reproduce with the same model on our data (0.6597 (theirs) vs. 0.5456 (ours)). One possible explanation is that we drastically increased the task complexity since our model predicts a label out of 1916 instead of 300 diagnosis concepts. Nevertheless, we were able to show that additional features significantly improved both the quantitative and qualitative performance of the model, and we expect that this would also be the case when using the original dataset and the medical codes from BEHRT. Furthermore, it is extremely difficult to validate the quality, completeness and correctness of EHR datasets because EHR data is usually processed anonymously and comes from a variety of heterogeneous, fragmented sources. The pure nature of EHR data also introduces bias, as physicians may have an incentive to diagnose additional or other conditions, as medical billing is closely related to the number and type of diagnoses reported. In addition, there is also the question of bias and fairness in our results. In a possible next step, we would like to verify the results and interpretations of this work with clinicians to ensure robust and sound predictions given the interpretability we have acquired. In addition, we would like to test the generalisability of ExBEHRT to other clinical use-cases such as severity prediction and risk typing of other diseases and certain cancers.
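Returning briefly to the interpretability analysis of Section 5.4, the feature-level and time-slot-level importances shown in Figures 6 and 7 amount to summing absolute per-token attributions. A generic version of that aggregation is sketched below; it assumes the per-token expected-gradient attributions have already been computed and are aligned with each token's feature type and visit slot, and all arrays here are placeholders.

```python
import numpy as np

# Placeholder inputs (assumptions, not the authors' data):
#   attributions : (n_tokens,) absolute expected-gradient magnitude per input token
#   feature_type : (n_tokens,) e.g. "diagnosis", "lab", "procedure", ...
#   time_slot    : (n_tokens,) visit/slot index of each token
rng = np.random.default_rng(0)
attributions = np.abs(rng.normal(size=200))
feature_type = rng.choice(["diagnosis", "lab", "procedure"], size=200)
time_slot = rng.integers(0, 6, size=200)

# Figure-6-style totals: one importance score per feature type.
per_feature = {f: float(attributions[feature_type == f].sum())
               for f in np.unique(feature_type)}

# Figure-7-style totals: feature type x time slot.
per_feature_slot = {(f, t): float(attributions[(feature_type == f) & (time_slot == t)].sum())
                    for f in np.unique(feature_type)
                    for t in np.unique(time_slot)}

print(per_feature)
```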
2307.01189
Trainable Transformer in Transformer
Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., linear or 2-layer MLP) during inference. However, such constructions require large memory overhead, which makes simulation of more sophisticated internal models intractable. In this work, we propose an efficient construction, Transformer in Transformer (in short, TinT), that allows a transformer to simulate and fine-tune complex models internally during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TinT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TinT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TinT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe TinT for a OPT-125M model improves performance by 4-16% absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TinT is included.
Abhishek Panigrahi, Sadhika Malladi, Mengzhou Xia, Sanjeev Arora
2023-07-03T17:53:39Z
http://arxiv.org/abs/2307.01189v2
# Trainable Transformer in Transformer ###### Abstract Recent works attribute the capability of in-context learning (ICL) in large pre-trained language models to implicitly simulating and fine-tuning an internal model (e.g., linear or 2-layer MLP) during inference. However, such constructions require large memory overhead, which makes simulation of more sophisticated internal models intractable. In this work, we propose a new efficient construction, _Transformer in Transformer_ (in short, TinT), that allows a transformer to simulate and fine-tune more complex models during inference (e.g., pre-trained language models). In particular, we introduce innovative approximation techniques that allow a TinT model with less than 2 billion parameters to simulate and fine-tune a 125 million parameter transformer model within a single forward pass. TinT accommodates many common transformer variants and its design ideas also improve the efficiency of past instantiations of simple models inside transformers. We conduct end-to-end experiments to validate the internal fine-tuning procedure of TinT on various language modeling and downstream tasks. For example, even with a limited one-step budget, we observe TinT for a OPT-125M model improves performance by \(4-16\%\) absolute on average compared to OPT-125M. These findings suggest that large pre-trained language models are capable of performing intricate subroutines. To facilitate further work, a modular and extensible codebase for TinT is included 1. Footnote 1: [https://github.com/abhishekpanigrahi1996/transformer_in_transformer](https://github.com/abhishekpanigrahi1996/transformer_in_transformer) ## 1 Introduction Transformers [41] have brought about a revolution in language modeling, and scaling model size has enabled significant advancements in capabilities [5; 7]. One such capability [45] is in-context learning (ICL), where language models "learn" from given training exemplars in the context and subsequently predict the label of a test example within a single inference pass. Several works [1; 10; 42] propose that ICL occurs when the large ("simulator") model mimics and trains a smaller and simpler auxiliary model --such as a linear or 2-layer MLP model-- on the in-context data. A crucial limitation of previous works is the large number of parameters needed for a simulator to perform such a complex subroutine during its forward pass, which restricts the simulator to performing very few training steps on fairly simple models. For example, simulating training of a linear layer can require tens of millions of parameters [1], and extending the simulator to train a larger model would require a simulator with trillions of parameters. The current work shows that minor modifications to the standard transformer architecture allow it to efficiently simulate and approximately train an internal _auxiliary_ transformer during a single inference pass (Section 2). We call our architecture _Transformer in Transformer_, or TinT in short. We show how TinT can internally simulate and train several popular and capable Transformer models such as GPT [32], OPT [52], and other variants [40; 35]. In particular, TinT incorporates novel designs and approximations such as encoding auxiliary weights as prefix embeddings and efficient utilization of attention modules for computational parallelism. 
As a result, TinT with fewer than two billion parameters can internally simulate and perform one update on an auxiliary transformer with 125 million parameters (e.g., GPT-2 or OPT-125m). The scale and efficiency of our construction are crucial to its significance, as it suggests that even transformers of moderate scale can explicitly learn from context during inference. We validate our approach with end-to-end experiments on many language modeling and downstream tasks. Results demonstrate that a TinT model constructed to simulate and tune an OPT-125m model leads to a perplexity reduction of \(0.3\) to \(0.7\) points in language modeling. Additionally, TinT learns from in-context exemplars in the few-shot setting, resulting in an absolute gain of \(12\%\) to \(16\%\) over the auxiliary model. TinT can also learn from the context tokens of the evaluation inputs in the zero-shot setting, leading to an absolute performance improvement of up to \(4\%\) when no explicit exemplars are provided (Section 3.2). To the best of our knowledge, TinT is the first simulator to undergo such a comprehensive end-to-end evaluation on standard language tasks. In contrast, previous studies primarily conducted probing tests on transformers pre-trained using synthetic datasets or lacked empirical validation [1, 42, 13], likely due to the immense scale required by their constructions. ## 2 Our Construction The TinT model needs to simulate three operations on the _auxiliary_ model: forward pass, backward pass, and gradient update. The auxiliary model is a transformer with four main _modules_: (1) linear layer, (2) attention module, (3) activation function, and (4) layer normalization. Thus TinT must express \(3\times 4\), i.e., \(12\) operations, using compositions of the identical four types of modules available in TinT. For presentation, we describe TinT for OPT and GPT auxiliary models. The simulation details of other variants are in Appendix 1. ### Summary of modifications and their efficiency contributions We summarize all of the modifications introduced for efficient simulation. Some improve the parameter efficiency of all \(12\) simulated operations, whereas others are specific to certain layers. Numbers in parentheses indicate the parameter saving factor when relevant. 1. **Prefix embeddings** (\(5\times\) compared to [44, 29]): As described in Section 2.2, we use the token embeddings of the first few inputs (i.e., the _prefix_, see Definition 2.1) to represent the relevant auxiliary model weights at each layer. All operations use this template to improve the parameter efficiency of simulating even simple operations (e.g. linear layers). Figure 1: The overall structure of TinT. Each Forward, Backward, and Descent module is represented using combinations of linear, self-attention, layernorm, and activation layers. The input consists of prefix embeddings, that represent relevant auxiliary model parameters in each layer, input token embeddings, and a binary prefix mask to separate the train and evaluation segments of the input. The auxiliary model parameters are updated in the descent module using the training part of the segment, and the updated prefix tokens are transferred to the forward modules via residual connections for evaluating the rest of the segment. 2. \(H_{\text{sim}}\)**-split linear operations** (\(H_{\text{sim}}\times\)): In Section 2.3, we parallelize expensive linear operations in TinT using \(H_{\text{sim}}\)-split operations. 3. 
**Linear attention**: We use linear attention modules to perform the forward, backward, and gradient operations for an auxiliary model linear layer. Softmax attention also suffices but requires more parameters and incurs an approximation error (Theorem 2.5). We use softmax attention modules to simulate the auxiliary model attention modules in TinT. 4. **First order gradients** (\(4\times\)): We use the first-order term of the gradient for the layer normalization and activations layers (Section 2.4). 5. **Gradients only through value vectors in attention** (\(5\times\)): We only use the gradients of the value vectors of the attention module to backpropagate to previous layers (Section 2.5). We show that under certain conditions this approximation can be arbitrarily accurate (Theorem 2.13). 6. **Parameter sharing** (\(3\times\) or \(4\times\)): We save \(3\times\) parameters by applying the same forward module in TinT to simulate the query, key, and value computation of the auxiliary model's self-attention module (Section 2.6). Similarly, we divide the feedforward layer in the auxiliary model into 4 sub-layers and save \(4\times\) parameters by employing a single TinT module to simulate the computation of each sub-layer. We focus on the parameter-efficient modules of our model and defer the complete formal construction to the appendix. For illustration, we ignore the bias parameters here but discuss them in the appendix. **Notation:** Let \(D\) denote the embedding dimension for a token and \(T\) denote the length of an input sequence. \(H\) denotes the number of attention heads. With the exception of contextual embeddings, we use subscripts to indicate if the quantity is from TinT model or from the auxiliary model. For example, \(D_{\text{aux}}\) refers to the embedding dimension and \(D_{\text{sim}}\) refers to TinT model embedding dimension. For contextual embeddings, we use \(\mathbf{e}_{t}^{(\ell)}\in\mathbb{R}^{D_{\text{sim}}}\) to denote activations in TinT and \(\mathbf{x}_{t}^{(\ell)}\in\mathbb{R}^{D_{\text{aux}}}\) to denote activations in the auxiliary model, where \(\ell\) is the layer and \(t\) is the sequence position. When convenient, we drop the superscript that represents the layer index and the subscript that represents the position index. For a matrix \(\mathbf{A}\), \(\mathbf{a}_{j}\) refers to its \(j\)th row, and for any vector \(\mathbf{b}\), \(b_{j}\) refers to its \(j\)th element. TinT uses one-hot position embeddings \(\{\mathbf{p}_{i}^{\text{TNT}}\in\mathbb{R}^{T_{\text{sim}}}\}_{i\leq T_{\text{ sim}}}\). ### Operating on an auxiliary model with prefix embeddings The straightforward way to simulate the forward pass of the auxiliary model would be to store its weights in the simulator's weights and run a forward pass as usual. However, this gives the simulator no way to update the weights of the auxiliary model, since the simulator cannot modify its own weights during a forward pass. The only way to update the auxiliary model weights is by storing them in model _activations_ that can be accessed and modified over the course of a forward pass. Wei et al. [44], Perez et al. [29] model the simulator after a Turing machine. Each simulator token embedding \(\mathbf{e}_{t}^{(\ell)}\in\mathbb{R}^{D_{\text{sim}}}\) acts as a workspace for operations. Weights and intermediate computations are copied between the workspace and memory using attention modules. 
Memory space can either be allocated in a token embedding, thereby increasing the embedding size \(D_{\text{sim}}\)[1], or passed into the model as additional context tokens, thereby increasing the simulator's input sequence length \(T_{\text{sim}}\). Both strategies increase the size of the construction, and using attention modules for copy operations results in a drastic scaling. For example, if \(D_{\text{aux}}=768\), a dot product with weight \(\mathbf{w}\in\mathbb{R}^{768}\), i.e. \(\langle\mathbf{w},\mathbf{x}_{t}^{(\ell)}\rangle\), requires at least \(8.7\) million parameters in the simulator2. Footnote 2: The copy attention will require \(1.7\) million parameters, while the dot product with a feedforward module (following [1]) will require \(>7\) million parameters. Alternatively, storing memory as context tokens and allowing the attention modules to attend to those tokens removes the need for copying operations [13]. Then, a dot product with weight \(\mathbf{w}\in\mathbb{R}^{768}\), i.e. \(\langle\mathbf{w},\mathbf{x}_{t}^{(\ell)}\rangle\), requires only \(1.7\) million parameters. However, naively inserting the parameters as tokens again causes a drastic scaling, since the attention module grows quadratically with the sequence length \(T_{\text{sim}}\). Following this approach, we define _prefix embeddings_ in TinT, which contain only the relevant auxiliary parameters at each layer. **Definition 2.1** (Prefix Embeddings).: We use \(\{\mathbf{v}_{j}^{(\ell)}\}_{j=1}^{K}\) to denote the \(K\) prefix embeddings at the \(\ell\)th layer of the TinT model. Prefix embeddings contain quantities (e.g., auxiliary model weights or simulated intermediate activations) needed for each simulation. Figure 1 illustrates TinT. Using prefix embeddings allows us to (1) parallelize operations between the auxiliary weights and the intermediate activations across \(H_{\text{sim}}\) attention heads, (2) keep the embedding dimension \(D_{\text{sim}}\) relatively small, and (3) choose between sharing the auxiliary model across input sequences or resetting the auxiliary model to its original state after each input sequence. We control the number of prefix embeddings \(K\) by designing each layer's prefix to only contain the required auxiliary parameters for the relevant simulated operation. Stacking multiple tokens per embedding allows us to efficiently parallelize across multi-head attention (Section 2.3). Residual connections propagate updated auxiliary model parameters to later layers of TinT. This strategy results in a tradeoff between the true context length that TinT can operate on and the resulting model size. Figure 2: TinT simulates the forward pass of a linear layer as a \(H\)-head (\(H=6\) here) attention layer, with parameters of the auxiliary model as the key, the encodings of input tokens as the query, and the positional one-hot vector of the prefix embeddings as the value. We omitted the identical transformation for key, query, and value matrices for simplicity. ### Stacking in prefix-tokens, \(H_{\text{sim}}\)-split linear operations and Linear attention We motivate three parameter-efficient techniques using a \(D_{\text{aux}}\times D_{\text{aux}}\) linear layer as a case study. The linear layer is applied token-wise, so we consider a single position \(t\) without loss of generality.
**Definition 2.2** (Linear layer).: For a weight \(\mathbf{W}\in\mathbb{R}^{D_{\text{aux}}\times D_{\text{aux}}}\), a linear layer takes \(\mathbf{x}\in\mathbb{R}^{D_{\text{aux}}}\) as input and outputs \(\mathbf{y}=\mathbf{W}\mathbf{x}\). **Stacking:** Let \(\mathbf{w}_{i}\) denote the \(i\)th row of \(\mathbf{W}\), so we must compute \(\langle\mathbf{w}_{i},\mathbf{x}_{t}\rangle\) for all \(i\in[D_{\text{aux}}]\). To do so, the TinT input embedding \(\mathbf{e}_{t}\) must contain \(\mathbf{x}_{t}\) in its first \(D_{\text{aux}}\) coordinates. We provide the weights \(\{\mathbf{w}_{i}\}\) as prefix embeddings \(\{\mathbf{v}_{j}\}\) (Definition 2.1). As a first attempt, we might simply put each \(\mathbf{w}_{i}\) in its own \(\mathbf{v}_{i}\) vector, which means we would need \(K=D_{\text{aux}}\) prefix embeddings at the start of the sequence. For GPT-2, \(D_{\text{aux}}=768\), so placing each weight as an individual prefix embedding will not allow the TinT to accept many standard language context tokens, and the complexity of attention modules in the TinT will grows quadratically with input length. To avoid such inefficiencies, we stack \(S\) weights on top of each other to form each prefix embedding \(\mathbf{v}_{i}\). \(S\) drives a trade-off between the embedding dimension of the TinT, \(D_{\text{sim}}:=D_{\text{aux}}S\), and the context length to the TinT, \(T_{\text{sim}}:=K+T_{\text{aux}}\). We set \(S=4\). **Attention Module:** We can now use a self-attention module of TinT to perform the dot product between the rows of the weights and the input. We modify the usual attention layer to also include the one-hot position embeddings \(\{\mathbf{p}_{i}^{\text{Tint}}\in\mathbb{R}^{T_{\text{sim}}}\}_{i\leq T_{\text{sim}}}\). Here, we illustrate the self-attention layer with a single attention head and defer the definition of a multi-head attention layer to Definition B.1. **Definition 2.3** (TinT self-attention with single head).: For parameters \(\{\mathbf{W}_{Q}^{\text{Tint}},\mathbf{W}_{K}^{\text{Tint}},\mathbf{W}_{V}^{\text{Tint}} \in\mathbb{R}^{D_{\text{sim}}\times D_{\text{aux}}}\}\), \(\{\mathbf{W}_{Q}^{p},\mathbf{W}_{K}^{p},\mathbf{W}_{V}^{p}\in\mathbb{R}^{D_{\text{sim}} \times T_{\text{sim}}}\}\), the self-attention layer with single attention head and a function \(f_{\text{attn}}:\mathbb{R}^{T_{\text{sim}}}\rightarrow\mathbb{R}^{T_{\text{ sim}}}\) takes a sequence \(\{\widehat{\mathbf{e}}_{t}\in\mathbb{R}^{D_{\text{sim}}}\}_{t\leq T_{\text{sim}}}\) as input and outputs \(\{\widehat{\mathbf{e}}_{t}\in\mathbb{R}^{D_{\text{sim}}}\}_{t\leq T_{\text{sim}}}\), such that \[\widehat{\mathbf{e}}_{t}=\sum_{j\leq T_{\text{sim}}}a_{t,j}\mathbf{v}_{j}, \qquad\text{ where }a_{t,j}=f_{\text{attn}}(\mathbf{K}\mathbf{q}_{t})_{j},\] \[\mathbf{q}_{t}=\mathbf{W}_{Q}^{\text{Tint}}\widehat{\mathbf{e}}_{t}+\mathbf{W}_{Q }^{p}\mathbf{p}_{t},\qquad\mathbf{k}_{t}=\mathbf{W}_{K}^{\text{Tint}}\widehat{\mathbf{e}}_{t} +\mathbf{W}_{K}^{p}\mathbf{p}_{t},\qquad\mathbf{v}_{t}=\mathbf{W}_{V}^{\text{Tint}}\widehat{ \mathbf{e}}_{t}+\mathbf{W}_{V}^{p}\mathbf{p}_{t}\text{ for all }t\leq T_{\text{sim}},\] and \(\mathbf{K}\in\mathbb{R}^{T_{\text{sim}}\times D_{\text{sim}}}\) is the key matrix defined with its rows as \(\{\mathbf{k}_{t}\}_{t\leq T_{\text{sim}}}\). \(\mathbf{q}_{t},\mathbf{k}_{t},\mathbf{v}_{t}\) are referred to as the query, key, and value vectors at position \(t\), and \(a_{t,j}\) is referred to as the attention score between tokens at position \(t\) and \(j\). 
\(f_{\text{attn}}\) can be either linear or softmax functions, and the corresponding layers are referred to as linear and softmax self-attention respectively. _Remark 2.4_.: In order to compute and backpropagate the loss during inference, the self-attention layers in TinT need to be non-causal on the first few tokens of the input.3 Bidirectional attention is used in TinT modules performing backpropagation on auxiliary self-attention layers (Section 2.5). For gradient updates, the prefix embeddings depend on the gradients in the token embeddings. Explicit masks apply bidirectional attention to the input. Footnote 3: Similar prefix models have been developed in [33; 23]. **TinT Linear Forward module:** A first attempt would be to use different attention heads to operate on different rows; however, this only uses \(S\) attention heads whereas large Transformers usually have many more heads. Moreover, the output from the multi-head attention would need to be reorganized before it could be reduced efficiently via summation. Such rearranging requires constructing a \(D_{\text{sim}}\times D_{\text{sim}}\) linear layer. We instead parallelize across more attention heads and ensure the resulting output can easily be compiled to produce the matrix-vector product. The key idea is that we shard each individual weight into \(S^{\prime}\) parts. We set \(S\) and \(S^{\prime}\) such that \(H_{\text{sim}}=S\times S^{\prime}\), effectively computing dot products using all the attention heads available to the TinT. Please see Figure 2. \(H_{\text{sim}}\)**-split linear operations:** The output resulting from the attention module has shape \((D_{\text{sim}}/H_{\text{sim}})\times H_{\text{sim}}\) and is sparse. In order to complete linear forward pass, we need to sum and aggregate the appropriate terms to form a \(D_{\text{sim}}\)-length vector with \(\mathbf{W}\mathbf{x}\) in the first \(D_{\text{aux}}\) coordinates. Straightforwardly summing along rows or columns results in the incorrect terms being aggregated, since the model was sharded. Rearranging the entire matrix to ensure that there are no such conflicting terms summed requires an additional \(D_{\text{sim}}\times D_{\text{sim}}\) linear layer. But we can save a factor \(H_{\text{sim}}\) in the number of parameters via efficient operations that leverage the local structure of the attention output. We space out the results across the \(D_{\text{sim}}/H_{\text{sim}}\) rows and then sum along the \(H_{\text{sim}}\) columns to get the desired \(D_{\text{sim}}\)-length vector. This requires \(D_{\text{sim}}^{2}/H_{\text{sim}}+D_{\text{sim}}H_{\text{sim}}\) parameters. Please see Appendix C.1 for more details. Note that this methodology of leveraging local structure is also useful in compressing the constructions for other linear operations in the TinT, e.g. the TinT's backpropagation modules of Layer Normalization and activation operations (Appendices E and F). **Linear Attention:** The above construction uses a linear attention mechanism instead of the canonical softmax. In the following theorem, we show that any linear attention module with bounded entries can be approximated by softmax attention with a few additional parameters. To avoid accumulating such errors, we continue to use linear attention in TinT in place of softmax attention, wherever necessary. **Theorem 2.5** (Informal, c.f. 
Theorem B.2).: _For any \(\epsilon>0\), \(B>0\), and a linear attention module with \(H\) attention heads and bounded parameters, there exists a softmax attention module with \(2H_{\text{sim}}\) attention heads and \(4\times\) additional parameters, such that on every sequence of inputs with norm bounded by \(B\), the output sequences of the softmax attention and the linear attention differ by \(\mathcal{O}(\epsilon)\) at each position._ ### First order gradients for layer normalization Below, we show that computing exact gradients for layer normalization is expensive, so we efficiently approximate backpropagation by computing the dominating term. **Definition 2.6**.: [Layer Normalization] Define a normalization function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) that performs \(f(\mathbf{x})=(\mathbf{x}-\mu)/\sigma\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\mathbf{x}\), respectively. Then, layer normalization with parameters \(\gamma,\mathbf{b}\in\mathbb{R}^{D_{\text{sim}}}\) takes as input \(\mathbf{x}\in\mathbb{R}^{D_{\text{sim}}}\) and outputs \(\mathbf{y}\in\mathbb{R}^{D_{\text{sim}}}\), which is computed as \(\mathbf{z}=f(\mathbf{x})\), \(\mathbf{y}=\gamma\odot\mathbf{z}+\mathbf{b}\). **Definition 2.7**.: [Exact Gradient for Layer Normalization] Using notations in Definition 2.6, given the gradient of the loss w.r.t the output of the Layer Normalization \(\partial_{\mathbf{y}}\), backpropagation computes \(\partial_{\mathbf{x}}\) as \[\partial_{\mathbf{x}}=(\partial_{\mathbf{z}}-D_{\text{aux}}-1\sum_{i=1}^{D_{\text{aux} }}\partial_{z_{i}}-\langle\partial_{\mathbf{z}},\mathbf{z}\rangle\mathbf{z})/\sigma\qquad \partial_{\mathbf{z}}=\gamma\odot\partial_{\mathbf{y}}\] The exact backpropagation operation is expensive because computing \(\langle\partial_{\mathbf{z}},\mathbf{z}\rangle\mathbf{z}\) requires a sequential operation involving at least two MLP layers, so we approximate it with a first-order Taylor expansion, which we formally prove is entry-wise close to the true gradient. **Definition 2.8**.: [\(\epsilon\)-approximate Layer Normalization Gradient] With notations defined above, this layer takes \(\partial_{\mathbf{y}},\mathbf{x}\in\mathbb{R}^{D_{\text{aux}}}\) as input and outputs \(\widehat{\partial_{\mathbf{x}}}=\frac{1}{\epsilon}(f(\mathbf{x}+\epsilon\gamma\odot \partial_{\mathbf{y}})-f(\mathbf{x}))\). **Theorem 2.9** (Informal, c.f. Thm E.1).: _With bounded \(\ell_{2}\)-norms of \(\mathbf{x}\), \(\partial_{\mathbf{y}}\), \(\gamma,\mathbf{b}\), \(\left\|\partial_{\mathbf{x}}-\widehat{\partial_{\mathbf{x}}}\right\|_{\infty}\leq \mathcal{O}(\epsilon)\)._ A first-order approximation can only be close to the gradient with symmetric Jacobian. Hence, we cannot apply such an approximation to backpropagate the linear and self-attention layers. We use a TinT module with 2 linear layers, separated by Group Normalization [48], to compute \(\widehat{\partial_{\mathbf{x}}}\), leading to a \(4\times\) parameter reduction compared to computing the exact gradient. ### Gradient backpropagation through values in self-attention in the auxiliary For simplicity, we present the results for a self-attention layer with a single attention head and defer multi-head attention to the appendix (Appendix D). The self-attention in the auxiliary model is similar to that of the TinT (Definition 2.3) but does not use a position vector. 
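Before stating the auxiliary self-attention definitions, the layer-normalization approximation of Section 2.4 can be checked numerically. The snippet below compares the standard layer-norm input gradient with the first-order estimate of Definition 2.8; the dimension, random inputs, and \(\epsilon\) are placeholders, and this sketch is not part of the TinT construction itself.

```python
import numpy as np

def f(x):
    """Normalization f(x) = (x - mean) / std (Definition 2.6; gamma and bias applied outside)."""
    return (x - x.mean()) / x.std()

def exact_grad(x, gamma, dy):
    """Standard layer-norm input gradient, with dz = gamma * dy."""
    d, sigma, z = x.shape[0], x.std(), f(x)
    dz = gamma * dy
    return (dz - dz.mean() - np.dot(dz, z) * z / d) / sigma

def approx_grad(x, gamma, dy, eps=1e-3):
    """First-order estimate of Definition 2.8: (f(x + eps*gamma*dy) - f(x)) / eps."""
    return (f(x + eps * gamma * dy) - f(x)) / eps

rng = np.random.default_rng(0)
d = 768
x, gamma, dy = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
gap = np.abs(exact_grad(x, gamma, dy) - approx_grad(x, gamma, dy)).max()
print(f"max entrywise gap: {gap:.2e}")   # shrinks roughly linearly as eps -> 0
```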
**Definition 2.10** (Auxiliary model softmax self-attention).: A self-attention layer with parameters \(\{\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\}\) takes a sequence \(\{\mathbf{x}_{t}\}_{t\leq T_{\text{aux}}}\) and outputs a sequence \(\{\mathbf{y}_{t}\}_{t\leq T_{\text{aux}}}\), such that \[\mathbf{y}_{t}=\sum_{j}a_{t,j}\mathbf{v}_{j},\qquad\text{with }a_{t,j}=\operatorname{ softmax}(\mathbf{K}\mathbf{q}_{t})_{j},\quad\mathbf{q}_{t}=\mathbf{W}_{Q}\mathbf{x}_{t},\quad\mathbf{k}_{t} =\mathbf{W}_{K}\mathbf{x}_{t},\quad\mathbf{v}_{t}=\mathbf{W}_{V}\mathbf{x}_{t},\] for all \(t\leq T_{\text{aux}}\), and \(\mathbf{K}\in\mathbb{R}^{T_{\text{aux}}\times D_{\text{aux}}}\) defined with rows \(\{\mathbf{k}_{t}\}_{t=1}^{T_{\text{aux}}}\). **Definition 2.11** (Exact gradient for softmax self-attention).: Given the gradients of the loss w.r.t the output sequence \(\{\partial_{\mathbf{y}_{t}}\}_{t=1}^{T}\), backpropagation computes \(\{\partial_{\mathbf{x}_{t}}\}_{t=1}^{T}\), with \[\partial_{\mathbf{x}_{t}} =\mathbf{W}_{Q}^{\top}\partial_{\mathbf{q}_{t}}+\mathbf{W}_{K}^{\top} \partial_{\mathbf{k}_{t}}+\mathbf{W}_{V}^{\top}\partial_{\mathbf{v}_{t}},\qquad\partial_{ \mathbf{v}_{t}}=\sum_{j}a_{j,t}\partial_{\mathbf{y}_{j}},\] \[\partial_{\mathbf{q}_{t}} :=\sum_{j}a_{t,j}((\partial_{\mathbf{y}_{t}})^{\top}\mathbf{v}_{j})[\mathbf{k }_{j}-\sum_{j^{\prime}}a_{t,j^{\prime}}\mathbf{k}_{j^{\prime}}],\qquad\partial_{ \mathbf{k}_{t}}:=\sum_{j}a_{t,j}((\partial_{\mathbf{y}_{t}})^{\top}(\mathbf{v}_{j}-\sum_ {j^{\prime}}a_{t,j^{\prime}}\mathbf{v}_{j^{\prime}}))\mathbf{q}_{j}\] for all \(t\leq T_{\text{aux}}\), with \(\mathbf{K}\in\mathbb{R}^{T_{\text{aux}}\times D_{\text{aux}}}\) defined with rows \(\{\mathbf{k}_{t}\}_{t=1}^{T_{\text{aux}}}\). The computation of \(\partial_{\mathbf{q}_{t}}:=\sum_{j}a_{t,j}((\partial_{\mathbf{y}_{t}})^{\top}\mathbf{v}_{j })[\mathbf{k}_{j}-\sum_{j^{\prime}}a_{t,j^{\prime}}\mathbf{k}_{j^{\prime}}]\) (and similarly \(\partial_{\mathbf{k}_{t}}\)) requires at least 2 self-attention layers and an MLP layer, since we must compute and multiply attention scores \(a_{t,j}\) and \((\partial_{\mathbf{y}_{t}})^{\top}\mathbf{v}_{j}\) before computing \(\partial_{\mathbf{q}_{t}}\). Thus, we only update the self-attention using the gradients w.r.t. \(\mathbf{v}_{t}\). **Definition 2.12** (Approximate Self-Attention Backpropagation).: With the notations defined above, this layer takes a sequence \(\{\partial_{\mathbf{y}_{t}}\in\mathbb{R}^{D_{\text{aux}}}\}_{t\leq T_{\text{aux}}}\) and \(\{\mathbf{x}_{t}\in\mathbb{R}^{D_{\text{aux}}}\}_{t\leq T_{\text{aux}}}\) as input and outputs \(\{\widehat{\partial_{\mathbf{x}_{t}}}\}_{t\leq T}\), with \(\widehat{\partial_{\mathbf{x}_{t}}}=\mathbf{W}_{V}^{\top}\partial_{\mathbf{v}_{t}}\), where \(\partial_{\mathbf{v}_{t}}=\sum_{j}a_{j,t}\partial_{\mathbf{y}_{j}}\). We formally show that when the attention head for each position pays a lot of attention to a single token (i.e., behaves like hard attention [29]), \(\widehat{\partial_{\mathbf{x}_{t}}}\) is entry-wise close to \(\partial_{\mathbf{x}_{t}}\) for all \(t\). Computing \(\{\widehat{\partial_{\mathbf{x}_{t}}}\}_{t=1}^{T}\) instead of \(\{\partial_{\mathbf{x}_{t}}\}_{t=1}^{T}\) induces a \(5\times\) parameter reduction. **Theorem 2.13** (Informal, c.f. 
Theorem D.5).: _If on input sequence \(\{\mathbf{x}_{t}\}_{t\leq T_{\text{aux}}}\), the attention scores are \(\varepsilon\)-close to a hard-attention at each position, then for all \(t\), \(\left\|\partial_{\mathbf{x}_{t}}-\widehat{\partial_{\mathbf{x}_{t}}}\right\|\leq\mathcal{O }(\varepsilon)\)._ **Bidirectional prefix attention:** Computing \(\partial_{\mathbf{v}_{t}}\) is similar to computing \(\mathbf{y}_{t}\), except that the attention scores are transposed due the chain rule being applied to a causal auxiliary model: tokens must attend to gradients of future tokens during backpropagation. Therefore, this module requires a bidirectional attention mask in order to compute the self-attention gradients. ### Parameter sharing in the Tint Consider the self-attention layer(Section 2.5). The relevant Tint module performs linear operations with \(\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\) to compute query, key, and value vectors at each position \(t\) (Definition 2.2) and hence can be simulated with the Linear Forward module (Section 2.3). We additionally leverage parameter sharing to apply a single Linear Forward module for each of the three computations, changing only the prefix embeddings to correspond to \(\mathbf{W}_{Q}\), \(\mathbf{W}_{K}\), or \(\mathbf{W}_{V}\). Applying the same structure to feed-forward linear layers results in a \(4\times\) reduction in the number of necessary modules (Appendix H). ## 3 Experiments Our approach provides constructions for diverse variants of pre-trained language models. Table 1 highlights many types of modules and the required size and computation for each. The size of a constructed model is influenced by various factors, including the number of layers, and embedding dimension in the auxiliary. We demonstrate the effectiveness of constructed models through language modeling and in-context learning tasks. We evaluate the TinT construction on OPT-125m model. ### Experimental Setup **Tasks:** To verify that our construction performs valid internal tuning, we perform experiments in language modeling and many downstream tasks in zero-shot and few-shot settings. For language \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{4}{c}{GPT2} & \multicolumn{4}{c}{OPT-125m} \\ \cline{2-6} Training portion & \(30\%\) & \(50\%\) & \(70\%\) & \(90\%\) & \(30\%\) & \(50\%\) & \(70\%\) & \(90\%\) \\ \hline Vanilla Model & 25.6 & 24.9 & 24.5 & 23.3 & 29.6 & 28.8 & 28.0 & 28.0 \\ Dyna. Eval & 24.9 & 24.0 & 23.5 & 22.2 & 29.0 & 28.2 & 27.4 & 27.4 \\ TinT & 25.1 & 24.3 & 23.8 & 22.6 & 29.3 & 28.4 & 27.5 & 27.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Language modeling results on WikiText-103. We use \(30\%,50\%,70\%\) and \(90\%\) of sequences for training in dynamic eval and TinT and the rest of the sequence for evaluation. TinT improves upon the auxiliary model perplexities by \(0.3-0.7\) absolute on average. The small perplexity difference between the TinT and dynamic evaluation suggests that the approximations introduced in the descent algorithm (Sections 2.4 and 2.5) have minimal impact on TinT’s performance. 
\begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{Module Size} \\ \cline{2-5} Module Name & Forward & Backward & Descent & Total \\ \hline Linear layer & \(Q\) & \(Q\) & \(Q\) & \(3Q\) \\ Layer norms & \(Q\) & \(Q+2D_{\text{sim}}H_{\text{sim}}\) & \(Q\) & \(3Q+2D_{\text{sim}}H_{\text{sim}}\) \\ Self-Attention & \(2Q\) & \(2Q\) & \(2Q\) & \(6Q\) \\ Activation & \(Q_{split}\) & \(2D_{\text{sim}}H_{\text{sim}}\) & \(0\) & \(Q_{split}+2D_{\text{sim}}H_{\text{sim}}\) \\ \hline Self-Attention block & \(4Q\) & \(4Q+2D_{\text{sim}}H_{\text{sim}}\) & \(4Q\) & \(12Q+2D_{\text{sim}}H_{\text{sim}}\) \\ Feed-forward block & \(3Q\) & \(3Q+4D_{\text{sim}}H_{\text{sim}}\) & \(3Q\) & \(9Q+4D_{\text{sim}}H_{\text{sim}}\) \\ Transformer block & \(7Q\) & \(7Q+6D_{\text{sim}}H_{\text{sim}}\) & \(7Q\) & \(21Q+6D_{\text{sim}}H_{\text{sim}}\) \\ Transformer & \(7QL+LQ_{split}\) & \((7Q+6D_{\text{sim}}H_{\text{sim}})L\) & \(7QL\) & \((21Q+6D_{\text{sim}}H_{\text{sim}})L\) \\ \hline OPT-125m & 0.4B & 0.4B & 0.4B & 1.2B \\ OPT-350m & 1.2B & 1.1B & 1.1B & 3.4B \\ OPT-1.3B & 3.7B & 3.6B & 3.5B & 10.8B \\ OPT-2.7B & 7.4B & 7.2B & 7.2B & 21.8B \\ \hline \hline \end{tabular} \end{table} Table 1: Number of parameters of TinT for the forward, backward, and gradient update operations on various modules. For simplicity, we have ignored biases in the following computation. We set \(H_{\text{sim}}=12\) for OPT-125M and \(H_{\text{sim}}=16\) for the other models, \(D_{\text{sim}}=4D_{\text{aux}}\) for all the models, and \(T_{\text{sim}}=T_{\text{aux}}+K\), with \(T_{\text{aux}}=2048\) for opt models, and \(K=D_{\text{aux}}/4\). \(Q=4Q_{split}+3T_{\text{sim}}D_{\text{sim}}/H_{\text{sim}}\), where \(Q_{split}=\frac{1}{H_{\text{sim}}}(D_{\text{sim}})^{2}+H_{\text{sim}}D_{ \text{sim}}\), denotes the number of parameters in a TinT Linear Forward module (Section 2.3). modeling, we use the Wikitext-103 [24] dataset. Furthermore, we evaluate our construction on \(7\) text classification tasks, namely SST-2 [38], MR [27], CR [17], MPQA [47], Amazon Polarity [53], AGNews [53], and Subj [28]. Our methodology draws inspiration from dynamic evaluation [19], where a segment of the input sequence is used to update the auxiliary model, and the remaining portion is used to assess the updated auxiliary model's performance. **Model:** We evaluate a TinT model that tunes an OPT-125m pre-trained model internally. We compare with the vanilla OPT-125m model and its dynamic evaluation variant. We employ the domain-based calibration approach [16] to mitigate label bias from OPT-125m. **Settings:** We explore the following settings in downstream tasks: 1) Single and Multi.: We finetune the auxiliary model using either a single example or concatenated examples (ICL-style) within each input; 2) Label loss and full-context loss: We finetune on the loss either from only label words or the entire context (Figure 3). We evaluate both zero-shot and few-shot settings, using the context of the evaluation example and 32 training examples for internal learning respectively. ### Verification of TinT In language modeling (Table 2), the perplexity decreases with the utilization of TinT, especially as the training proportion increases. For downstream tasks (Table 3), we observe that explicit internal training within TinT surpasses vanilla zero-shot evaluation and in-context learning, even with a limited budget of a single forward pass. 
Moreover, TinT achieves a performance comparable to dynamic evaluation, indicating that the approximations made during its construction largely preserve its effectiveness for fine-tuning. Though calibration may not always be beneficial in every setting, 4 we observe that the efficacy of TinT remains comparable to dynamic evaluation. Additionally, we find that TinT outperforms or is on par with its similar-sized pre-trained model (opt-1.3b) except in the calibrated few-shot setting, suggesting that the pre-trained models could benefit from a similar internal training procedure. Please refer to Appendix J for more details on experiments. Footnote 4: Such inconsistencies in the calibration method have been observed in previous works [5]. ## 4 Related Work **Interpretability:** One area of study, known as mechanistic interpretability, reverse-engineers the algorithms simulated by these models [12; 26; 43; 25; 8]. These works aim to understand local patterns, e.g. activation and attention patterns, to get insight into overall behavior. Another research direction aims to use declarative programs to describe and compile transformer models, enabling interpretable comprehension of their functioning [46; 21]. **Transformers as Turing Machines:** Several recent works have aimed to understand the expressivity of transformers. Perez et al. [29][31] showed that Transformers with hard attention are Turing complete, with Wei et al. [44] showing statistically meaningful transformer constructions for Turing machines, taking statistical learnability into account. In 2.2, we point out that this scheme often results in gigantic constructions. To understand the behavior of moderate transformer architectures, Figure 3: This illustration showcases different settings in few-shot learning (\(k=3\)) using TinT. The **Single** mode (left) has one example for each input, and the auxiliary model is updated with a batch of inputs. The **Multi.** mode (right) concatenates all examples to form a single input. For **Label loss**, only underlined label words are used for internal training, while **full context loss** includes all tokens. other works have investigated specific classes of algorithms, e.g. bounded-depth Dyck languages [50], modular prefix sums [2], adders [25], regular languages [4], and sparse logical predicates [11]. Liu et al. [22] provide a unified theory on understanding automata-like mechanisms within transformers. **Transformers as Fast Weight Programmers (FWPs):** FWPs enable input-dependent weight updates during inference. Ba et al. [3] show a connection between self-attention and FWPs. Schlag et al. [36], Irie et al. [18] show that self-attention layers can update parameters of linear and recurrent networks during input processing. Clark et al. [9] propose Fast Weights Layers (FWL), a component added to a frozen pre-trained model that can be efficiently fine-tuned as the model processes the sequence. **Alternative Explanations for ICL:** A complementary direction is to explain ICL in the Bayesian framework. Xie et al. [49] model pretraining data as a mixture of HMMs and cast ICL identifying one of these mixture components. Hahn and Goyal [14] improved upon this work by modeling language using a compositional grammar, and propose ICL as a recombination of those compositional operations. On the other hand, careful experiments in Chan et al. [6] show that several data distributional properties (e.g. Zipf's law) drive the in-context ability of trained transformers. 
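As a concrete point of reference before concluding, the dynamic-evaluation-style protocol that TinT emulates in Section 3 can be written in a few lines: fine-tune the auxiliary language model on the first portion of each sequence and score the held-out remainder. The sketch below uses the Hugging Face `transformers` interface to OPT-125m purely for illustration; the split fraction, optimizer, number of steps, and learning rate are placeholder assumptions, and this is not the authors' implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"                 # auxiliary model considered in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def dynamic_eval_ppl(text, train_frac=0.5, lr=1e-4, steps=1):
    """Fine-tune on the first `train_frac` of the tokens, then report perplexity on the rest.

    Table 2 varies the training portion (30-90%); TinT corresponds to a single
    update step. Reload the model to reset the weights between sequences.
    """
    ids = tok(text, return_tensors="pt").input_ids   # expects a reasonably long passage
    split = int(ids.shape[1] * train_frac)
    train_ids, eval_ids = ids[:, :split], ids[:, split:]

    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        loss = model(train_ids, labels=train_ids).loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        return model(eval_ids, labels=eval_ids).loss.exp().item()

print(dynamic_eval_ppl("A long WikiText-103 passage would go here ..."))
```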
## 5 Conclusion TinT presents a parameter-efficient construction capable of simulating gradient descent on an internal transformer model over the course of an inference pass. Using fewer than 2 billion parameters it can simulate fine-tuning a 125 million parameter transformer (e.g., GPT-2) internally, dramatically reducing the scale required by previous works by several orders of magnitude. Experiments in language modeling and in-context learning demonstrate that the approximations designed for efficiency purposes preserve the fine-tuning ability of TinT. The approximations and architectural modifications in TinT have potential value for future architectural development and applications such as pre-training and instruction tuning. Additionally, our work emphasizes the ability of moderate-scale architectures to encode intricate subroutines, enabling the training of advanced auxiliary models during inference. This has implications for interpretability and AI alignment research. While our work represents a significant improvement over previous simulations in terms of auxiliary model complexity, similar to prior research in this area, our insights into existing pre-trained models \begin{table} \begin{tabular}{l c|c c c c c c c c} \hline \hline **Model** & **k** & **Subj** & **AGNews** & **SST2** & **CR** & **MR** & **MPQA** & **Amazon** & **Avg.** \\ \hline \multicolumn{10}{c}{_Without Calibration_} \\ \hline OPT-125m & 0 & 64.0 & 66.0 & 70.5 & 64.5 & 71.0 & 68.0 & 76.5 & 68.6 \\ OPT-13b & 0 & 59.0 & 55.5 & 54.0 & 50.5 & 52.5 & 74.0 & 57.0 & 57.5 \\ OPT-125m dyna. eval & 0 & 71.0 & 67.0 & 79.5 & 71.5 & 70.0 & 68.0 & 85.5 & 73.2 \\ OPT-125m Tint & 0 & 67.5 & 66.0 & 76.5 & 69.0 & 76.0 & 70.5 & 78.5 & 72.0 \\ \hline OPT-125m & 32 & 58.7(\({}_{4.9}\)) & 33.7(\({}_{8.4}\)) & 50.8(\({}_{1.2}\)) & 51.3(\({}_{1.9}\)) & 50.0(\({}_{0.0}\)) & 54.3(\({}_{2.5}\)) & 55.0(\({}_{0.7}\)) & 50.5(\({}_{1.9}\)) \\ OPT-13b & 32 & 74.2(\({}_{8.1}\)) & 71.3(\({}_{5.3}\)) & 89.8(\({}_{5.6}\)) & 71.5(\({}_{4.5}\)) & 68.3(\({}_{6.1}\)) & 81.7(\({}_{3.3}\)) & 70.3(\({}_{0.9}\)) & 75.3(\({}_{0.4}\)) \\ OPT-125m dyna. eval & 32 & 78.0(\({}_{1.4}\)) & 66.7(\({}_{1.6}\)) & 71.5(\({}_{1.4}\)) & 73.7(\({}_{1.3}\)) & 72.0(\({}_{0.0}\)) & 80.7(\({}_{0.6}\)) & 79.8(\({}_{0.2}\)) & 74.6(\({}_{2.7}\)) \\ OPT-125m TinT & 32 & 82.3(\({}_{2.7}\)) & 69.3(\({}_{0.0}\)) & 73.7(\({}_{0.8}\)) & 75.7(\({}_{1.9}\)) & 72.3(\({}_{1.2}\)) & 83.2(\({}_{1.0}\)) & 78.2(\({}_{0.2}\)) & 76.4(\({}_{0.7}\)) \\ \hline \multicolumn{10}{c}{_With Calibration_} \\ \hline OPT-125m & 0 & 64.0 & 66.0 & 53.0 & 54.5 & 52.5 & 55.5 & 58.0 & 57.6 \\ OPT-1.3b & 0 & 73.5 & 61.5 & 57.5 & 53.0 & 54.5 & 79.5 & 61.0 & 62.9 \\ OPT-125m dyna. eval & 0 & 62.5 & 66.0 & 60.5 & 53.5 & 54.0 & 56.5 & 74.5 & 61.1 \\ OPT-125m TinT & 0 & 64.0 & 66.0 & 56.5 & 59.0 & 53.5 & 62.0 & 66.5 & 61.1 \\ \hline OPT-125m & 32 & 83.5(\({}_{2.4}\)) & 40.7(\({}_{10.4}\)) & 50.8(\({}_{0.8}\)) & 67.7(\({}_{4.1}\)) & 57.7(\({}_{10.8}\)) & 79.2(\({}_{8.4}\)) & 56.0(\({}_{8.1}\)) & 62.2(\({}_{2.7}\)) \\ OPT-13b & 32 & 81.8(\({}_{1.9}\)) & 66.2(\({}_{1.3}\)) & 93.7(\({}_{1.0}\)) & 82.8(\({}_{2.8}\)) & 91.3(\({}_{1.9}\)) & 83.5(\({}_{2.5}\)) & 92.0(\({}_{2.9}\)) & 80.2(\({}_{0.7}\)) \\ OPT-125m dyna. 
eval & 32 & 87.2(\({}_{2.0}\)) & 67.2(\({}_{0.6}\)) & 72.8(\({}_{1.9}\)) & 73.3(\({}_{2.6}\)) & 66.7(\({}_{7.4}\)) & 81.5(\({}_{3.7}\)) & 70.3(\({}_{2.1}\)) & 74.1(\({}_{2.9}\)) \\ OPT-125m TinT & 32 & 85.3(\({}_{1.9}\)) & 67.3(\({}_{0.6}\)) & 71.8(\({}_{3.8}\)) & 70.7(\({}_{1.9}\)) & 63.7(\({}_{0.0}\)) & 83.5(\({}_{1.6}\)) & 77.5(\({}_{0.2}\)) & 74.3(\({}_{1.4}\)) \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot and few-shot in-context learning results across \(7\) downstream tasks. All the few-shot results are averaged over three training seeds. TinT consistently surpasses its auxiliary model and achieves comparable performance to dynamic evaluation. TinT outperforms auxiliary models by \(3-4\%\) and \(12-16\%\) absolute points on average in \(0\)-shot and \(32\)-shot experiments respectively. TinT performs competitively with a similar-sized pre-trained model (opt-1.3b) in both \(0\)-shot and \(32\)-shot settings. We show the standard deviation for few-shot settings in parentheses. are limited. Furthermore, we have not yet examined potential biases that may arise in the auxiliary models due to one-step gradient descent. We plan to investigate these aspects in future work.
2306.07809
Low-Resource White-Box Semantic Segmentation of Supporting Towers on 3D Point Clouds via Signature Shape Identification
Research in 3D semantic segmentation has been increasing performance metrics, like the IoU, by scaling model complexity and computational resources, leaving behind researchers and practitioners that (1) cannot access the necessary resources and (2) do need transparency on the model decision mechanisms. In this paper, we propose SCENE-Net, a low-resource white-box model for 3D point cloud semantic segmentation. SCENE-Net identifies signature shapes on the point cloud via group equivariant non-expansive operators (GENEOs), providing intrinsic geometric interpretability. Our training time on a laptop is 85~min, and our inference time is 20~ms. SCENE-Net has 11 trainable geometrical parameters and requires fewer data than black-box models. SCENE--Net offers robustness to noisy labeling and data imbalance and has comparable IoU to state-of-the-art methods. With this paper, we release a 40~000 Km labeled dataset of rural terrain point clouds and our code implementation.
Diogo Lavado, Cláudia Soares, Alessandra Micheletti, Giovanni Bocchi, Alex Coronati, Manuel Silva, Patrizio Frosini
2023-06-13T14:36:06Z
http://arxiv.org/abs/2306.07809v1
Low-Resource White-Box Semantic Segmentation of Supporting Towers on 3D Point Clouds via Signature Shape Identification ###### Abstract Research in 3D semantic segmentation has been increasing performance metrics, like the IoU, by scaling model complexity and computational resources, leaving behind researchers and practitioners that (1) cannot access the necessary resources and (2) do need transparency on the model decision mechanisms. In this paper, we propose SCENE-Net, a low-resource white-box model for 3D point cloud semantic segmentation. SCENE-Net identifies signature shapes on the point cloud via group equivariant non-expansive operators (GENEOs), providing intrinsic geometric interpretability. Our training time on a laptop is 85 min, and our inference time is 20 ms. SCENE-Net has 11 trainable geometrical parameters, and requires fewer data than black-box models. SCENE-Net offers robustness to noisy labeling and data imbalance and has comparable IoU to state-of-the-art methods. With this paper, we release a 40 000 Km labeled dataset of rural terrain point clouds and our code implementation. ## 1 Introduction Powerful Machine Learning (ML) algorithms applied to critical applications, such as autonomous driving or environmental protection, highlight the importance of (1) ease of implementation for non-tech organizations entailing data efficiency and general-purpose hardware, and (2) transparent models regarding their decision-making process, thus ensuring a responsible deployment [1, 2, 3]. Most methods in Explainable AI (XAI) provide _post hoc_ explanations to black-box models (i.e., algorithms unintelligible to humans). However, these are often limited in terms of their model fidelity [1, 4], that is, they provide explanations for the predictions of the underlying model (e.g., heatmaps [5, 6] and input masks [7, 8]), instead of providing a mechanistic understanding of its inner-workings. Conversely, intrinsic interpretability methods (i.e., white-box models) provide an understanding of their decisions through their architecture and parameters [4]. Transparency is achieved by enforcing constraints that reflect domain knowledge and simple designs [1; 9], which can result in a loss in performance when compared to complex black-boxes. We propose a novel white-box model, **SCENE-Net**, that provides intrinsic geometric interpretability by leveraging on group equivariant non-expansive operators (GENEOs) [10; 11]. Unlike traditional interpretable models, GENEOs are complex observers parameterized with meaningful geometric features. In our case, task dependency comes as a collaboration of Machine Learning and electrical utility teams to transparently segment power line supporting towers on 3D point clouds to inspect extensive power grids automatically. Electrical grid operators have the critical job of assessing the risk of contact between the power grid and its environment to prevent failures and forest fires. These grids spread over countries and even continents, thus making careful inspection an important and challenging problem. Often, this task is based on LiDAR large-scale point clouds with high-point density, no sparsity, and no object occlusion. However, the captured point clouds are quite extensive and mostly composed of rural areas. These data are different from large urban datasets for autonomous driving [12; 13] due to the point of view, point density, occlusion, and extension. 
To bootstrap this work, we created a labeled dataset of 40 000 Km of rural and forest terrain, and the **T**ransmission **S**ystem, named **TS40K**. These point clouds show noisy labels and class imbalance (see Appendix B for details), and our SCENE-Net is robust to labeling noise as it encodes the geometric properties we need to detect. Moreover, practitioners in high-risk tasks, such as autonomous driving and power grid inspection, are often limited in terms of resources, namely computational power and available data, to train and deploy state-of-the-art models [14; 15]. This clashes with the current trend in DL to scale up models in both complexity and needed resources in order to maximize performance, for example, state-of-the-art 3D semantic segmentation models [16; 17; 18; 19; 20] follow this trend. Our model, SCENE-Net, maintains a simple design conventional to white-boxes (it is composed of 11 trainable parameters) that allows for resource-efficient training while taking advantage of powerful Deep Learning (DL) strategies, such as convolutions. By assessing our model on the SemanticKITTI benchmark [13], we show that SCENE-Net achieves performance on-par with state-of-the-art methods in pole segmentation. Our main contributions are: * SCENE-Net is the first white-box model for 3D semantic segmentation on large-scale landscapes, including non-urban environments (Section 4); * The architecture of SCENE-Net has fewer trainable parameters than traditional methods and is resource-efficient in both data and computational requirements (Section 5); * Empirically, SCENE-Net is intrinsically and posthoc interpretable and robust under noisy labels, with au par IoU (Section 5); Figure 1: Signature shapes for power line supporting tower detection. For our TS40K sample shown in (a), SCENE-Net accurately detects the body of the tower (b), while a comparable CNN has a large false positive area in the vegetation (c). Our model is interpretable with 11 trainable geometric parameters whereas the CNN has a total of 2190 parameters. The ground and power lines are mislabeled in the ground truth. * We present TS40K, a new 3D point cloud dataset covering 40 000 Km of non-urban terrain, with more than 9000 million 3D points (details in Appendix B); ## 2 Related Work Point Cloud Semantic Segmentation.Processing point clouds is a challenging task due to their unstructured nature and invariance to permutations. Voxel-based strategies endow point clouds with structure in order to apply 3D convolutions [21; 22; 23]. However, memory footprint is too large for high-resolution voxel grids, while low resolution entails information loss. Subsequent methods try to answer these issues by employing sparse convolutions [24; 25] and octree-based CNNs [26; 27]. Point-based models take point clouds directly as input. The work of PointNet [28] and PointNet++ [29] inspired the use of point sub-sampling strategies with feature aggregation techniques to learn local features on each sub-point [30; 31]. Convolution-based methods [32; 33; 34; 16; 35] demonstrate good performance on 3D semantic segmentation benchmarks, such as _SemanticKITTI_[13] and SensatUrban [36]. Following this strategy, recent methods exploit multi-representation fusion, i.e, they combine different mediums (voxel grids, raw point clouds, and projection images) to boost feature retrieval [37; 17; 18; 20] and achieve top performance on the above benchmarks. 
While voxel-based methods are computationally expensive due to 3D convolutions on high-resolution voxel grids, point-based strategies have to use costly neighbor searching to extract local information. We propose a voxel-based architecture that is time-efficient with high-resolution voxel grids, with shapes of \(64^{3}\) and \(128^{3}\). Moreover, learning from imbalanced and noisy data is still a challenging task in point cloud segmentation [38], SCENE-Net is interpretable and robust to these conditions. Explainable Machine Learning.Explainability is a crucial aspect of ML methods in high-stakes tasks such as autonomous driving [1; 2; 3]. Two main approaches have been proposed in the literature: _post hoc_ explainability, and intrinsic interpretability. _Post hoc_ methods, such as LIME [7], meaningful perturbations [8], anchors [39], and ontologies [40], are applied to trained black-box models and provide instance-based explanations that correlate model predictions to the given input. These methods are model-agnostic, and thus more flexible, but they often lack mechanistic cause-effect relations and have a limited understanding of feature importance [4]. For instance, a dog image and random noise may generate similar importance heatmaps for the same class with the LIME method [4]. Moreover, they introduce computational overhead, which may limit their application in real-world scenarios with complex black-box models, such as in the 3D semantic segmentation task. In contrast, intrinsic interpretability methods provide an understanding of their decisions through their architecture and parameters [4]. Decision trees and linear regression are examples of white-box models. However, transparency is usually achieved by imposing domain constraints and simple designs, which implies limited performance compared to deep neural networks. Recent advances in interpretable techniques, such as concept whitening [9] and interpretable CNNs [41], have shown that interpretability does not have to imply performance loss. However, these methods provide evidential interpretability, that is, they offer intrinsic explanations to model predictions that are still linked to human interpretations and may imply an evidential correlation, but not causation. We propose a white-box model, SCENE-Net, with intrinsic geometric interpretability that is not subject to human interpretation. SCENE-Net analyzes the input 3D space according to prior knowledge of the geometry of objects of interest, which is encoded in functional observers and whose parameters are fine-tuned during training. These observers encode high-level geometrical concepts. Thus, our predictions exhibit direct mechanistic cause-effect w.r.t. the learned observers. SCENE-Net maintains a simple model design in high-level mathematical operations while taking advantage of DL complex convolutional kernels. ## 3 Group Equivariant Non-Expansive Operators (GENEOs). GENEOs are the building blocks of a mathematical framework [10] that formally describes machine learning agents as a set of operators acting on the input data. These operators provide a measure of the world, just as CNN kernels learn essential features to, for instance, recognize objects. Such agents can be thought of as observers that analyze data. They transform it into higher-level representations while respecting a set of properties (i.e., a group of transformations). 
An appropriate observer transforms data in such a way that respects the right group of transformations, that is, it commutes with these transformations. Formally, we say that the observer is _equivariant_ with respect to a group of transformations. The framework takes advantage of topological data analysis (TDA) to describe data as topological spaces. Specifically, a set of data \(X\) is represented by a topological space \(\Phi\) with admissible functions \(\varphi\colon X\to\mathbb{R}^{3}\). \(\Phi\) can be thought of as a set of admissible measurements that we can perform on the measurement space \(X\). For example, images can be seen as functions assigning RGB values to pixels. This not only provides uniformity to the framework but also allows us to shift our attention from raw data to the space of measurements that characterizes it. Now that the input data is well represented, let us introduce how the framework defines prior knowledge. Data properties are defined through maps from \(X\) to \(X\) that are \(\Phi\)-preserving homeomorphisms. That is, the composition of functions in \(\Phi\) with such homeomorphisms produces functions that still belong to \(\Phi\). Therefore, we can define a group \(G\) of \(\Phi\)-preserving homeomorphisms, representing a group of transformations on the input data for which we require equivariance to be respected. In other words, \(G\) is the group of properties that we chose to enforce equivariance w.r.t. the geometry in the original data. It is through \(G\) that we embed prior knowledge into a GENEO model. Following the previous example, planar translations can define a subgroup of \(G\). Let us consider the notion of a _perception pair_\((\Phi,G)\): it is composed of all admissible measurements \(\Phi\) and a subgroup of \(\Phi\)-preserving homeomorphisms \(G\). **Definition 3.1** (Group Equivariant Non-Expansive Operator (GENEO)).: Consider two perception pairs \((\Phi,G)\) and \((\Psi,H)\) and a homomorphism \(T\colon G\to H\). A map \(F\colon\Phi\to\Psi\) is a group equivariant non-expansive operator if it exhibits equivariance: \[\forall\varphi\in\Phi,\forall g\in G,F(\varphi\circ g)=F(\varphi)\circ T(g) \tag{1}\] and is non-expansive: \[\forall\varphi_{1},\varphi_{2}\in\Phi,\|F(\varphi_{1})-F(\varphi_{2})\|_{ \infty}\leq\|\varphi_{1}-\varphi_{2}\|_{\infty} \tag{2}\] Non-expansivity and convexity are essential for the applicability of GENEOs in a machine-learning context. When the spaces \(\Phi\) and \(\Psi\) are compact, non-expansivity guarantees that the space of all GENEOs \(\mathcal{F}\) is compact as well. Compactness ensures that any operator can be approximated by a finite set of operators sampled in the same space. Moreover, by assuming that \(\Psi\) is convex, [10] proves that \(\mathcal{F}\) is also convex. Convexity guarantees that the convex combination of GENEOs is also a GENEO. Therefore, these results prove that any GENEO can be efficiently approximated by a certain number of other GENEOs in the same space. In addition to drastically reducing the number of parameters in the modeling of the considered problems and making their solution more transparent, we underline that the use of GENEOs makes available various theoretical results that allow us to take advantage of a new mathematical theory of knowledge engineering. 
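To make these two properties concrete, the toy check below builds one-dimensional convolutional operators with \(\ell_{1}\)-normalized kernels, which are equivariant to cyclic translations and non-expansive in the sup-norm, and verifies that a convex combination of them retains both properties. This is an illustrative NumPy sketch, not code from the GENEO framework; the choice of the translation group and all names are ours.

```python
import numpy as np

def circular_conv(phi, kernel):
    """Convolve a 1D signal with a kernel using periodic boundary conditions."""
    n, m = len(phi), len(kernel)
    out = np.zeros(n)
    for i in range(n):
        for j in range(m):
            out[i] += kernel[j] * phi[(i - j) % n]
    return out

def make_operator(kernel):
    """F(phi) = phi * kernel; with ||kernel||_1 <= 1 the map is non-expansive in sup-norm."""
    kernel = kernel / max(np.abs(kernel).sum(), 1e-12)
    return lambda phi: circular_conv(phi, kernel)

rng = np.random.default_rng(0)
F1 = make_operator(rng.normal(size=5))
F2 = make_operator(rng.normal(size=7))
# a convex combination of GENEOs is again a GENEO
F = lambda phi: 0.3 * F1(phi) + 0.7 * F2(phi)

phi1, phi2 = rng.random(50), rng.random(50)
shift = 13
# equivariance w.r.t. cyclic translations: F(phi o g) = F(phi) o g
assert np.allclose(F(np.roll(phi1, shift)), np.roll(F(phi1), shift))
# non-expansivity: ||F(phi1) - F(phi2)||_inf <= ||phi1 - phi2||_inf
assert np.abs(F(phi1) - F(phi2)).max() <= np.abs(phi1 - phi2).max() + 1e-12
print("equivariance and non-expansivity hold for this toy operator")
```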
We stress that, besides the cited compactness and convexity theorems, algebraic methods concerning the construction of GENEOs are already available [42, 43, 44] ## 4 SCENE-Net: Signature geometriC Equivariant Non-Expansive operator Network In this section, we introduce the overall architecture of **SCENE-Net**. Next, we define the geometrical properties that describe power line supporting towers. Lastly, we detail the loss function used to train the observer. Overview3D Point clouds are generally denoted as \(\mathcal{P}\in\mathbb{R}^{N\times(3+d)}\), where \(N\) is the number of points and \(3+d\) is the cardinality of spatial coordinates plus any point-wise features, such as colors or normal vectors. The input point cloud is first transformed in accordance with a measurement function \(\varphi\colon\mathbb{R}^{3}\to\{0,1\}\) that signals the presence of 3D points in a voxel discretization. Next, the transformed input is fed to a layer of multiple GENEOs (GENEO-layer), each chosen randomly from a parametric family of operators, and defined by a set of trainable shape parameters \(\vartheta_{i}\) (Fig. 2). Such GENEOs are in the form of convolutional operators with carefully designed kernels as described later. Not only is convolution a well-studied operation, but it also offers equivariance w.r.t. translations by definition. During training, it is not the kernels themselves that are fine-tuned with back-propagation, since this would not preserve equivariance at each optimization step. Instead, the error is propagated to the shape parameters \(\vartheta_{i}\) of each operator. Following the GENEO-layer, its set of operators \(\Gamma=\{\Gamma_{i}^{\vartheta_{i}}\}_{i=1}^{K}\), with shape parameters \(\vartheta=\vartheta_{1},\ldots,\vartheta_{k}\), are combined through convex combination with weights \(\lambda=(\lambda_{1},\ldots,\lambda_{k})^{T}\) by \(\underset{\lambda,\vartheta}{\mathcal{H}}\colon\mathcal{P}\to\mathcal{P}\) such that \[\underset{\lambda,\vartheta}{\mathcal{H}}(x)=\sum_{i=1}^{K}\lambda_{i}\Gamma_ {i}^{\vartheta_{i}}(\varphi)(x) \tag{3}\] Since the convex combination of GENEOs is also a GENEO [10], \(\mathcal{H}\) preserves the equivariance of each operator \(\Gamma^{\vartheta}\in\Gamma\). In fact, \(\mathcal{H}\) defines a GENEO observer that analyzes the 3D input scenes looking for the geometrical properties encoded in \(\Gamma\). The convex coefficients \(\lambda\) represent the overall contribution of each operator \(\Gamma_{i}^{\vartheta}\) to \(\mathcal{H}\) to the analysis. The parameters grant our model its intrinsic interpretability. They are learned during training and represent geometric properties and the importance of each \(\Gamma^{\vartheta}\) in modeling the ground truth. Next, we transform the observer's analysis into a probability of each 3D voxel belonging to a supporting tower as a model \(\underset{\lambda,\vartheta}{\mathcal{M}}\colon\mathcal{P}\to[0,1]^{N}\) \[\underset{\lambda,\vartheta}{\mathcal{M}}(x)=\bigg{(}\tanh\Big{(} \underset{\lambda,\vartheta}{\mathcal{H}}(x)\Big{)}\bigg{)}_{+},\] where \((t)_{+}=\max\{0,t\}\) is the rectified linear unit (ReLU). Negative signals in \(\mathcal{H}(x)\) represent patterns that do not exhibit the sought-out geometrical properties. Conversely, positive values quantify their presence. Therefore, \(\tanh\) compresses the observer's value distribution into [-1, 1], and the ReLU is then applied to enforce a zero probability to negative signals. 
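A minimal PyTorch sketch of the pipeline just described may help fix ideas: each \(\Gamma_{i}^{\vartheta_{i}}\) is realized as a 3D convolution whose kernel is generated from its shape parameters, the responses are mixed by convex weights \(\lambda\), and the observer output is turned into a probability map by \(\tanh\) followed by ReLU. The kernel-builder callables are placeholders, and the softmax used here to keep \(\lambda\) convex is a simplification of our own; SCENE-Net itself enforces the constraints through penalties in its loss (see the GENEO loss below).

```python
import torch
import torch.nn.functional as F

class GENEOLayerSketch(torch.nn.Module):
    """Convex combination of convolutional GENEOs followed by tanh/ReLU (illustrative sketch)."""

    def __init__(self, kernel_builders, n_params_per_op, kernel_size=9):
        super().__init__()
        self.kernel_builders = kernel_builders      # callables: theta, ks -> (ks, ks, ks) kernel
        self.thetas = torch.nn.ParameterList(
            [torch.nn.Parameter(torch.rand(p)) for p in n_params_per_op])
        self.lmbda_logits = torch.nn.Parameter(torch.zeros(len(kernel_builders)))
        self.kernel_size = kernel_size

    def forward(self, phi):
        # phi: (B, 1, D, H, W) float occupancy grid with values in {0, 1}
        lmbda = torch.softmax(self.lmbda_logits, dim=0)   # convex weights (simplification)
        h = torch.zeros_like(phi)
        for w, build, theta in zip(lmbda, self.kernel_builders, self.thetas):
            kernel = build(theta, self.kernel_size)
            kernel = kernel.view(1, 1, *kernel.shape)
            h = h + w * F.conv3d(phi, kernel, padding=self.kernel_size // 2)
        # M(x) = relu(tanh(H(x))): negative evidence is mapped to probability zero
        return torch.relu(torch.tanh(h))
```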
Lastly, a probability threshold \(\tau\in[0,1]\) is defined through hyperparameter fine-tuning and applied to \(\mathcal{M}\) resulting in a map \(\widetilde{\mathcal{M}}\colon\mathcal{P}\times\mathbb{R}\to\{0,1\}^{N}\) \[\widetilde{\underset{\lambda,\vartheta}{\mathcal{M}}}(x,\tau)= \Big{\{}\underset{\lambda,\vartheta}{\mathcal{M}}(x)\Big{\}}\geq\tau,\] where \(\widetilde{\mathcal{M}}\) denotes the **SCENE-Net** model. Knowledge Engineering via GENEOs.In this section, we formally define the knowledge embedded in the observer \(\mathcal{H}\). The following GENEOs describe power line supporting towers in order to fully discriminate them from their environment. Cylinder GENEO.The most striking characteristic of supporting towers against the rural environment is their long, vertical and narrow structure. As such, their identification is equivariant w.r.t. rotations along the _z-axis_ and translations in the _xy_ plane, which we encode by the means of a cylinder. **Definition 4.1**.: In order to promote smooth patterns, a cylinder is defined by \(g_{Cy}\colon\mathbb{R}^{3}\to[0,1]\): \[g_{Cy}(x)=e^{-\frac{1}{2\sigma^{2}}(\|\pi_{-3}(x)-\pi_{-3}(c)\|^{2}-r^{2})^{2}}\] where \(\pi_{-3}(x)=(\pi_{1}(x),\pi_{2}(x),0)\) and \(\pi_{i}\) defines a projection function of the \(i\)th element of the input vector. Definition 4.1 is a smoothed characterization of the Cylinder defined in Appendix A.1. The function \(g_{Cy}\) defines a smoothed cylinder centered in \(c\) by means of a Gaussian function, with the distance between \(x\) and the cylinder's radius (\(r\)) as its mean. The shape parameters are the Gaussian's standard deviation and \(r\), defined as \(\vartheta_{Cy}=[r,\sigma]\). Figure 2: Pipeline of SCENE-Net: an input point cloud \(\mathcal{P}\) is measured according to function \(\varphi\) and voxelized. This representation then is fed to a GENEO-layer, where each operator \(\Gamma_{i}^{\vartheta_{i}}\) separately convolves the input. A GENEO observer \(\mathcal{H}\) is then achieved by a convex combination of the operators in the GENEO layer. \(\mathcal{M}\) transforms the analysis of the observer into a probability of belonging to a tower. Lastly, a threshold operation is applied to classify the voxels. Note that this final step occurs after training is completed. GENEOs act on functions, transforming them to remain equivariant to a specific group of transformations. Our GENEOs act upon \(\Phi\), the topological space representing \(\mathcal{P}\) with admissible functions \(\varphi\colon\mathbb{R}^{3}\to\{0,1\}\). Specifically, we work with appropriate \(\varphi\in\Phi\) functions that represent point clouds and preserve their geometry. For instance, \(\varphi\) can be a function that signals the presence of 3D points in a voxel grid. Therefore, the cylinder GENEO \(\Gamma^{\vartheta}_{Cy}\) transforms \(\varphi\) into a new function that detects sections in the input point cloud that demonstrate the properties of \(g_{Cy}\) and, simultaneously, preserves the geometry of the 3D scene \[\Gamma^{\vartheta}_{Cy}\colon\Phi\to\Psi,\qquad\psi_{Cy}=\Gamma^ {\vartheta}_{Cy}(\varphi)\] \[\psi_{Cy}(x)=\int_{\mathbb{R}^{3}}\tilde{g}_{Cy}(y)\varphi(x-y)dy\] where \(\Psi\) is a new topological space that represents \(\mathcal{P}\) with functions \(\psi\colon\mathbb{R}^{3}\to[0,1]\) and \(\tilde{g}_{Cy}\) defines a normalized Cylinder. The kernel \(g_{Cy}\) is normalized to have a zero-sum to promote the stability of the observer. 
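Concretely, \(g_{Cy}\) can be discretized on a voxel grid in a few lines. The snippet below is a direct transcription of Definition 4.1 with the center \(c\) at the grid origin; enforcing the zero-sum constraint by subtracting the mean is one simple choice, used here only for illustration.

```python
import numpy as np

def cylinder_kernel(ks=9, radius=2.0, sigma=1.0):
    """Discretize g_Cy on a ks x ks x ks voxel grid and normalize it to zero sum (sketch)."""
    ax = np.arange(ks) - (ks - 1) / 2.0                 # voxel coordinates centered at 0
    z, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    d2 = x ** 2 + y ** 2                                # squared distance to the vertical axis
    g = np.exp(-((d2 - radius ** 2) ** 2) / (2.0 * sigma ** 2))
    return g - g.mean()                                 # zero-sum normalization (one option)

k = cylinder_kernel()
print(k.shape, abs(k.sum()) < 1e-9)                     # (9, 9, 9) True
```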
This way, we encourage the geometrical properties that exhibit the sought-out group of transformations and punish those which do not. Thus, \(\psi_{Cy}(x)\) assumes positive values for 3D points near the radius, whereas negative values discourage shapes that do not fall under the \(g_{Cy}\) definition. This leads to a more precise detection of the encoded group of transformations. The cylinder kernel discretized in a voxel grid can be seen in Fig. 3(a). Arrow GENEO.Towers are not the only element in rural environments characterized by a vertical and narrow structure. The identification of trees also shows equivariance w.r.t. rotations along the _z-axis_. Therefore, it is not enough to detect the body of towers, we also require the power lines that they support. To this end, we define a cylinder following the rationale behind the cylinder GENEO with a cone on top of it. This arrow defines equivariance w.r.t. the different angles at which power lines may find their supporting tower. **Definition 4.2**.: The function describing the Arrow is defined as \(g_{Ar}\colon\mathbb{R}^{3}\to[0,1]\): \[g_{Ar}(x)=\left\{\begin{array}{ll}e^{\frac{-1}{2\sigma^{2}}}(\|\pi_{-3}(x)- \pi_{-3}(c)\|^{2}-r^{2})^{2}&\text{if }\pi_{3}(x)<h\\ e^{\frac{-1}{2\sigma^{2}}}(\|\pi_{-3}(x)-\pi_{-3}(c)\|^{2}-(r_{c}\tan(\beta \pi))^{2})^{2}&\text{if }\pi_{3}(x)\geq h\end{array}\right.\] with \(\beta\in[0,0.5)\) defining the inclination of the cone. Definition 4.2 is a smoothed characterization of the Arrow defined in Appendix A.2. The radii of the cylinder and cone are defined by \(r\) and \(r_{c}\), respectively, with \(c\) as their center. Lastly, \(h\) defines the height at which the cone is placed on top of the cylinder. Thus, the shape parameters of the Arrow are defined by the vector \(\vartheta_{Ar}=[r,\sigma,h,r_{c},\beta]\). Lastly, we are also interested that this kernel sums to zero, so we define \[\Gamma^{\vartheta}_{Ar}\colon\Phi\to\Psi,\qquad\psi_{Ar}=\Gamma^ {\vartheta}_{Ar}(\varphi)\] \[\psi_{Ar}(x)=\int_{\mathbb{R}^{3}}\tilde{g}_{Ar}(y)\varphi(x-y)dy,\] where \(\tilde{g}_{Ar}(y)\) represents a normalized Arrow kernel. Its discretization is depicted in Fig. 3(b). Negative Sphere GENEO.Detecting power lines does not exclude the remaining objects in the scene whose identification also demonstrates equivariance w.r.t. rotations along the _z-axis_. Tree elements, such as bushes, are especially frequent in the TS40K dataset. Thus, we designed a negative sphere to diminish their detection and simultaneously punish the geometry of trees. **Definition 4.3**.: The Negative Sphere \(g_{NS}\colon\mathbb{R}^{3}\to[-\omega,1[\) is defined as \[g_{NS}(x)=-\omega e^{\frac{-1}{2\sigma^{2}}(\|x-c\|^{2}-r^{2})^{2}}.\] with \(\omega\in]0,1]\) defining a small negative weight that punishes the spherical shape. Definition 4.3 is a smoothed characterization of the Negative Sphere in Appendix A.3. The shape parameters of this operator are \(\vartheta_{NS}=[r,\sigma,\omega]\). Since we wish to discourage spherical patterns following the definition of \(g_{NS}\), so we do not enforce that its space sums to zero, obtaining \[\Gamma^{\vartheta}_{NS}\colon\Phi \to\Psi_{NS},\qquad\psi_{NS}=\Gamma^{\vartheta}_{NS}(\varphi)\] \[\psi_{NS}(x) =\int_{\mathbb{R}^{3}}g_{NS}(y)\varphi(x-y)dy.\] where \(\Psi_{NS}\) is a topological space containing functions \(\psi\colon\mathbb{R}^{3}\to[-\omega,1[\). Fig. 3(c) depicts the computation of this kernel in a voxel grid. 
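The remaining kernels follow the same recipe. The sketch below transcribes Definitions 4.2 and 4.3 on a voxel grid (center \(c\) at the origin, parameter values arbitrary); the arrow is zero-sum normalized like the cylinder, whereas the negative sphere is left unnormalized so that it only penalizes spherical patterns.

```python
import numpy as np

def _grid(ks):
    ax = np.arange(ks) - (ks - 1) / 2.0
    return np.meshgrid(ax, ax, ax, indexing="ij")       # z, y, x voxel coordinates

def arrow_kernel(ks=9, r=1.5, r_c=2.0, beta=0.2, h=1.0, sigma=1.0):
    """Cylinder with a cone on top (Definition 4.2), zero-sum normalized (sketch)."""
    z, y, x = _grid(ks)
    d2 = x ** 2 + y ** 2
    body = np.exp(-((d2 - r ** 2) ** 2) / (2 * sigma ** 2))
    top = np.exp(-((d2 - (r_c * np.tan(beta * np.pi)) ** 2) ** 2) / (2 * sigma ** 2))
    g = np.where(z < h, body, top)
    return g - g.mean()

def negative_sphere_kernel(ks=9, r=2.0, sigma=1.0, omega=0.1):
    """Negative sphere (Definition 4.3): a small negative weight on spherical shapes (sketch)."""
    z, y, x = _grid(ks)
    d2 = x ** 2 + y ** 2 + z ** 2
    return -omega * np.exp(-((d2 - r ** 2) ** 2) / (2 * sigma ** 2))
```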
GENEO Loss.The use of GENEOs in knowledge embedding forces our model to uphold the convexity of the observer during training. Thus, our problem statement is represented by the following optimization problem \[\underset{\lambda,\vartheta}{\text{minimize}} \operatorname*{\mathbb{E}}_{X,y,\alpha,\epsilon}\bigg{\{}\mathcal{ L}_{seg}(\lambda,\vartheta)\bigg{\}}\] s.t. \[\vartheta\geq 0\] \[\lambda^{T}\mathbf{1}=1\] \[\lambda\geq 0,\] where the segmentation loss \(\mathcal{L}_{seg}\) is defined as \[\mathcal{L}_{seg}(\lambda,\vartheta)=f_{w}(\alpha,\epsilon,y)\Big{(}\! \mathcal{M}(X)-y\Big{)}^{2}.\] The loss uses a weighted squared error following the weighting scheme \(f_{w}\) proposed in [45] to mitigate data imbalance. The hyperparameter \(\alpha\) emphasizes the weighting scheme, whereas \(\epsilon\) is a small positive number that ensures positive weights. Thus, \(\mathbb{E}\{\cdot\}\) represents the expectation of the segmentation loss over the data distribution. The above constraints ensure that our model \(\mathcal{M}\) maintains convexity throughout training, with \(\mathbf{1}\) denoting a vector composed of entries one. The reparametrization of the hyperparameters \(\lambda\) to obtain an equivalent optimization problem, considering \(\lambda_{k}=1-\sum_{i=1}^{K-1}\lambda_{i}\), thus obtaining Problem (4), \[\underset{\lambda,\vartheta}{\text{minimize}} \operatorname*{\mathbb{E}}_{X,y,\alpha,\epsilon}\bigg{\{}\mathcal{ L}_{seg}(\lambda,\vartheta)\bigg{\}}\] s.t. \[\vartheta\geq 0 \tag{4}\] \[\lambda\geq 0\] allows for dropping one of the constraints. Then, we ensure non-negativity of \(\mathcal{M}\)'s trainable parameters \(\lambda,\vartheta\) by relaxing Problem (4) and introducing a penalty in the optimization cost definition as \[\underset{\lambda,\vartheta}{\text{minimize}} \operatorname*{\mathbb{E}}_{X,y,\alpha,\epsilon}\bigg{\{}\mathcal{ L}_{seg}(\lambda,\vartheta)\bigg{\}}\ +\rho_{t}\Big{(}\sum_{i}^{K}h(\lambda_{i})\Big{)}+\rho_{t} \Big{(}\sum_{i}^{K}\sum_{j}^{T_{i}}h(\vartheta_{ij})\Big{)}, \tag{5}\] where \(h(x)=\big{(}-x\big{)}_{+}\), \(\rho_{t}\) and \(\rho_{t}\) are scaling factors of the negativity penalty \(h\) and \(T_{i}\) is the number of shape parameters in \(\vartheta_{i}\). GENEO final loss optimization is formalized in Problem (5). It consists of a data fidelity component (i.e., \(\mathcal{L}_{seg}\)) and two penalties on negative parameters. Figure 3: GENEO kernels discretized in a voxel grid and colored according to weight distribution. ## 5 Experiments In this Section, we assess properties of our model SCENE-Net that help electrical companies in the inspection of power lines: (1) interpretability of the model, (2) accuracy, (3) robustness to noisy labels, (4) training and inference time, and (5) performance on the SemanticKITTI benchmark. Further details about the TS40K dataset, the training protocol, inference performance with high-resolution voxel grids, and ablation studies can be found in the Supplementary Material. Interpretability of the trained SCENE-Net: The meaning of the 11 learned parameters.To understand if the model parameters are interpretable, we inspect SCENE-Net's 11 trainable parameters \(\vartheta\) and \(\lambda\) after training. Each \(\vartheta_{i}\in\vartheta\) holds the learned shape parameters of a geometrical operator \(\Gamma_{i}\), such as their height or radius. The convex coefficients \(\lambda\) weigh each operator \(\Gamma_{i}\) in our model's analysis. 
For example, we can conclude that the instance \(\vartheta_{NS}\) of the Negative Sphere GENEO (\(\Gamma_{NS}\)) holds a weight of 76.34% on SCENE-Net's output (Fig. 4). The geometric nature of the observer and combination parameters endow intrinsic **interpretability** to SCENE-Net. Post-hoc interpretation for specific predictions.We can correlate the detection of scene elements, such as vegetation, to the contributions of each GENEO. This provides an extra layer of transparency to our model. The Arrow kernel is responsible for the detection of towers, the Cylinder aids this process and diminishes the detection of vegetation, and the Negative Sphere stabilizes the model by balancing contributions of the previous kernels (Fig 5). Qualitative accuracy and quantitative metrics: SCENE-Net is more precise in detecting towers than a baseline CNN.To evaluate if SCENE-Net can correctly identify towers in landscapes of the noisy TS40K dataset, we chose the task of 3D semantic segmentation of power line towers. We trained SCENE-Net and a baseline CNN according to the protocol described in the Supplementary Material. We use a CNN with similar architecture and the same base operator (i.e., convolution) for feature retrieval. The main difference is their kernel initialization: SCENE-Net kernels are randomly initialized, but belong to a precise family of operators, while CNN kernels are completely random. Running models for 3D point cloud semantic segmentation [16; 37; 18; 20] was not done due to their computational requirements. The application penalizes the false positives more, thus we will emphasize Precision. Due to the imbalanced nature of the labels, we measured overall Precision, Recall, and Intersection over Union (IoU). Quantitatively, we observe a lift in Precision of 38%, and of 5% in IoU, and a drop of 13% in Recall (Table 1). The lower Recall of SCENE-Net is due to mislabeled points (Figs. 1 and 7), and our choice to privilege Precision over Recall, in view of the fact that the Precision - Recall curve is slightly better for SCENE-Net (Fig 6). SCENE-Net is robust to noisy labels.It is important to assess the resilience to noisy labels in the Ground-Truth (GT) since 3D point clouds show more than 50% of mislabeled points. These examples are abundant in the dataset and SCENE-Net is able to recover the body of the tower without detecting ground and power line patches that are mislabeled as tower (Fig. 7). Most noisy labels on this kind of dataset are due to annotation excess around the object of interest and are not randomly distributed. These consistently incorrect labels entail low Recall values (Tab. 1). SCENE-Net has low data requirements and has modest training time in common hardware.The design of SCENE-Net embedded with GENEO observers culminates in a model with 11 meaningful trainable parameters. This enables the use of common hardware (see Appendix D.2 for hardware specs) and a low data regime to train our model. The results reported in table 2 were achieved with 5% of the _SemanticKITTI_ training set. Training SCENE-Net with 50% and 100% of the available training data leads to a variation of 0.5% in pole IoU performance. Moreover, the number of parameters of SCENE-Net remains unchanged regardless of kernel size, whereas in traditional models, such as the baseline CNN with 2190 parameters, the number of parameters grows exponentially with larger kernel sizes. 
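For reference, the Precision, Recall, and IoU values reported in this section can be computed directly from binary masks; the helper below is an illustrative sketch rather than the evaluation code used for the benchmarks.

```python
import numpy as np

def binary_segmentation_metrics(pred, target, eps=1e-9):
    """Precision, Recall and IoU for binary voxel (or point) masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)       # true positives
    fp = np.sum(pred & ~target)      # false positives
    fn = np.sum(~pred & target)      # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, iou
```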
SCENE-Net on the SemanticKITTI: an efficient model for low-resource contexts.In Table 2, we present a comprehensive comparison of the performance of SCENE-Net against state-of-the-art models for the task of 3D semantic segmentation on the _SemanticKITTI_ benchmark, specifically in terms of pole IoU, number of parameters, and the ratio of pole IoU to number of parameters. For this problem, we add to the GENEO loss in (5) the Tversky loss [46] to boost IoU performance of SCENE-Net: \[\mathcal{L}_{Tversky}(y,\hat{y})=1-\frac{y\hat{y}+\delta}{y\hat{y}+\alpha( \textbf{1}-y)\hat{y}+\beta y(\textbf{1}-\hat{y})+\delta}\] where \(y\), \(\hat{y}\) are the ground truth and model prediction, \(\alpha,\beta>0\) are the penalty factors for false positives and false negatives respectively, and \(\delta>0\) is a smoothing term. The comparison results demonstrate that SCENE-Net is a highly efficient model in terms of its parameter contribution. Our model has the lowest number of parameters, with at least a 5-order magnitude difference from the other models. SCENE-Net also has the highest ratio of pole IoU to a number of parameters, indicating that it can achieve a high level of performance with a minimal number of parameters. Although SCENE-Net does not achieve the highest pole IoU \begin{table} \begin{tabular}{l l l l} \hline Method & Precision & Recall & IoU \\ \hline CNN & 0.44 (\(\pm\) 0.07) & **0.26** (\(\pm\) 0.02) & 0.53 \\ SCENE-Net & **0.82** (\(\pm\) 0.08) & 0.13 (\(\pm\) 0.05) & **0.58** \\ \hline \end{tabular} \end{table} Table 1: 3D semantic segmentation metrics on TS40K. Figure 5: _Post hoc_ analysis of SCENE-Net. We can examine the activation of each geometric operator and correlate it to the detection of certain elements in the scene. We see that the Arrow is responsible for the most activation, while the Negative Sphere has a smaller absolute value. performance, it is somewhat on par with state-of-the-art models. Additionally, SCENE-Net provides intrinsic geometric interpretability and resource efficiency, which makes it a valuable model in high-risk tasks that require trustworthy predictions and good performance but have limited data and computing power. ## 6 Discussion Traditional companies, like utilities, need a resource-efficient, responsible application of ML models for the segmentation of real-world point clouds, e.g., for inspecting thousands of kilometers of a power grid. In this paper, we present SCENE-Net, a low-resource white-box model for 3D semantic segmentation. Our approach offers a unique combination of intrinsic geometric interpretability, resource efficiency, and on-par state-of-the-art performance. LimitationsOur model prioritizes transparency and performance over broader applicability while allowing a flexible extension. In this paper, we segmented pole-like structures. State-of-the-art methods often show similar trade-offs, for example, 3D semantic segmentation models are tailored for autonomous driving [19, 18]. SCENE-Net requires a knowledge engineering phase that is not necessary for black-box models. Despite these limitations, we believe that the transparency and efficiency of SCENE-Net make it a valuable tool for high-stakes applications. To detect other shapes, other geometrical observers have to be created. As the convex combination of GENEOs is a GENEO, this problem is mitigated by creating a library of primary shapes to be combined to form more complex geometrical structures. This is relevant follow-up work, but out of the scope of this paper. 
Multiclass and multilabel segmentation can be achieved by combining different binary class segmentation models. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Method} & Pole & \#Parameters & Parameter \\ & IoU & (M) & Efficiency \\ \hline PointNet++ [29] & 16.9 & 1.48 & 1.19 \\ TangentConv [47] & 35.8 & 0.4 & 2.77 \\ KPConv [16] & 56.4 & 14.9 & 3.41 \\ RandLA-Net [30] & 51.0 & 1.24 & 3.63 \\ RPVNet [18] & **64.8** & 24.8 & 3.80 \\ SparseConv [24] & 57.9 & 2.7 & 3.91 \\ JS3C-Net [19] & 60.7 & 2.7 & 4.09 \\ SPVNAS [17] & 64.3 & 12.5 & 4.62 \\ **SCENE-Net (Ours)** & 57.5 & **1.1e-5** & **23.98** \\ \hline \hline \end{tabular} \end{table} Table 2: Semantic segmentation on _SemanticKITTI_. Large models are included for comparison but cannot be used in low-resource contexts. Parameter efficiency is \(\frac{\text{Pole IoU}}{\log\#\text{Parameters}}\). Figure 6: Precision-Recall curve for SCENE-Net and the CNN benchmark, with changing detection threshold. Although our model SCENE-Net has two orders of magnitude fewer parameters than the CNN, it attains a comparable area under the P-R curve. Impact. From our experience deploying SCENE-Net within a utility company, low-resource transparent systems can critically help human decision-making--here, by facilitating fast and careful inspection of power lines with interpretable signals of observed geometrical properties. With only three observers and 11 meaningful trainable parameters, SCENE-Net can help reduce the risk of power outages and forest fires by learning from data.
2302.03140
ClueGAIN: Application of Transfer Learning On Generative Adversarial Imputation Nets (GAIN)
Many studies have attempted to solve the problem of missing data using various approaches. Among them, Generative Adversarial Imputation Nets (GAIN) was first used to impute data with Generative Adversarial Nets (GAN) and good results were obtained. Subsequent studies have attempted to combine various approaches to address some of its limitations. ClueGAIN is first proposed in this study, which introduces transfer learning into GAIN to solve the problem of poor imputation performance in high missing rate data sets. ClueGAIN can also be used to measure the similarity between data sets to explore their potential connections.
Simiao Zhao
2023-02-06T22:04:35Z
http://arxiv.org/abs/2302.03140v1
# ClueGAIN: Application of Transfer Learning On Generative Adversarial Imputation Nets (GAIN) ###### Abstract Many studies have attempted to solve the problem of missing data using various approaches. Among them, Generative Adversarial Imputation Nets (GAIN) was first used to impute data with Generative Adversarial Nets (GAN) and good results were obtained. Subsequent studies have attempted to combine various approaches to address some of its limitations. ClueGAIN is first proposed in this study, which introduces transfer learning into GAIN to solve the problem of poor imputation performance in high missing rate data sets. ClueGAIN can also be used to measure the similarity between data sets to explore their potential connections. Machine Learning, ICML ## 1 Introduction Processing missing data is one of the unavoidable problems in data analysis. Important information can be lost if observation of the missing part is simply discarded, resulting in a systematic difference between incomplete and complete observed data. Therefore, data scientists have done a large amount of work, such as MICE (Van Buuren & Groothius-Oudshoorn, 2011; Buuren & Oudshoorn, 2000), MissForest (Stekhoven & Buhlmann, 2012) and DAE (Vincent et al., 2008), to find reliable methods to impute missing regions with rational values. Yoon et al. first proposed Generative Adversarial Imputation Net (GAIN) to impute data Missing Completed At Random (MCAR) (Yoon et al., 2018). GAIN performs better than the traditional imputation method and does not rely on complete training data. However, it still has some limitations, mainly from the model structure and the assumptions about data. Firstly, the simple structure of GAIN is not able to effectively complete the imputation task with high data missing rate. To solve this problem, MisGAN (Li et al., 2019), with two pairs of generators and discriminators, Generative Adversarial Multiple Imputation Network (GAMIN) (Yoon & Sull, 2020), which used the confidence prediction method, and the Generative Adversarial Guider Imputation Network (GAGIN) (Wang et al., 2022) consisting of three models, was proposed. Secondly, GAIN did not show satisfactory performance when being applied to time series imputation tasks. Therefore, Two-stage GAN (Andreini et al., 2021) and Multivariate Time Series GAN (MTS-GAN) (Luo et al., 2018) based on time series imputation were first proposed. End-to-end GAN(E2GAN) (Luo et al., 2019) and Inverse Mapping GAN (IMGAN) (Wu et al., 2022) were subsequently proposed to improve efficiency and performance. Thirdly, the theory guarantees of GAIN based on MCAR assumption, which is not always true since the lost data may depend on observed variables (MAR assumption) or even unobserved variables (MNAR assumption). Fang et al. extended the theoretical results of GAIN to MAR, and eliminated the need for hint mechanism (Fang & Bao, 2022). Finally, by adjusting loss function and the structure of GAIN, the imputation performance of GAIN can be further improved, and the common problems of GAN itself, such as gradient vanishing of generator and model collapse, can be solved. WGAIN (Friedjungova et al., 2020) introduced wasserstein distance to loss function to solve the problem of model collapse. GRAPE (You et al., 2020) proposes a graph-based framework for data imputation. PC-GAIN (Wang et al., 2021) and HexaGAN (Hwang et al., 2019) introduced unsupervised learning to improve GAIN performance and stability. 
Conv-GAIN added the structure of the convolutional neural network (CNN) to GAIN (Adeli et al., 2021). In this paper, we proposes ClueGAIN, which combines transfer learning (TL) with GAIN to improve the performance on high missing rate data of GAIN. Although in many cases it may not be possible to obtain complete data that is exactly the same as the missing data to be repaired for training purposes (Yoon et al., 2018), data that are potentially similar to the target data are not necessarily difficult to obtain. These similar data can provide the model with some prior knowledge, the 'clues', about the target data through TL. In 1976, Stevo Bozinovski et al. first gave a mathematical and geometrical model of TL (Bozinovski & Fulgosi, 1976). Some subsequent studies combined GAN and TL for image classification (Cho et al., 2017; Li & Shen, 2018), while others applied TL alone for data imputation (Ma et al., 2020). However, TL was never combined with GAN to impute data, and we first propose this idea. In addition, we also discusses the possibility of using ClueGAIN to measure the degree of similarity between multiple data sets. This measure can be applied to look for similarity between biomedical data, such as the similarity between different genes, drugs and proteins. ## 2 Problem Formulation Consider a d-dimensional space \(\mathcal{X}=\mathcal{X}_{1}\times...\times\mathcal{X}_{d}\). Suppose that \(\textbf{X}=(X_{1},...,X_{d})\) is a random variable (either continuous or binary) taking values in \(\mathcal{X}\), whose distribution we will denote \(P(\textbf{X})\). Suppose that \(\textbf{M}=(M_{1},...,M_{d})\) is a random variable taking values in \(\left\{0,1\right\}^{d}\). We will call **X** the data vector, and **M** the mask vector. For each \(i\in\left\{1,...,d\right\}\) we define a new space \(\tilde{\mathcal{X}}_{i}=\mathcal{X}_{i}\cup\left\{*\right\}\) where \(*\) is simply a point not in any \(\mathcal{X}_{i}\), representing an unobserved value. Let \(\tilde{\mathcal{X}}=\tilde{\mathcal{X}}_{1}\times...\times\tilde{\mathcal{X}} _{d}\). We define a new random variable \(\tilde{\textbf{X}}=(\tilde{X}_{1},...,\tilde{X_{d}})\in\tilde{\mathcal{X}}\) in the following way: \[\tilde{X}_{i}=\begin{cases}X_{i}&\text{if }M_{i}=1\\ *&\text{otherwise}\end{cases} \tag{1}\] The distribution is denoted by \(P(\tilde{\textbf{X}})\). \(n\) i.i.d. copies of \(\tilde{\textbf{X}}\) are realized, denoted \(\tilde{x}^{1},...,\tilde{x}^{n}\) and we define the data set \(\mathcal{D}=\left\{(\tilde{x}^{i},m_{i})\right\}_{i=1}^{n}\)(Yoon et al., 2018). The transfer learning problem is given in terms of domains and tasks. Given a specific domain, \(\mathcal{O}=\left\{\tilde{\mathcal{X}},P(\tilde{\textbf{X}})\right\}\), a task, denoted by \(\mathcal{T}=\left\{\mathcal{X},h(P(\tilde{\textbf{X}}))\right\}\), consists of three components: a d-dimensional space \(\mathcal{X}\) mentioned above, a model \(h:\tilde{\mathcal{X}}\rightarrow\mathcal{X}\) and a sampler \(G\). The model \(h\) is used to produce a distribution \(h(P(\tilde{\textbf{X}}))\) that is closest to (in the best case the same as) the distribution \(P(\textbf{X})\) based on \(P(\tilde{\textbf{X}})\), and \(P(\tilde{\textbf{X}})\) is decided by the observed data set \(\mathcal{D}\). We can therefore sample data from \(h(P(\tilde{\textbf{X}}))\) in order to impute the missing values. 
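As a concrete illustration of Eq. (1) and of the data set \(\mathcal{D}\), the snippet below builds the observed matrix \(\tilde{\textbf{X}}\) and the mask \(\textbf{M}\) from a complete matrix under MCAR masking, with np.nan standing in for the unobserved symbol \(*\); the function name, seed, and data shape are placeholders chosen for illustration.

```python
import numpy as np

def make_incomplete(X, miss_rate=0.8, seed=0):
    """Return (X_tilde, M): M[i, j] = 1 iff X[i, j] is observed (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    M = (rng.random(X.shape) > miss_rate).astype(float)   # MCAR mask
    X_tilde = np.where(M == 1, X, np.nan)                  # nan plays the role of '*' in Eq. (1)
    return X_tilde, M

X = np.random.default_rng(1).random((391, 44))             # e.g. a complete 391 x 44 data matrix
X_tilde, M = make_incomplete(X, miss_rate=0.8)
```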
Given a source domain \(\mathcal{O}_{S}\), a target domain \(\mathcal{O}_{T}\) and a learning task \(\mathcal{T}_{T}\), where \(\mathcal{O}_{S}\neq\mathcal{O}_{T}\), transfer learning is used to help improve the learning of the target model \(h_{T}(\cdot)\) in \(\mathcal{O}_{T}\) using the knowledge in \(\mathcal{O}_{S}\). Furthermore, it is used to measure how close \(\mathcal{O}_{S}\) is to \(\mathcal{O}_{T}\) by measuring the contribution of the knowledge in \(\mathcal{O}_{S}\) to learning the target model \(h_{T}(\cdot)\). Thus, the similarity between the true distributions \(P_{S}(\textbf{X})\) and \(P_{T}(\textbf{X})\) can be further inferred from \(P_{S}(\tilde{\textbf{X}})\in\mathcal{O}_{S}\) and \(P_{T}(\tilde{\textbf{X}})\in\mathcal{O}_{T}\). ## 3 Clue Generative Adversarial Imputation Nets In this section, we will go through the overall process of using ClueGAIN to impute missing data and measure the similarity between different data sets. ### Data Imputation ClueGAIN's data imputation process is divided into two steps, pre-training and fine-tuning. Therefore, the model requires two data sets, the source data set \(S\) and the target data set \(T\). We assume that the source data set is complete and similar to the target data set \(T\). #### 3.1.1 Pre-training ClueGAIN, like GAIN, consists of a generator \(G\) and a discriminator \(D\). The generator \(G\) takes \(\tilde{\textbf{X}}\), **M** and a noise variable **Z** as input and outputs \(\bar{\textbf{X}}\), a vector of imputations. Let \(G:\tilde{\mathcal{X}}\times\left\{0,1\right\}^{d}\times\left[0,1\right]^{d} \rightarrow\mathcal{X}\) be a function, and \(\textbf{Z}=(Z_{1},...,Z_{d})\) be d-dimensional noise. We define random variables \(\bar{\textbf{X}},\hat{\textbf{X}}\in\mathcal{X}\) by \[\bar{\textbf{X}} =G(\tilde{\textbf{X}},\textbf{M},(\textbf{1}-\textbf{M})\odot \textbf{Z}) \tag{2}\] \[\hat{\textbf{X}} =\textbf{M}\odot\tilde{\textbf{X}}+(\textbf{1}-\textbf{M})\odot \bar{\textbf{X}} \tag{3}\] The discriminator is a function \(D:\mathcal{X}\rightarrow\left[0,1\right]^{d}\) with the \(i\)-th component of \(D(\hat{x})\) corresponding to the probability that the \(i\)-th component of \(\hat{x}\) was observed. The training algorithm and loss function that the discriminator \(D\) optimizes are the same as those of GAIN (Yoon et al., 2018). For \(\mathcal{L}:\left\{0,1\right\}^{d}\times\left[0,1\right]^{d}\times\left\{0,1 \right\}^{d}\to R\): \[\mathcal{L}_{D}(\textbf{m},\hat{\textbf{m}},\textbf{b})=\sum_{i:b_{i}=0}\left[m_ {i}\log\hat{m}_{i}+(1-m_{i})\log(1-\hat{m}_{i})\right] \tag{4}\] The training algorithm for \(G\) is also the same as that of GAIN (Yoon et al., 2018). However, the loss function of \(G\) is different from GAIN. In the pre-training stage, no matter whether \(m_{i}=1\) or \(m_{i}=0\), the loss function that \(G\) optimizes is always \(\mathcal{L}:R^{d}\times R^{d}\to R\): \[\mathcal{L}_{G}(\textbf{x},\textbf{x}^{*})=\sum_{i=1}^{d}L_{G}(x_{i},x_{i}^{*}) \tag{5}\] where \[L_{G}(x_{i},x_{i}^{*})=\begin{cases}(x_{i}-x_{i}^{*})^{2}&\text{if }x_{i}\text{ is continuous}\\ -x_{i}\log(x_{i}^{*})&\text{if }x_{i}\text{ is binary}\end{cases} \tag{6}\] This is because the source data set \(S\) is complete, so the masked data (\(x_{i}\) when \(m_{i}=0\)) can be supplied to the generator for training. Therefore, the loss function for \(G\) can simply be the reconstruction error on \(S\).
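The pre-training objectives can be transcribed compactly in PyTorch. The sketch below implements Eq. (4) for the discriminator, restricted to the non-hint entries \(b_{i}=0\) and negated so that it can be minimized, and Eqs. (5)-(6) for the generator, which reduce to a plain reconstruction error because the source data are complete; passing an explicit binary-column mask is our own convention, since the text does not fix an encoding.

```python
import torch

def generator_pretrain_loss(x, x_star, is_binary):
    """Eqs. (5)-(6): reconstruction error over all d components of the complete source data.
    x, x_star : (B, d) ground-truth values and generator outputs
    is_binary : (d,) boolean mask marking binary columns (our convention)."""
    sq = (x - x_star) ** 2                                  # continuous components
    ce = -x * torch.log(x_star.clamp_min(1e-8))             # binary components
    return torch.where(is_binary.unsqueeze(0), ce, sq).sum(dim=1).mean()

def discriminator_loss(m, m_hat, b):
    """Eq. (4), negated so that minimizing it trains D to recover the mask where b_i = 0."""
    ll = m * torch.log(m_hat.clamp_min(1e-8)) + (1 - m) * torch.log((1 - m_hat).clamp_min(1e-8))
    return -(ll * (1 - b)).sum(dim=1).mean()
```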
This will not affect the training of discriminator \(D\) because the generator still try to minimize difference between the true values and generated values, while the discriminator still discriminates against the value of \(m_{i}\) based on hints and the data generated by generator. The whole process of pre-training is shown in figure 1. #### 3.1.2 Fine-tuning There are four methods for model fine-tuning. The first method is to retain hidden layers of both generator and discriminator, and only retrain the input and output layers on the target data set (the input and output layers need to be redefined and trained because the feature dimensions of the target and source data set may be different). The algorithm and loss function used for retraining are exactly the same as those used for GAIN (Yoon et al., 2018). In the second way, part of the hidden layers in the model are preserved (frozen during fine-tuning), and part of the hidden layers, input, and output layers are retrained. The third is to retrain all model parameters, but the pre-trained hidden layer parameters are taken as the initial parameters. The fourth is to redefine the number of network layers and neurons of the new model, and add the pre-trained hidden layers to the hidden layers of the new model. Which approach works best depends on the differences between the target and source data sets. The performance of these four methods are compared and reported in section 4 on the chosen data set. After fine-tuning, we can use the generator \(G\) to impute data for the target data set \(T\). ### Data Similarity Measurement As stated in section 2, by measuring how efficient the knowledge in \(\mathcal{O}_{S}\) contributes to the target model \(h_{T}(\cdot)\), we can get information about the similarity between \(\mathcal{O}_{S}\) to \(\mathcal{O}_{T}\), and hence infer the similarity between the true distribution \(P_{S}(\textbf{X})\) and \(P_{T}(\textbf{X})\). The contribution of source domain to target model can be measured by the difference of performance of ClueGAIN and GAIN. GAIN can be regarded as ClueGAIN without information about the source domain (since there is no pre-training stage). Therefore, if the performance of ClueGAIN is significantly higher than that of GAIN, if other conditions such as the number of layers and neurons in both models are the same, it indicates that the pre-training information of ClueGAIN plays a role in training the target model \(h_{T}(\cdot)\). This approach enables us to compare similarity between multiple data sets. For example, given a target data set and multiple other data sets, using the algorithm 1, we can determine which data set is most similar to the target data set. In biomedical research, it has the potential to help us explore the relationships between different genes, proteins, and drugs. For example, given the attributes of a set of oncogenes, we can determine which genes are similar to them using the algorithm 1. See section 5.1 for more discussion of applications. ``` Input: Multiple data set \((D_{1},...,D_{n})\) and target data set \(T\) Step 1: pre-train ClueGAIN on target data set \(T\) and save pre-trained parameters. Step 2: Mask a certain proportion of each data set in (\(D_{1}\),..., \(D_{n}\)). Step 3: Fine-tune ClueGAIN on each data set in (\(D_{1}\),..., \(D_{n}\)) separately by transferring the pre-trained parameters, impute the masked data, append each performance score to \(R_{1}list\). 
Step 4: Train \(n\) GAINs on each data set and impute the masked data, append each performance score to a list \(R_{2}list\). Step 5: for each \(R_{1}\in R_{1}list\) and each \(R_{2}\in R_{2}list\)do Append \(Score_{i}=R_{1}-R_{2}\) to \(Scorelist\) endfor Step 6: Find the highest score \(Score_{i}\) in \(Scorelist\) Output: \(D_{i}\), the data set with the highest \(Score_{i}\), is most similar to \(T\) ``` **Algorithm 1** Data Similarity Measurement Algorithm ## 4 Experiments and Results This section discusses four different experiments and their results. These experiments were carried out mainly on two data sets, the Cancer Patients DNA Sequence Dataset (CPDSD) (Rafay, 2020) and Breast Cancer Gene Expression - CuMiDa (BCGE) (Grisci, 2020). CPDSD is a small data set containing 44 genes from 391 patients and a column of five different class labels. The projection of the data in a two-dimensional space is shown in figure 2. BCGE is a large data set containing 54,676 genes from 151 samples and a column of six different class labels. The projection of the data in a two-dimensional space is shown in figure 3. Figure 1: Pre-training Process of ClueGAIN These two data sets were chosen for two reasons. The first is that the two are potentially related because they are both genetic data sets about cancer patients. Secondly, CPDSD has a small amount of data while BCGE has a large amount. This corresponds to the situation in the real world, where we may have access to sufficient data of a certain type, for example, a common disease, but data may be lacking or scarce for a new type, such as a rare or nascent disease. In the first experiment, we compare the influence of different fine-tuning methods on ClueGAIN's performance with a fixed missing rate (including the performance of GAIN as a benchmark), as mentioned in section 3.1.2. In the second experiment, we compare the performance of GAIN and ClueGAIN at different missing rates. The third experiment compares the prediction performance of GAIN and ClueGAIN. A final experiment reveals the feasibility of using Algorithm 1 to compare the similarity between different data sets. There are several points to note: * For the first three experiments, we conduct each experiment ten times and report either Root Mean Square Error (RMSE) or Area Under the Receiver Operating Characteristic Curve (AUROC) as the performance metric, along with their standard deviations across the 10 experiments. * For all experiments, pre-training, if any, is carried out on BCGE, and CPDSD is used for data imputation. * CPDSD itself has many missing values, which are concentrated in some of the (gene) columns. For the first and second experiments, these missing values would affect the evaluation of the model, so those gene columns with a large number of missing values are discarded in the first two experiments. ### Fine-tuning Methods Comparison As described in section 3.1.2, ClueGAIN has four methods for the fine-tuning stage. 1. Directly use all pre-trained hidden layer parameters (with retrained input and output layers) to perform data imputation. 2. Retrain all the hidden layer parameters using the pre-trained parameters as initialization. 3. Preserve the pre-trained hidden layer parameters and add them to newly trained hidden layers and neurons on the target data set for data imputation. 4. Freeze some of the pre-trained hidden layers and retrain the rest. To complete the comparative experiment, we construct six different neural networks, five ClueGAINs and GAIN as the benchmark.
The first, second, and third ClueGAIN correspond to methods 1, 2, and 3 above respectively. The remaining two ClueGAINs correspond to the method 4, where ClueGAIN4 freezes the half of the hidden layers near the input (shallow layers) and ClueGAIN5 freezes the half of the hidden layers near the output (deep layers). This is because we want to explore the effect of freezing different part of the hidden layers on the model. For different types of data, freezing different layers may have different effects on model performance (Yosinski et al., 2014). Figure 3: Projection of Breast Cancer Gene Expression in Two-dimensional Space Figure 2: Projection of Cancer Patients DNA Sequence Dataset in Two-dimensional Space The generators and discriminators of all ClueGAINs (and GAIN) except the third ClueGAIN have four hidden layers, each of which has 10 neurons. Multiple hidden layers are added because we need the model to learn general characteristics of the source data set as well as specific characteristics of the target data set, which requires sufficient complexity of the model. The third ClueGAIN has four hidden layers (both generator and discriminator) in the pre-trained stage, and eight hidden layers in the fine-tuning stage. The first four hidden layers are frozen pre-trained hidden layers, and the last four hidden layers are trainable newly-added hidden layers. Table 1 shows the performance of the six models with missing rate 60\(\%\), 70\(\%\), 80\(\%\) and 90\(\%\) on CPDSD, respectively. It can be seen that nearly all ClueGAINs' RMSE are lower than that of GAIN for data with high missing rate. This may attribute to pre-training stage bringing prior knowledge to the model, which can ensure that, at the fine-tuning stage, the model still has a basic 'assumption' of the underlying distribution of the target data sets with high missing rate. For CPDSD, among the five ClueGAIN models, the best two models in this experiment are ClueGAIN1 and ClueGAIN5, which perform significantly better than GAIN in the data set with high missing rate. They are selected for further comparison in subsequent experiments. The worst model is ClueGAIN2, which sometimes performs worse than GAIN. ### ClueGAIN and GAIN in Different Missing Rate We now compare the RMSE of two selected ClueGAINs and GAIN at different missing rates of CPDSD. Figure 4 shows the changes of RMSE of ClueGAIN1, ClueGAIN5 and GAIN with data missing rate. According to the figure, the RMSE of GAIN increases rapidly with increasing missing rate. At about 50\(\%\) missing rate, it exceeds RMSE of two ClueGAINs. When comparing two ClueGAINs, ClueGAIN1 has a lower RMSE when the missing rate is less than 60\(\%\), but it exceeds ClueGAIN5's RMSE when the missing rate is greater than 60\(\%\). Interestingly, although ClueGAIN5 has the highest RMSE when the missing rate is less than 50\(\%\), its imputed data performs best on prediction task. Even when the missing rate is extremely low, it is as good as (or even better than) GAIN whose RMSE is lowest in this range (see section 4.3). This may be because the prior information brought by pre-training helps ClueGAIN5 achieve regularization, reducing the possibility or degree of overfitting. Moreover, RMSE, as a common metric for model performance, has some limitation for its application on data imputation (Boursalie et al., 2021). ### Prediction Performance We now compare two selected ClueGAINs against GAIN with respect to the accuracy of post-imputation prediction. 
For this purpose, we use AUROC as a performance measure. To be fair to all methods, we use the same predictive model (logistic regression) in all cases. Comparisons are made on CPDSD multi-class classification task and the results are reported in table 2 and 5. As figure 5 and table 2 shows, ClueGAIN5 performs optimally in this experiment. At low missing rate, its AUROC is almost the same as that of GAIN (even slightly higher than that of GAIN), and significantly higher than that of GAIN when missing rate is greater than 50\(\%\). It achieves a good result by freezing part of the hidden layers and preserving some prior information about the pre-training data, while leaving some hidden layers to learn the specificity of the target data. In contrast, ClueGAIN1's performance is mediocre, with poor performance at low missing rate and only slightly improved at high missing rate. This result can be attributed to the defect of ClueGAIN1's fine-tuning method, which retains all the hidden layers of pre-training and only retrains the input and output layers. As a result, there are not enough hidden neurons in the model to learn the features that are unique to the target data, which leads to the model being too 'rigid' and unable to fit the target data distribution well. data sets. Table 3 records how much the RMSE of ClueGAIN5 is lower than that of GAIN on each data set at 80\(\%\) missing rate, which used as the score for similarity measurement. As shown in the table, The score of CPDSD is the highest, and the score of the other three data sets is significantly lower and close to zero, which is in line with our expectation that CPDSD is most similar to BCGE. ## 5 Discussion ### Novelty and Contribution There are two main innovations in this study. The first innovation is that we proposed ClueGAIN, which combined transfer learning with GAIN for the first time, and better performance is achieved on high missing rate data sets. The success of this combination can be extended to other variations of GAIN mentioned in section 1. These variants have better performance or more applications than GAIN, and transfer learning may bring further improvement. More importantly, it also provides another idea for data imputation, that is, to search for complete data sets with high similarity to the target data set. A telling example is that in the early days of the COV-19 pandemic, there was a large number of missing values in patient data, either because of government controls or because of the difficulty of data collection. However, the data set for common pneumonia may be relatively sufficient and ClueGAIN pre-trained on it may be able to impute the data for COV-19 patients well. The second innovation is a new way to measure the similarity of different data sets. This similarity may indicate the underlying true distribution correlation between the two data sets. While traditional methods are usually based on statistics or mathematics, the method in this study is based on computational methods, which can be applied to two sets of data with large size differences. This approach has great potential for biomedical applications. For example, if two seemingly unrelated protein expression data sets can provide useful prior information on the missing data imputation task of each other, which probably means that there are some potential connections between them, such as overlapping genes regulating their expression, etc. 
### Limitations and Future Studies There are still many limitations in this study that need to be solved and improved in future research. The first limitation is that ClueGAIN may not able to outperform GAIN on a low missing rate data set according to experiments. However, as mentioned in section 5.1, future studies can try to combine those well-behaved GAIN variants with transfer learning, for example HexaGAN (Hwang et al., 2019), which claimed better and more stable performance on specific data sets than GAIN. The second is that ClueGAIN relies too much on finding \begin{table} \begin{tabular}{l r r r r} \hline \hline \multicolumn{1}{c}{**Model**} & \multicolumn{1}{c}{\(60\%\)} & \multicolumn{1}{c}{\(70\%\)} & \multicolumn{1}{c}{\(80\%\)} & \multicolumn{1}{c}{\(90\%\)} \\ \hline ClueGAIN1 &.1694 (\(\pm\).0011) &.1709 (\(\pm\).0013) &.1929 (\(\pm\).0032) &.2305 (\(\pm\).0177) \\ \hline ClueGAIN2 &.1771 (\(\pm\).0025) &.1786 (\(\pm\).0033) &.1938 (\(\pm\).0030) &.2650 (\(\pm\).0142) \\ \hline ClueGAIN3 &.1759 (\(\pm\).0022) &.1772 (\(\pm\).0028) &.1891 (\(\pm\).0065) &.2659 (\(\pm\).0132) \\ \hline ClueGAIN4 &.1725 (\(\pm\).0020) &.1754 (\(\pm\).0025) &.1894 (\(\pm\).0024) &.2459 (\(\pm\).0269) \\ \hline ClueGAIN5 &.1701 (\(\pm\).0010) &.1755 (\(\pm\).0013) &.1852 (\(\pm\).0031) &.2170 (\(\pm\).0165) \\ \hline GAIN &.1762 (\(\pm\).0030) &.1906 (\(\pm\).0041) &.2357 (\(\pm\).0077) &.2611 (\(\pm\).0130) \\ \hline \hline \end{tabular} \end{table} Table 1: Imputation Performance (Average \(\pm\) Std of RMSE) Figure 5: AUROC of ClueGAINs and GAIN vs Missing Rate on CPDSD 'similar' data sets. On the one hand, there are real situations where we can never find a data set that is similar to the target data set. On the other hand, the'similarity' between the pre-training data set and the target data set is a general intuition in this study, and there is no clear mathematical definition, which may cause that two seemingly similar data sets found is not really similar. Future studies can try to reduce the dependence of ClueGAIN on complete,'similar' data sets. Also, using mathematical or statistical methods to measure the similarity of the two sets of data sets in advance, so as to prevent the failure of pre-training due to the large difference between the source data set and the target data set, will be helpful. The third limitation is the high time complexity of the data similarity comparison algorithm. Given \(n\) data sets, the algorithm needs to train \(2n+1\) models, which is not ideal for large data sets. Future research can try to reduce the complexity of the algorithm. In addition to improving the above three limitations, future research can focus on two areas. One is to pre-train ClueGAIN on a large number of different data sets. Given ClueGAIN's good performance with just a few data sets for parameter pre-training, this may lead to excellent performance just like BERT in the field of natural language processing Devlin et al. (2018). The second is to explore the reasons why different fine-tuning methods affect the performance of the model. In the field of image processing, the research of Jason Yosinski et al. Yosinski et al. (2014) has gradually explored the causes and consequences of these effects. However, in the field of data imputation, they are not yet clear and deserve further study. ## 6 Conclusion In this study, ClueGAIN was proposed based on GAIN. With the idea of layer-based transfer learning, ClueGAIN contains two stages, pre-training and fine-tuning. 
Through pre-training on a similar data set, prior knowledge of the target data set can be obtained, so as to better complete the imputation task on data sets with a high missing rate. Our experiments show that ClueGAIN performs better than GAIN in the data imputation task for data sets with a high missing rate. In addition, ClueGAIN can be used to measure the similarity between different data sets, which may help to discover potential correlations. However, ClueGAIN still has some limitations, and there is still a lot of room to explore the application and performance of transfer learning combined with GAIN in data imputation.
2305.01928
Visual Transformation Telling
Humans can naturally reason from superficial state differences (e.g. ground wetness) to transformations descriptions (e.g. raining) according to their life experience. In this paper, we propose a new visual reasoning task to test this transformation reasoning ability in real-world scenarios, called \textbf{V}isual \textbf{T}ransformation \textbf{T}elling (VTT). Given a series of states (i.e. images), VTT requires to describe the transformation occurring between every two adjacent states. Different from existing visual reasoning tasks that focus on surface state reasoning, the advantage of VTT is that it captures the underlying causes, e.g. actions or events, behind the differences among states. We collect a novel dataset to support the study of transformation reasoning from two existing instructional video datasets, CrossTask and COIN, comprising 13,547 samples. Each sample involves the key state images along with their transformation descriptions. Our dataset covers diverse real-world activities, providing a rich resource for training and evaluation. To construct an initial benchmark for VTT, we test several models, including traditional visual storytelling methods (CST, GLACNet, Densecap) and advanced multimodal large language models (LLaVA v1.5-7B, Qwen-VL-chat, Gemini Pro Vision, GPT-4o, and GPT-4). Experimental results reveal that even state-of-the-art models still face challenges in VTT, highlighting substantial areas for improvement.
Wanqing Cui, Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
2023-05-03T07:02:57Z
http://arxiv.org/abs/2305.01928v2
# Visual Transformation Telling ###### Abstract In this paper, we propose a new visual reasoning task, called Visual Transformation Telling (VTT). This task requires a machine to describe the transformation that occurred between every two adjacent states (i.e. images) in a series. Unlike most existing visual reasoning tasks that focus on state reasoning, VTT emphasizes transformation reasoning. We collected 13,547 samples from two instructional video datasets, CrossTask and COIN, and extracted desired states and transformation descriptions to create a suitable VTT benchmark dataset. Humans can naturally reason from superficial states differences (e.g. ground wetness) to transformations descriptions (e.g. raining) according to their life experience but how to model this process to bridge this semantic gap is challenging. We designed TTNet on top of existing visual storytelling models by enhancing the model's state-difference sensitivity and transformation-context awareness. TTNet significantly outperforms other baseline models adapted from similar tasks, such as visual storytelling and dense video captioning, demonstrating the effectiveness of our modeling on transformations. Through comprehensive diagnostic analyses, we found TTNet has strong context utilization abilities, but even with some state-of-the-art techniques such as CLIP, there remain challenges in generalization that need to be further explored. ## 1 Introduction What comes to your mind when you are given a series of images, e.g. Figure 1? We may first notice the content of each image, then link these images in our mind, and finally conclude a series of events from images, i.e., the entire intermediate process of cooking noodles. In fact, this is a typical reasoning process from states (i.e., single images) to transformation (i.e., changes between images), as described in Piaget's theory of cognitive development [6, 39]. More specifically, children at the preoperational stage (2-7 years old) usually focus mainly on states and ignore the transformations between states, whereas the reverse is true for children at the concrete operational stage (7-12 years old). Interestingly, computer vision has evolved through a similar pattern of development. In the last few decades, image understanding, including image classification, detection, captioning, and question answering, mainly focused on visual states, and has been comprehensively studied, achieving satisfying results. Now is the time to pay more attention to visual transformation reasoning tasks. Recently, there have been some preliminary studies [17, 36] on transformation. For example, TVR [17] defines a transformation-driven visual reasoning task, where both initial and final states are given, and the changes of object properties including color, shape, and position are required to be obtained based on a synthetic dataset. However, the current studies on transformation reasoning remain limited in two aspects. Firstly, the task is defined in an artificial environment that is far from Figure 1: **Visual Transformation Telling (VTT).** Given states represented by images (constructed from videos), the goal is to reason and describe transformations between every two adjacent states. reality. Secondly, the definition of transformation is limited to predefined properties, which cannot be generalized well to unseen or new environments. As a result, the existing transformation reasoning task cannot meet the requirement of real-world applications. 
In addition, the lack of strong transformation reasoning ability will hinder more advanced event-level reasoning tasks, such as visual storytelling [49] and procedure planning [8], since transformation plays an essential role in these tasks. To address these limitations, we propose a new task called Visual Transformation Telling (VTT) in this paper. The major motivation behind VTT is to provide descriptions for real-world transformations. For example, given two images depicting dry and wet ground, respectively, the task is to describe the transformation accurately as "it rained", which precisely captures a cause-and-effect relationship. Therefore, the formal definition of VTT is to generate language sentences that describe the transformation for a given series of states, i.e. images. VTT poses specific challenges that differ from traditional visual reasoning tasks. The primary challenge is the _semantic gap_ between state differences and transformation descriptions. In the case of rain, the change in the wetness of the ground at the state level is merely a surface phenomenon, while the more fundamental thing is the induced transformation of raining that occurs behind the scenes. Humans can naturally reason from the state differences (i.e., ground wetness) to transformation descriptions (i.e., it rained) based on their life experience. However, modeling this process is not obvious and is challenging. It is worth noting that VTT is distinct from video description tasks such as dense video captioning [23] because videos depict the complete process of transformations, which reduces the challenge of reasoning. To facilitate the study of VTT, we collect 13,547 samples from two instructional video datasets, including CrossTask [62] and COIN [46, 47], which were originally used for evaluating step localization, action segmentation, and other video analysis tasks. These samples are suitable for modification to fit VTT, as the transformations mainly depict daily activities. Furthermore, some main steps required to accomplish a certain job were annotated in the data, including temporal boundaries and text descriptions. Therefore, we extracted key images from the video as input, and directly used their text labels of the main steps as transformation descriptions. Randomly inspected data samples showed small state differences; we found that inducing transformations from these states is feasible but challenging for humans. Inspired by the human cognitive process of transformation reasoning, we proposed a new encoder-decoder architecture model called TTNet, which models transformations on top of existing visual storytelling models [14, 21] to better capture and describe them. Our main idea is to enhance the model's ability to capture semantic-level differences in states and to fully utilize context to strengthen the reasoning capability of transformations. Specifically, we first utilize CLIP [40] as the image encoder to _semantize_ images into state representations. The semantic-level state differences are computed between every two adjacent states. Next, a transformer _contextualizes_ state representations, as well as corresponding difference features, into transformation representations. During training, in order to enable the model to better capture transformations using contextual information, we adopted a strategy similar to the training objectives of BERT, namely, masked transformation modeling and topic prediction. Finally, another transformer _textualizes_ the transformation representations into descriptions.
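The semantize-contextualize-textualize pipeline just described can be made concrete in a few lines. The following is a minimal illustrative sketch, not the authors' released implementation: module sizes, layer counts, and the linear projection standing in for a frozen CLIP backbone are all assumptions of this sketch.

```python
# Minimal sketch of the semantize -> contextualize -> textualize pipeline.
# Dimensions and the linear stand-in for the CLIP image encoder are assumptions.
import torch
import torch.nn as nn

class TTNetSketch(nn.Module):
    def __init__(self, d_model=512, vocab_size=10000):
        super().__init__()
        self.project = nn.Linear(768, d_model)  # stands in for a frozen CLIP encoder
        enc = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(enc, num_layers=4)
        dec = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=4)
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, captions):
        # image_feats: (B, N+1, 768) precomputed state features for N+1 images
        # captions:    (B, N, L) token ids of the N transformation descriptions
        states = self.project(image_feats)             # semantize
        context = self.context_encoder(states)         # contextualize
        trans_repr = context[:, 1:, :]                 # one representation per transformation
        B, N, L = captions.shape
        tok = self.word_emb(captions).flatten(0, 1)    # (B*N, L, d)
        # the transformation representation is added to every word embedding,
        # which the paper found better than using it as a prefix token
        tok = tok + trans_repr.reshape(B * N, 1, -1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        memory = context.repeat_interleave(N, dim=0)   # (B*N, N+1, d)
        out = self.decoder(tok, memory, tgt_mask=causal)  # textualize
        return self.lm_head(out)                       # (B*N, L, vocab) logits
```

Training such a sketch would minimize the token-level negative log-likelihood of the ground-truth descriptions, which is the objective the paper adopts.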
We use VTT to analyze TTNet and adapted classical models from similar tasks, including CST [14], GLACNet [21] from visual storytelling, and Densecap [20] from dense video captioning. TTNet significantly outperforms other baselines that do not model transformations specifically and demonstrates its strong ability to utilize context for reasoning transformations. However, we found current models exhibit limited generalization ability in both language composition and transformation combination, suggesting significant research potential in this direction. In summary, our main contributions are: 1) proposing a novel task called visual transformation telling and collecting the VTT dataset from instructional videos to emphasize the reasoning of transformation in real-world scenarios. 2) introducing TTNet, an innovative model that is state-difference-sensitive and transformation-context-aware, specifically designed for transformation reasoning. 3) conducting an extensive experimental analysis using the VTT dataset, showcasing the remarkable effectiveness of TTNet but highlighting limitations in the generalization of language composition and transformation combination. ## 2 Related Works VTT is a type of visual reasoning task so we first discuss the relationship between VTT and some typical visual reasoning tasks. CLEVR [19] and GQA [18] concentrate on object relation and logical reasoning. RAVEN [58] and V-PROM [48] concentrate on the induction and reasoning of graphic patterns. VCR [57] and Sherlock [16] test the machine's ability to learn commonsense knowledge to answer daily questions. These tasks mainly focus on state-level reasoning. In addition to these tasks, there is a series of works related to dynamic reasoning. Physical reasoning [2, 5, 13, 43, 55] evaluates the ability to learn physical rules from data to answer questions or solve puzzles. VisualCOMET [37] requires reasoning beyond the given state to answer what happened before and will happen next. Visual storytelling [37] requires logically telling a story from information-incomplete states. The field of visual reasoning tends to shift from static scenes to dynamic ones. While reasoning in dynamic scenes, state and transformation are both essential, we focus on transformation reasoning to better evaluate and improve this ability, which distinguishes VTT from state-only and more complex composite tasks. To the best of our knowledge, there are few studies on designing specific tasks for visual transformation reasoning. The only related work is TVR [17], which requires predicting a sequence of changes in properties (e.g. color) given the initial and final states. However, the synthetic scenario used in TVR is far from reality and the property changes it requires are not commonly used to describe transformations in everyday life. In contrast, VTT emphasizes event-level description, which is a more natural way of describing transformations. Visual storytelling [42, 49] also requires event-level description but mixes transformations with other content, making it difficult to evaluate transformation reasoning specifically. Visual abductive reasoning [26] has a similar core idea to VTT, which is to find the most likely explanation for incomplete observations. However, VTT aims to reason multiple logically related transformations from states, while their task only requires reasoning a single missing transformation from multiple transformations. 
Procedure planning [8] aims to complete a job given states, while VTT focuses on explaining transformations between states, which has wider scenarios, such as explaining the wet ground with rain. Furthermore, the requirement for natural language generation in VTT leads to different evaluations and unique challenges, such as generalization on language compositions and transformation combinations. Finally, walkthrough planning [8] has a different target, which is to predict intermediate states. Another related topic is visual description. Tasks that describe a single image include image captioning [12, 24], dense image captioning [20], and image paragraphing [22], which vary in the level of detail required. Tasks that describe videos include video description [52], video paragraph description [56], grounded video description [61], and dense video captioning [23], which start to describe events rather than a single state. For example, dense video captioning asks to predict temporal boundaries of key events and describe them. However, these tasks do not explicitly require reasoning about transformations since they provide the full process of transformation throughout frames. ## 3 Visual Transformation Telling ### Task Definition Visual transformation telling aims to test machines' ability to reason and describe transformations from a sequence of visual states, i.e., images. Formally, \(N+1\) images \(S=\{s_{n}\}_{n=1}^{N+1}\) are provided, which are _logically related_ and _semantically distinct_. Logically related means that these images are associated with a particular event, e.g., completing a job, while semantically distinct implies substantial changes that are meaningful to people, i.e. transformation. The objective is then to reason \(N\) transformations \(T=\{t_{n}\}_{n=1}^{N}\) between every two adjacent images and describe them in natural language, such that \(s_{1}\to t_{1}\to s_{2}\rightarrow\cdots\to t_{N}\to s_{N+1}\) is logically sound. ### VTT Dataset To create a dataset that covers a wide range of real-world transformations, we chose instructional videos as our resources, as they contain many everyday activities. Specifically, we selected two typical instructional video datasets, i.e. CrossTask [62] and COIN [46, 47], to construct our data. Figure 1 illustrates an instructional video from COIN on cooking noodles and how we transformed their annotation into our VTT dataset. We can see that the video is segmented into multiple main steps, each annotated with precise temporal boundaries and text labels. We directly used their text labels as transformation descriptions and extracted states based on the temporal boundaries. For the first transformation, we used the first frame of the corresponding step segment as its start state and the last frame as its end state. For the remaining transformations, the end state is extracted in the same way, while the start state shares the end state of the previous transformation. We manually checked the quality of random samples and found that transformations could be reasoned out from states most of the time. Using this method, we collected 13,547 samples with 55,482 transformation descriptions from CrossTask and COIN, forming our new data for VTT. Figure 2 shows the distribution of the sample categories, keywords, transformation length, and sentence length. Figure 2: Distributions of VTT samples. (a) Category. (b) Words. (c) Transformation length (top), sentence length (bottom).
From the category distribution and the word cloud, we can see that the VTT data covers a wide range of daily activities, like dish, electrical application, gadgets, etc. Furthermore, the distribution of transformation length shows diversity, with most samples containing about 2-5 transformations. The average sentence length is around 2-6, indicating that short descriptions make up the majority. In addition to state images and transformation descriptions, each sample in VTT also comes with coarse-grained category labels (e.g., dish) and fine-grained topic labels (e.g., boil noodles). More details on the construction of VTT and dataset statistics are provided in the supplementary. The dataset will be made publicly available. ## 4 Method Our TTNet is inspired by human's cognitive process of transformation and existing visual storytelling models [14, 21]. In this section, we first introduce the problem formulation and the basic structure of TTNet. Then we describe how we model transformation by enhancing the model's ability to capture semantic-level differences with difference sensitive encoding, and fully utilize context to strengthen transformation reasoning with masked transformation model and auxiliary learning. **Problem Formulation.** The basic idea of solving VTT is to find a parameterized model \(f_{\theta}\) to estimate the conditional probability \(p(T|S;\theta)=p(\{t_{j}\}_{j=1}^{N}|\{s_{i}\}_{i=1}^{N+1};\theta)\), where \(s_{i}\in\mathbb{R}^{C\times W\times H}\) is a state represented as an image and \(t_{j}=\{x_{j,l}\}_{l=1}^{L}\) is a sentence of length \(L\). The conditional probability can also be written as auto-regressively generating \(N\) sentences: \[p(T|S;\theta)=\prod_{j=1}^{N}\prod_{l=1}^{L}p(x_{j,l}|x_{j,<l},\{s_{i}\}_{i=1}^{N+1};\theta) \tag{1}\] **Base structure of TTNet.** Inspired by humans and existing visual storytelling models, the first step in TTNet is independent recognition, where each image is understood independently. To achieve this, an **image encoder**\(f_{\text{state}}\) is introduced to _semantize_ each image into a vector, resulting in a set of state representations \(V=\{v_{i}\}_{i=1}^{N+1}=\{f_{\text{state}}(s_{i})\}_{i=1}^{N+1}\). The next step is to associate these states together to form a complete understanding of the event. To reflect this process, a **context encoder** is used. This encoder, which can be a bi-directional RNN or a transformer encoder, is denoted as \(f_{\text{trans}}\) and _contextualizes_ the state representations to obtain transformation representations \(C=\{c_{i}\}_{i=1}^{N+1}=\{f_{\text{trans}}(i,V)\}_{i=1}^{N+1}\). The final step is to describe the transformations based on the existing understanding. In TTNet, this is achieved using a **transformation decoder**\(f_{\text{text}}\), which can be an RNN or a transformer decoder. This decoder _textualizes_\(N\) transformation representations into separate descriptions \(T=\{t_{i}\}_{i=1}^{N}=\{f_{\text{text}}(c_{i+1})\}_{i=1}^{N}\), in an auto-regressive manner. Empirically, it was found that adding the transformation representation to the word embedding in each step is better than using it as the prefix token. The training objective is to reduce the gap between generated transformations and ground truth transformations \(T^{*}=\{t_{i}^{*}\}_{i=1}^{N}\) by minimizing the negative log-likelihood loss, where \(t_{i}^{*}=\{x_{i,l}^{*}\}_{l=1}^{L}\) is the ground truth description of the \(i_{\text{th}}\) transformation. Figure 3: **The architecture of TTNet.
Images are first _semantized_ into state representations in the image encoder, then _contextualized_ to be transformation representations in the context encoder, and finally _textualized_ into text by the transformation decoder. To better model transformations, difference sensitive encoding is used to capture semantic-level differences, while the masked transformation model and auxiliary learning are used to fully utilize context to strengthen transformation reasoning.** \[\mathcal{L}_{\text{text}}=-\sum_{i=1}^{N}\sum_{l=1}^{L}\log p_{\theta}(x_{i,l}^{*}|x_{i,<l}^{*}) \tag{2}\] Next, we introduce three strategies we used to model transformation, and we call the model that does not use these strategies TTNetbase. **Difference Sensitive Encoding.** To bridge the semantic gap between state differences and transformation descriptions, the first step is to enable the model to accurately identify and capture the variations between states. However, capturing differences is challenging since adjacent states often exhibit minimal variation at the pixel level. This is mainly because the scene remains almost unchanged before and after the transformation, and only certain attributes of the transformed object have changed. Our intuition to solve this problem is that despite the minimal differences between states at the pixel level, there are often significant semantic differences. Therefore, we first choose CLIP [40] as our image encoder to extract state representations, due to CLIP's strong semantic representation ability trained on large-scale unsupervised data. Then, we compute semantic difference features between adjacent states by subtracting the current state and the previous state representations \(\Delta V=\{v_{i}-v_{i-1}\}_{i=1}^{N+1}\), where \(v_{0}=v_{N+1}\). In TTNet, we feed both the state representations and the semantic difference features into the context encoder. To make the model able to distinguish these two kinds of features, we initialize two learnable types of embeddings and add them to the corresponding features. **Masked Transformation Model.** After identifying state differences, the next challenge is to efficiently reason about the underlying transformations. For humans, one common approach is to fully utilize the context to aid reasoning rather than focusing solely on adjacent states. Therefore, we chose the transformer [50] as the backbone of the context encoder, given its well-known ability to encode contextual information. However, in our initial experiments, we found TTNetbase failed to fully utilize context information when reasoning about transformations. A typical example is shown in Figure 4, where TTNetbase mistakenly identified an orange as an egg due to their similarities in the image. Nevertheless, such ambiguity can be resolved by incorporating other correct transformations. Hence, the question becomes how to enhance the model's ability to leverage contextual information. Inspired by BERT objectives, we proposed two strategies, including the masked transformation model (MTM) and auxiliary learning. Similar to the masked language model [10], the intuition behind MTM is that one transformation can be reasoned from nearby transformations. Specifically, during training, 15% of the features fed into the context encoder, including state representations and semantic difference features, are randomly masked. Empirically, we found using MTM with a 50% probability works better.
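The two strategies just described are simple to state in code. The sketch below is only an illustration under assumed tensor shapes: the 15% rate comes from the text, while the zero-initialized mask token and the batch layout are assumptions of this sketch.

```python
# Illustrative sketch of difference-sensitive encoding and MTM-style masking.
# Shapes, the zero-initialized mask token, and the batch layout are assumptions.
import torch

def difference_features(states: torch.Tensor) -> torch.Tensor:
    """states: (B, N+1, d) CLIP state representations.
    Returns v_i - v_{i-1} for every slot, with v_0 taken to be v_{N+1}."""
    prev = torch.roll(states, shifts=1, dims=1)   # prev[:, 0] == states[:, -1]
    return states - prev

def mask_inputs(features: torch.Tensor, mask_token: torch.Tensor, p: float = 0.15):
    """Randomly replace a fraction p of the features fed to the context encoder
    (state representations and difference features alike) by a mask token."""
    B, T, d = features.shape
    keep = torch.rand(B, T) >= p
    masked = torch.where(keep.unsqueeze(-1), features, mask_token.expand(B, T, d))
    return masked, keep

# usage sketch: 2 samples, N+1 = 5 states, d = 512
states = torch.randn(2, 5, 512)
diffs = difference_features(states)
encoder_inputs = torch.cat([states, diffs], dim=1)   # type embeddings would be added here
masked_inputs, kept = mask_inputs(encoder_inputs, torch.zeros(512))
```

In training, this masking would itself only be applied on a fraction of the batches, reflecting the 50% MTM probability mentioned above.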
**Auxiliary Learning.** Following the target of fully utilizing context information, another strategy is focused on the global representation. BERT applied the objective of next sentence prediction (NSP), but this is not suitable for our task. However, we found humans usually try to guess the category or topic before describing transformations, e.g. cooking noodles. Therefore, we set another objective that requires TTNet to predict the category and topic from the global representation during training. Two additional cross-entropy losses \(\mathcal{L}_{\text{category}}\) and \(\mathcal{L}_{\text{topic}}\) can be computed from these two classification problems. The final training loss becomes a combination of \(\mathcal{L}_{\text{text}}\), \(\mathcal{L}_{\text{category}}\), and \(\mathcal{L}_{\text{topic}}\), with adjustment factors \(\alpha\) and \(\beta\): \[\mathcal{L}=\mathcal{L}_{\text{text}}+\alpha\mathcal{L}_{\text{category}}+\beta\mathcal{L}_{\text{topic}}. \tag{3}\] ## 5 Experiments In this section, we first introduce our empirical setups including baseline methods and evaluation metrics. Then we demonstrate the main empirical results on VTT, including both quantitative and qualitative results. After that, we show extensive analyses of how well models utilize contextual information and generalize to unseen cases. ### Empirical Setups **Baseline Models.** For the baseline models, we selected two classic methods, CST [14] and GLACNet [21], winners of the visual storytelling challenge [33]. Visual storytelling generates \(N\) descriptions from \(N\) images. CST contextualizes image features by LSTM and then generates descriptions with separate LSTMs for each image. GLACNet mixes global LSTM features and local image features into context features and then generates descriptions with a shared LSTM decoder. When generating VTT descriptions, only the last \(N\) context features were used. We also compared with DenseCap [20], a method for dense video captioning. Dense video captioning aims to describe a series of events in a video and requires predicting temporal boundaries for events. Figure 4: A failure case from TTNetbase which has the potential to be corrected by utilizing context information. DenseCap [20] integrates past and future information into image features to capture the context information. Although there are many advanced methods for dense video captioning, they highly rely on fine-grained video features, which are not suitable for our task. All methods were closely implemented as per the original papers. For a fair comparison, we also provided the baseline models with the same image encoder as TTNet, marked with '*'. The implementation details of TTNet as well as the baseline models are described in the supplementary. **Evaluation Metrics.** We selected automated metrics for evaluation, including BLEU@4 [35], CIDEr [51], METEOR [3], ROUGE-L [27], SPICE [1], and BERT-Score [60], following previous works on visual descriptions [23, 26, 49]. While we aimed to evaluate the logical consistency of generated transformation descriptions, automatic evaluation of content logical consistency remains a very challenging problem in the NLP field with no solution to date. Generally, only manual evaluation can be performed.
Therefore, we asked 25 human annotators to assess the quality of transformation descriptions using a Likert scale ranging from 1 to 5 based on the following criteria: _fluency_, measuring how well-written the transformation is; _relevance_, assessing how relevant the transformations are to the image states; and _logical soundness_, evaluating how well the overall logic conforms to common sense. ### Main Experimental Results **Quantitative Results.** Table 1 summarizes the results of 7 models on the VTT dataset, including TTNet, TTNetbase, CST and its CLIP version, GLACNet and its CLIP version DenseCap. From the results, TTNet outperforms other models on most metrics by a large margin, e.g. CIDEr is 11% higher than the second best model, i.e. TTNetbase. This large improvement indicates the three strategies we used for modeling transformation are effective since they are the only differences between TTNet and TTNetbase. Further comparing human metrics between them, the main strength of TTNet is the much stronger overall logic of the generated descriptions, while the relevance is only slightly better and the fluency is about the same. In our supplementary, we show detailed ablation studies on three strategies. When Comparing TTNetbase with GLACNet*, the performance difference is small, likely due to their similar design philosophy. However, TTNetbase converges faster during training due to the transformer's more efficient capture of contextual information compared to LSTM. In contrast, the performance gap between CST*, GLACNet*, and Densecap* is significant, despite all using CLIP. The difference lies in how they encode context and generate text. LSTM in encoding context captures more information than past and future attention features, thus GLACNet* outperforms DenseCap* with higher relevance and logical soundness scores. During text generation, GLACNet inputs the transformation representation with the previous word at all LSTM steps, whereas CST only uses the transformation representation as the initial state of LSTM. This small difference significantly impacts fluency and is why GLACNet* outperforms CST*. We believe this is because each step of the generation process includes transformation representation, allowing for more complete utilization of information. **Qualitative Results.** We present two examples from the VTT test data in Figure 5 that involve sowing and pasting a car sticker. In both cases, TTNet correctly reasons all transformations, while DenseCap and GLACNet fail to generate associative or coherent descriptions. In the sox case, DenseCap and GLACNet failed to identify the true actions or entities, e.g., sow, soil, and cover. We believe the dramatic change in the camera perspective makes DenseCap and GLACNet focus on incorrect areas of the pictures, leading to incorrect transformations. In contrast, TTNet is robust to noticing the correct transformed objects. In the paste car sticker case, the entire pasting process is long and the difference between images is small, with only a small area of the sticker being changed. CST and GLACNet describe multiple repeating transformations in the wrong place, resulting in an overall incoherent process. The success of TTNet suggests it is sensitive to small differences and can \begin{table} \begin{tabular}{l|c c c c c c|c c c} \hline \hline Model & B@4 & M & R & C & S & BS & Flu. & Rel. & Logic. 
\\ \hline CST & 10.09 & 11.39 & 25.98 & 43.22 & 9.28 & 16.30 & - & - & - \\ CST* & 13.96 & 19.21 & 38.11 & 84.60 & 21.85 & 25.66 & 2.04\({}^{\dagger}\) & 3.16\({}^{\dagger}\) & 2.96\({}^{\dagger}\) \\ GLACNet & 42.77 & 45.26 & 52.98 & 381.48 & 45.33 & 60.12 & - & - & - \\ GLACNet* & 55.24 & 59.48 & 66.25 & 508.18 & 60.21 & 71.13 & 4.75 & 3.82\({}^{\dagger}\) & 3.78\({}^{\dagger}\) \\ DenseCap* & 48.25 & 52.00 & 59.79 & 439.68 & 53.73 & 66.30 & 4.74 & 3.67\({}^{\dagger}\) & 3.59\({}^{\dagger}\) \\ \hline TTNetbase & 55.68 & 60.47 & 67.05 & 515.12 & 61.45 & 72.22 & **4.79** & 4.04 & 3.95\({}^{\dagger}\) \\ TTNet & **61.22** & **66.31** & **71.84** & **570.63** & **66.20** & **76.25** & 4.78 & **4.10** & **4.11** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on VTT evaluated using B@4(BLEU@4), M(METEOR), R(ROUGE-L), C(CIDEr), S(SPICE), BS(BERT-Score), Flu.(Fluency), Rel.(Relevance), and Logic.(Logical Soundness). * indicates the use of CLIP image encoder for a fair comparison. \(\dagger\) indicates TTNet significantly (\(p<0.05\)) outperforms corresponding models on human evaluation metrics. leverage the context to aid transformation reasoning. ### Diagnostic Analyses We learned from the human transformation reasoning process that modeling transformations involves three core issues, including state representation, context modeling, and transformation reasoning. The main challenge of _state representation_ is to extract effective semantic information from images for further reasoning. CLIP has been widely validated to have powerful semantic representation ability [30, 34, 40, 54]. We also compared state-of-the-art image encoders in the supplementary and ViT-L/14 from the CLIP family became our final choice for most experiments. The challenge of _context modeling_ is to fully leverage all the state information and transformations in other steps to reason the current transformation. In Section 5.3.1, we first analyze the importance of context for VTT and then test how well models utilize context. The primary challenge of _transformation reasoning_ is to generalize to novel combinations of single transformations or fine-grained elements (e.g. actions and entities) unseen during training. In Section 5.3.2, we test how models perform on unseen transformation combinations and language compositions. #### 5.3.1 Analyses on Context Modeling **Analyzing Context Importance for VTT.** To determine the importance of the context for VTT, we evaluated models in an independent setting where each transformation could only be reasoned from two adjacent states, without accessing other states. If context were not important, the performance of models would remain unchanged. However, Table 2 shows all four models experienced a significant performance drop. For example, TTNet's CIDEr score decreased by approximately 39%, indicating the crucial role of context in transformation reasoning. We also retrained TTNet on data constructed following the independent setting, and while performance improved, there remained a considerable gap compared to fully accessing context, further demonstrating the importance of context for VTT. **Assessment on Utilizing Context.** Having established the importance of context, it is important to test models' ability to utilize it. 
We examined two settings where the provided states gradually decreased. The basic idea is that models with strong context utilization ability can compensate for missing information by relying on context. In the "randomly mask one" setting, only one state in each sample was masked, while in the "start & end only" setting, only start and end states are provided. Figure 6 demonstrates TTNet has the highest robustness as more states are missing, highlighting its exceptional ability to utilize context for transformation reasoning. Comparing TTNet to two of its variants, one without MTM and one without semantic difference features, we concluded that both MTM and semantic difference features contribute to context utilization, with the latter having a greater impact. \begin{table} \begin{tabular}{l c c} \hline \hline Model & Normal & Adjacent States Only \\ \hline CST* & 84.90 & 49.80 \\ DenseCap* & 439.53 & 295.75 \\ GLACNet* & 508.19 & 268.49 \\ TTNet & **570.63** & 349.96 \\ \hline TTNet (retrain) & - & **459.84** \\ \hline \hline \end{tabular} \end{table} Table 2: Models perform worse with only adjacent states in terms of CIDEr score and re-training on them still falls short of the normal setting. Figure 5: Qualitative comparison on the VTT test data. Above: sow. Below: paste car sticker. Figure 6: TTNet performs most robustly when reasoning on partial context (some states are missing). #### 5.3.2 Analyses on Transformation Reasoning **Assessment on Reasoning Unseen Transformation Combinations.** A robust transformation reasoning system should be able to generalize to unseen transformation combinations, where individual transformations have been seen during training, but certain combinations have not. This often occurs when there are multiple ways of achieving the same task such as cooking noodles. In VTT, more than half of the combinations in the test set are not present in the training set (532 seen vs. 559 unseen). To evaluate how well models can reason about unseen transformation combinations, we divided the test set into two splits: "seen" (combinations appeared in the training set) and "unseen" (new combinations). As shown in Table 3, all models perform significantly worse on the unseen combinations than on the seen ones, with TTNet's logical soundness dropping by roughly 10% (from 4.29 to 3.86), showcasing the challenge of generalization. The performance gap between TTNet, TTNetBase, and DenseCap* on the unseen split is less significant than the gap on the seen split, implying that our strategies for modeling transformation primarily help with reasoning seen transformation combinations, while providing little benefit for reasoning unseen combinations. **Assessment on Reasoning Unseen Language Compositions.** A robust transformation reasoning system should also be able to generalize to unseen language compositions, where individual words such as entities and actions have been seen during training, but their combinations have not. For example, successfully reasoning the unseen transformation "pour coffee" when only "pour milk" and "make coffee" appeared in the training set. According to our statistics, VTT has a high proportion of shared vocabulary; this is the major reason that VTT is designed as a natural language generation task rather than a classification task, as models have a better chance of learning common patterns from transformations with shared words.
To evaluate model generalization to new language compositions, we evaluated models on several manually labeled samples from "related" tasks in CrossTask. In the example shown in Figure 7, transformations for the topic _Make Bicerin_ have not appeared in VTT but are composed with seen words. However, all models failed to generate new descriptions and instead produced existing descriptions that matched the states as closely as possible. This indicates a significant limitation in the models' ability to generalize to new language compositions. ## 6 Conclusion and Discussion This paper introduces visual transformation telling (VTT), a new visual reasoning task that focuses on reasoning transformations between states in a series of images, which is a crucial cognitive skill for humans. To the best of our knowledge, this is the first real-world application for transformation reasoning by defining transformation descriptions as output. We built the VTT dataset using 13,547 samples collected from CrossTask and COIN. To model transformation reasoning, we developed TTNet by enhancing the model's state-difference sensitivity and transformation-context awareness. Experiments show the effectiveness of TTNet in terms of natural language generation metrics and human evaluations, especially in utilizing context for transformation reasoning. However, our analyses also revealed the limitations of current transformation reasoning models in generalizing to unseen transformation combinations and language compositions. We attribute this issue to the limited size of the VTT dataset, which contains only 861 unique words, 853 unique descriptions, and 6618 unique transformation combinations. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Seen} & \multicolumn{4}{c}{Unseen} \\ \cline{2-9} Model & C & Flu. & Rel. & Logic. & C & Flu. & Rel. & Logic. \\ \hline CST* & 0.99 & 1.95 & 3.22 & 3.00 & 0.73 & 2.17 & 3.08 & 2.91 \\ GLACNet* & 6.21 & 4.80 & 3.90 & 3.91 & 4.11 & 4.69 & 3.70 & 3.59 \\ DenseCap* & 5.16 & 4.72 & 3.66 & 3.61 & 3.75 & 4.76 & 3.68 & 3.57 \\ \hline TTNetBase & 6.02 & 4.80 & 4.08 & 4.00 & 4.40 & **4.77** & **3.99** & **3.88** \\ TTNet & **7.01** & **4.81** & **4.23** & **4.29** & **4.59** & 4.74 & 3.93 & 3.86 \\ \hline \hline \end{tabular} \end{table} Table 3: Models including TTNet perform worse on unseen transformation combinations. Figure 7: Models fail to describe unseen transformations composed by seen words. This is insufficient to cover diverse real-life transformations and their combinations. While scaling the dataset has been shown to improve generalization ability, according to previous successful cases such as GPT-3 [7] and CLIP [40], CrossTask and COIN are already the largest video datasets that can provide transformation annotations, and scaling the dataset by annotating more samples is too costly. Therefore, we propose leveraging large-scale video-caption datasets such as HowTo100M [31] to learn transformation reasoning, with the main challenge being how to extract transformation-related information from messy video caption data. Additionally, advanced modeling techniques such as modeling the transformation process with causal models may also help mitigate the limited data issue.
## Acknowledgement This work was supported by the National Key R&D Program of China under Grant 2022YFB3103704, in part by the National Natural Science Foundation of China (NSFC) under Grant 62276248, and in part by Beijing Academy of Artificial Intelligence (BAAI) under Grant BAAI2020ZJ0303.
2303.03269
Prolegomena to the Formalisation of Aristotle's Topics
The present paper aims at providing some preliminary logical, linguistic and philosophical considerations on Aristotle's Topics. A deeper and more thorough investigation will be the object of a future paper.
Clarence Protin
2023-03-06T16:36:02Z
http://arxiv.org/abs/2303.03269v4
# Logic and Semiotics in Aristotle's Topics ###### Abstract We give a first sketch and outline of our approach to a first-order formalisation of Aristotle's Topics. ## Philosophical Preliminary The medieval scholastics used to distinguish between primary and secondary intensions. Consider the sentence: the term Man is subordinate to the term Animal. When using the expression 'term Man' we are not referencing a concrete man or even a collection of men. Nor are we referencing an inscription or physical or perceptual sequence of tokens. Rather we are referencing a complex consisting both of linguistic and semantic data and their relationship. In this paper we take the approach that such complexes can be formalised as linguistic data (in a suitable ideal language) together with a collection of first-order predicates applying to such data which formalise both grammatical and semantic information and relations. We are inspired by the work of Richard Milton Martin and David Parsons' account of the Inscriptional Meta-Language, with the difference that we do not think of linguistic entities as concrete 'inscriptions' but in a slightly more abstract and ideal sense which yet preserves all the crucial formal combinatorial structure of concrete inscriptions. ## 1 Introduction One of the distinguishing features of 20th-century philosophy is the focus on symbolic logic and its use as a tool for the analysis of language. But was this really something revolutionary and new? Philosophy is expressed in spoken or written language. It is generally believed that such expression must consist primarily of what are called persuasive or valid arguments, be these carried out in the form of debate or as a single text/discourse. Thus it is quite natural that the preliminary step before engaging in philosophy is to inquire what a valid or persuasive argument consists of, as well as what a specifically philosophical argument is. This question is general enough to transcend the boundaries between what was later called 'logic' and 'linguistics'. To put it another way, there is no doubt that to the modern philosopher the study of this question will involve just as much the 'philosophy of logic' as the 'philosophy of language' (which in fact already overlap). A remarkable feature of Aristotle's Topics is that it is entirely dedicated to this radical question and presents us with the first systematic attempt carried out in ancient times to furnish a detailed answer to this question, in which logical, linguistic and semantic concerns are woven into a systematic whole. The Topics is focused on the theory of definition. Also the Topics divides all subjects of discussion into ethics, physics and logic1. Footnote 1: the kinds of definition considered by Aristotle would correspond in modern terms to Mathematics, Ethics, Psychology, Physics, Logic, Ontology and Epistemology. Ancient grammar (or ancient linguistics), for all its alleged unsophistication and naivete, presents us, as we shall see, with a fresh and unprejudiced approach to language. A paper by Bobzien and Shogry2 documents illuminating connections between quantifier logic and grammar in the case of Stoic philosophy. Footnote 2: [https://ora.ox.ac.uk/objects/uuid:b51729b1-6891-4b09-98ed-6d0b6b47bc3c/files/sdf65v7975](https://ora.ox.ac.uk/objects/uuid:b51729b1-6891-4b09-98ed-6d0b6b47bc3c/files/sdf65v7975) The text of the Topics is difficult and in places corrupt.
It is obvious that different sections were added at different times and that we cannot hope for complete consistency in the definition of some of the key terms. The Topics have been traditionally dismissed as an immature and somewhat undigested work which was superseded by the more profound and rigorous theories of the Analytics. The Topics are based on a fairly elaborate system of primitive grammatical, semantic and logical concepts. These include of course the famous five predicabilia (genus, species, difference, property and accident) and the ten categories. Following [16] we consider a topic to be a universally quantified proposition taken as an axiom which is instantiated in accordance with the needs of a debate or argument, either for proof or refutation (generally the form of the topic adequate for refutation follows from applying modern logical rules to the topic). All topics must be expressed in terms of the aforementioned system of primitive concepts. For Aristotle the original meaning of 'syllogism' was simply a logical rule. Topics are logical-grammatical-semantic axioms which are converted into logical rules. The goal of this work is to bring to light these primitive concepts and to express formally all the topics, and also to attempt to reduce the total system of topics to slightly fewer ones. Inspired by the work of Richard Martin (and also to a certain extent the work of M. Malink on the First Analytics) we take a first-order approach rather than a second-order or more complex approach. Connections can be made with certain formalist or mathematically inspired schools of modern linguistics as well as with structuralist semiotics and cognitive science. The rest of this work is dedicated to showing that the theory of the five predicabilia is very rich philosophically and linguistically and still of relevance to modern thought3. Footnote 3: see our previous work [15] for further discussions. We propose a detailed comparison with Hegel's theory of the concept (3rd part of the Science of Logic), with Husserl's Logical Investigations and with modern approaches to the organisation of semantic memory and formal ontologies. But the most important and interesting aspect of the Topics is the idea of formal rule-based (scientific, ethical, philosophical) debate - similar to a game like Chess - which includes both formal logic and formalised semantic and syntactic aspects of natural language. It could be hoped that this aspect might inspire us to elaborate more refined forms of debate based on better logico-linguistic standards and thereby serve scientific and socio-cultural progress. Ancient Greek philosophers loved formal argumentation and debate, and there is evidence that a massive number of arguments were compiled during the eras of the late Academy and the first Stoic schools, as testified by accounts of the lives of Chrysippus and Carneades. Ancient philosophy at its best was all about critical and formal argumentation and debate, not system-building, myth-making or apologetics in service of the powers that be. ## 2 Aristotelic Grammar Unfortunately Aristotle's view of grammar, as well as of many fundamental questions concerning terms and concepts, is only documented in a few scattered passages. We know more about Stoic grammar and grammar-based logic. For our purposes the basic linguistic entity is that of the sentence or protasis. A sentence is made up of two linguistic expressions connected by a certain propositional relation.
The two expressions are called subject and predicate depending on their position in the propositional relation. Only noun phrases can be subjects, but adjectival and verbal constructions can be predicates. These can become subjects after a certain syntactic transformation. The most primitive concept of 'verb' appears to have been that of predicate4. A 'term' oros is what can be a subject. However there is also a sense in which a verb is a term to which is added a temporal aspect. The Stoic concept of kategorema seems to fit more exactly the concept of predicate. A predicate can be considered in itself, as unsaturated by any concrete subject. Footnote 4: in some Eastern languages verbs and adjectives seem to be less differentiated than in Western languages. In Plato's _Sophist_ there is discussion concerning 'sentences', 'nouns' and 'verbs', sentences being combinations of nouns and verbs, and all sentences must have a 'subject'. Plato gives the example 'Theaetetus, with whom I am now speaking, is flying'. This illustrates how Plato's concept of 'noun' corresponds to our 'noun phrase' and very likely to the Aristotelic 'oros'. The formal 'grammar' required by the Topics is rather sophisticated, and if we wish to take a first-order approach then remarkably we are led in the direction of modern sententialist formalisations of natural language such as that of Richard Martin. From now on we work in the language of classical first-order logic with equality. The variables of our logic range over a class of linguistic expressions which are to be taken as being constructed from ideal linguistic tokens. This class is to include nominal, adjectival, adverbial and verbal constructions5 but not for instance isolated articles, prepositions, etc. We assume that there are only a finite number of possible linguistic expressions and that each one of these is represented by a distinct constant Footnote 5: Aristotle probably would accept that these are the class of expressions which mean something. (Ax1) \[x=c_{1}\lor x=c_{2}\lor...\lor x=c_{n}\] Since we will be dealing with the natural languages ancient Greek and modern English we will not use constants but corner brackets enclosing expressions in these languages. For instance \(\ulcorner Socrates\urcorner\) and \(\ulcorner the\,man\,runs\urcorner\). Our first-order language will contain a finite collection of grammatical predicates \(G_{1},...,G_{g}\) which express purely syntactic (and morphological) properties of linguistic expressions. Instead of using a number we will often use a mnemonic tag. For instance \(G_{sen}(x)\) for the grammatical predicate that expresses that \(x\) is a sentence and \(G_{term}(x)\) the one that expresses that \(x\) is a term. \(G_{at}(x)\) expresses that \(x\) is an atomic term and \(G_{inc}(x,y)\) that the expression \(x\) occurs within \(y\). The grammatical predicate \(G_{rest}(x,y,z)\) will be important further ahead. It says that the noun-phrase \(z\) is the result of modifying the noun-phrase \(x\) by the adjectival form of the noun \(y\). For instance we have \(G_{rest}(\ulcorner animal\urcorner,\ulcorner rationality\urcorner,\ulcorner rational\,animal\urcorner)\). Aristotle makes frequent use of the adverbialisation and adjectivisation functions \(G_{adv}(x,y)\) and \(G_{adj}(x,y)\) in various topics. It appears that Aristotle conceived the verb as being a temporalisation of a term. Besides purely syntactic predicates we have semantic predicates (which overlap somewhat with the logical, ontological and epistemic domain).
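Before turning to the semantic layer, here is a minimal sketch of how the grammatical layer just described might be realised concretely over a toy finite vocabulary of expressions, as Ax1 requires. The particular expressions, the word-level test used for \(G_{inc}\), and the lookup table for \(G_{rest}\) are assumptions of this sketch only, not part of the formal system itself.

```python
# Toy encoding of a finite expression vocabulary and a few grammatical predicates.
# The choice of expressions and the way G_inc / G_rest are decided are purely
# illustrative assumptions.
EXPRESSIONS = ["Socrates", "man", "animal", "rationality",
               "rational animal", "the man runs"]

TERMS = {"Socrates", "man", "animal", "rationality", "rational animal"}
SENTENCES = {"the man runs"}

def G_term(x: str) -> bool:      # x is a term
    return x in TERMS

def G_sen(x: str) -> bool:       # x is a sentence
    return x in SENTENCES

def G_inc(x: str, y: str) -> bool:
    """x occurs within y (a naive word-level stand-in for syntactic occurrence)."""
    return x != y and x in y.split()

def G_rest(x: str, y: str, z: str) -> bool:
    """z is the noun-phrase x modified by the adjectival form of the noun y."""
    table = {("animal", "rationality"): "rational animal"}
    return table.get((x, y)) == z

assert G_rest("animal", "rationality", "rational animal")
assert G_inc("man", "the man runs") and not G_inc("man", "rational animal")
```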
We have a finite collection of semantic relations \(S_{1},...,S_{s}\) with distinguished binary relation \(O(x,y)\) called _contrariety_, \(x\multimap y\) the _semantic dependence_ of \(x\) on \(y\), \(\otimes(x)\) representing terms admitting more or less6, and a quaternary predicate representing _analogy_ (or homology or similitude), as \(x\) is to \(y\) so \(z\) is to \(w\), written \(\frac{x}{y}=\frac{z}{w}\). One of the most important semantic predicates is \(G_{pol}(x)\) which expresses that \(x\) is polysemic or ambiguous. We also have a predicate for _metaphoric_ predication \(\approx\). Footnote 6: to which we add a predicate representing ‘having a medium’ \(\bot x\) and \(y\) being a medium between \(x\) and \(z\), \(M(x,y,z)\). Also \(x\gtrdot y\) represents ‘better known’. The famous ten categories are represented by unary predicates \(C_{1},...,C_{10}\). It seems reasonable that only terms can fall under one of the categories. However other grammatical categories can easily be brought within the division of the categories as well. The obvious properties of the ten categories are (Ax2) \[\neg(C_{i}(x)\&C_{j}(x))\quad i\neq j\] (Ax3) \[C_{1}(x)\lor C_{2}(x)\lor...\lor C_{10}(x)\] \(C_{1}\) represents the category of _ousia_. For Aristotle terms in the category of _ousia_ do not semantically depend on terms in other categories. \[C_{1}(x)\rightarrow(\neg C_{1}(y)\rightarrow\neg x\multimap y)\] Cf. 128a25. ## 3 The Five Predicabilia As expected, predication will correspond to a binary predicate. The five predicabilia will correspond to different binary predicates. Aristotle is not consistent in his definition of 'accident'. We must distinguish between separable and inseparable accident. In fact the first distinction we make between different forms of predication is modal. We have necessary predication \(x\in y\) and contingent predication \(x\ominus y\), which is to be understood as: \(y\) can both hold and not hold of \(x\) (bicontingent in the sense of [14]). (Ax4) \[x\in y\leftrightarrow\neg x\ominus y\] Being a genus and being a species are correlatives. We denote this relation by \(x\prec y\), which can be read \(x\) is a species of genus \(y\) or \(y\) is a genus of species \(x\). The most basic properties which are both explicit and implicit throughout the Topics are (Ax5) \[x\prec y\&y\prec z\to x\prec z\] (Ax6) \[x\prec y\&x\prec z\rightarrow(y\prec z\lor z\prec y)\] (Ax7) \[x\not\prec x\] Because of Axiom 1 there can be no infinite chains \(...\prec x_{1}\prec x_{2}\prec x_{3}\prec...\) extending in either direction, and so we have the ascending and descending chain conditions. It follows that if \(x\prec y\) for some \(y\) then there is a \(z\) such that \(x\prec z\) and there is no \(w\) such that \(x\prec w\prec z\). We then write \(x\triangleleft z\). Formally \[x\triangleleft z\leftrightarrow x\prec z\,\&\,\neg\exists y.x\prec y\prec z\] In this situation \(x\) is called the immediate species of \(z\) and \(z\) the proximate genus of \(x\). Property is denoted by \(xIy\) (\(I\) is for Greek _idion_) and read: \(y\) is the property of \(x\). Genealogically definition and property seem to share a common origin. Inseparable accident is denoted by \(x\Sigma y\): \(y\) is the inseparable accident of \(x\). We must not confuse separable accident \(x\ominus y\) with inseparable accident \(x\Sigma y\). An attentive reading of Chapter VI reveals that 'difference' should correspond to two distinct binary predicates \(\Delta_{1},\Delta_{2}\).
Indeed Aristotle speaks of the (immutable, essential) difference of a given species, of coordinate differences and finally of the various differences that belong to a given genus. Let the first relation be denoted by \(x\Delta_{1}z\) and the third by \(y\Delta_{2}z\). We can define coordinate differences as follows \[z_{1}\Delta_{3}z_{2}\leftrightarrow\exists y.y\Delta_{2}z_{1}\&y\Delta_{2}z_{2}\] The following are clear in the Topics (Ax8) \[\neg(x\bigcirc_{1}y\&x\bigcirc_{2}y)\quad\bigcirc_{1},\bigcirc_{2}\in\{\prec,\Delta_{1},I,\Sigma\},\bigcirc_{1}\neq\bigcirc_{2}\] (Ax9) \[x\bigcirc y\to x\in y\quad\bigcirc\in\{\prec,\Delta_{1},I,\Sigma\}\] There is a problem that Aristotle seems in certain passages to make the five predicabilia exhaustive for all forms of predication. But this cannot be the case (unless it is understood in the sense of atomic building blocks) for it does not include definition. We will discuss \(\Delta_{2}\) further ahead. For now we note that \(y\Delta_{2}z\to z\multimap y\,\&\,\neg y\multimap z\). Now we are equipped to define definition according to the most mature view in the Topics: \[xDy\leftrightarrow\exists a,b.G_{rest}(a,b,y)\&x\triangleleft a\&x\Delta_{1}b\&a\Delta_{2}b\] How do we define _infima species_ and individuals? Note that for Aristotle \(\ulcorner Socrates\urcorner\prec\ulcorner man\urcorner\). We could define individual \[\iota(x)\leftrightarrow\neg\exists y.y\prec x\] and perhaps add a category condition. Then we would attempt to define infima species as \[\wedge(x)\leftrightarrow\forall y.y\prec x\rightarrow\iota(y)\] but this leaves out the important condition that individuals in an infima species cannot differ from each other specifically. That is, \(x\) cannot have any differences: \[\wedge(x)\leftrightarrow\neg\exists z.x\Delta_{2}z\&\forall y.y\prec x\rightarrow\iota(y)\] As we shall see an important property of genera is that if they have a difference then they have more than one \[x\Delta_{2}y\rightarrow\exists z.z\neq y\&x\Delta_{2}z\] which will also imply that a genus with an immediate species has more than one \[x\triangleleft y\rightarrow\exists z.z\neq x\&z\triangleleft y\] Finally we assume that we have distinct Platonic-Parmenidean constants \(\ulcorner being\urcorner\) and \(\ulcorner one\urcorner\) such that (Ax10) \[\forall x.x\in\ulcorner being\urcorner\&x\in\ulcorner one\urcorner\] ## 4 Extensions and Finite Set Theory By Axiom 1 we can easily consider a theory of finite sets or lists of linguistic expressions. For \(x\) we consider the finite set of all constants \(c_{i}\) such that \(c_{i}\in x\) and we denote this set by \(\{z:z\in x\}\) and we call it the _extension_ of \(x\). We can define extensional containment \(x\subset_{e}y\) and equality \(x=_{e}y\) in the expected way. Aristotle seems in various passages to hint at this modern concept of extension with his expression _pleon legesthai_, being said of more. ### Topics III, 1 If \(x\prec\ulcorner good\urcorner\) and \(G_{rest}(y,x,z)\) then \(x>z\) (more eligible). Desirable for something.
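Before turning to the topics of Book IV, the genus-species machinery above can be illustrated over a toy finite vocabulary. The sketch below is only an illustration under assumptions of this note (the particular Porphyrian-style tree and the choice of terms): it computes the transitive relation \(\prec\), and from it the immediate-species relation \(\triangleleft\) and individuals \(\iota(x)\), checking Ax5 and Ax7 along the way.

```python
# Toy finite genus-species hierarchy; the tree itself is an assumption of this
# sketch. `prec` (x ≺ y) is the transitive closure of the direct links.
TERMS = ["Socrates", "Callias", "man", "horse", "animal", "body", "substance"]

direct = {("Socrates", "man"), ("Callias", "man"), ("man", "animal"),
          ("horse", "animal"), ("animal", "body"), ("body", "substance")}

def transitive_closure(rel):
    rel = set(rel)
    while True:
        new = {(a, d) for (a, b) in rel for (c, d) in rel if b == c} - rel
        if not new:
            return rel
        rel |= new

prec = transitive_closure(direct)

# Ax7 (irreflexivity) and Ax5 (transitivity) hold for this toy relation
assert all((x, x) not in prec for x in TERMS)
assert all((a, d) in prec for (a, b) in prec for (c, d) in prec if b == c)

def immediate(x, z):        # x ◁ z : x ≺ z with nothing strictly between them
    return (x, z) in prec and not any((x, y) in prec and (y, z) in prec for y in TERMS)

def individual(x):          # ι(x) : nothing falls under x
    return not any((y, x) in prec for y in TERMS)

assert immediate("man", "animal")
assert individual("Socrates") and not individual("man")
```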
### Topics IV, 1 \[x\not\prec z\&x\prec y\to y\not\prec z \tag{1}\] \[(x\ominus y\lor x\Sigma y)\to x\not\prec y \tag{2}\] \[x\prec y\to C_{i}(x)\leftrightarrow C_{i}(y),\quad i=1,...,10 \tag{3}\] \[x\prec y\to y\notin x \tag{4}\] \[x\prec y\&z\in x\to z\in y \tag{5}\] \[x\prec z\rightarrow\exists y.y\triangleleft z\&x\preceq y\quad\text{(for }\neg\iota(x)\text{)} \tag{6}\] \[x\prec y\to x\subset_{e}y \tag{7}\] \[x\prec z\&x\prec y\&\wedge(y)\rightarrow(\forall w.w\prec y\to w\prec z) \tag{8}\] Vocabulary used in this chapter: _pleasure, good, white, snow, soul, self-moving, swan, science, beautiful, double, multiple, locomotion, alteration, movement, origin, principle, opinionable, being, one, line, indivisible_. ### Topics IV, 2 \[(\exists z.x\prec z\&z\not\prec y\&y\not\prec x)\to x\not\prec y \tag{9}\] \[(\exists z.y\prec z\&x\not\prec z)\to x\not\prec y \tag{10}\] \[x\prec y\to y\not\prec x \tag{11}\] \[x\prec z\&y\triangleleft z\&(\forall w.(w\neq y\&w\triangleleft z)\to x\not\prec w)\to x\triangleleft y \tag{12}\] \[(x\not\prec z\&y\prec z)\vee(w\in x\&x\notin z)\to x\not\prec y \tag{13}\] \[yDv\&x\prec y\to x\in v\&(w\in x\to w\in v) \tag{14}\] \[x\Delta_{1}y\to x\not\prec y\&y\not\prec x \tag{15}\] \[x\prec y\to x\subsetneq_{e}y \tag{16}\] \[x\Delta_{1}y\to x\subset_{e}y \tag{17}\] \[x\Delta_{1}y\to(x\prec z\to z\notin y) \tag{18}\] \[(\forall z.x\Delta_{2}z\to y\notin z)\to y\not\prec x \tag{19}\] _pleon te gar to genos tes diaphoras dei legesthai_ 123a7 \[x\Delta_{2}z\to z\subsetneq_{e}x \tag{20}\] \[x\prec y\to x\multimap y\&\neg x\ominus y \tag{21}\] **Topics IV, 3** \[x\in y\&O(y,y^{\prime})\to x\not\prec y^{\prime} \tag{22}\] \[(x\prec y\to x\notin z)\&w\in z\to x\prec y \tag{23}\] \(\ulcorner soul\urcorner\in\ulcorner life\urcorner\). Also for all \(N\prec\ulcorner number\urcorner\) we have \(N\notin\ulcorner life\urcorner\). Hence we cannot have \(\ulcorner soul\urcorner\prec\ulcorner number\urcorner\). \[x\prec y\to\exists z.z\neq x\&z\prec y \tag{24}\] \[x\prec y\to\neg x\approx y \tag{25}\] \[\neg\exists x^{\prime}.O(x,x^{\prime})\&y\prec x\to(O(y,y^{\prime})\to y^{\prime}\prec x) \tag{26}\] \[x\prec y\&O(x,x^{\prime})\&O(y,y^{\prime})\to x^{\prime}\prec y^{\prime} \tag{27}\] \[(\neg\exists y.x\prec y)\&O(x,x^{\prime})\to\neg\exists y.x^{\prime}\prec y \tag{28}\] \[x\prec y\to(\bot(x)\leftrightarrow\bot(y)) \tag{29}\] \[x\prec y\&x^{\prime}\prec y\&M(x,z,x^{\prime})\to z\prec y \tag{30}\] \[x\prec y\&G_{adv}(x,x^{\prime})\&G_{adv}(y,y^{\prime})\to x^{\prime}\in y^{\prime} \tag{31}\] (31') \[x\prec y\&G_{adj}(x,x^{\prime})\&G_{adj}(y,y^{\prime})\to x^{\prime}\in y^{\prime}\] **Topics IV, 4** \[\frac{x}{y}=\frac{x^{\prime}}{y^{\prime}}\&y\prec y^{\prime}\to x\prec x^{\prime} \tag{32}\] \[x\prec y\&G_{adv}(x,x^{\prime})\&G_{adv}(y,y^{\prime})\to x^{\prime}\prec y^{\prime} \tag{33}\] **Topics IV, 5** Here we introduce the semantic relation \(S_{loc}\) expressing inherence of one term in another. Also \(x\epsilon y\) expressing that \(x\) is a part of \(y\). Also \(S_{mod}(x,y,z)\) expresses that \(x\) is \(z\) only according to aspect \(y\). For example, animals are visible only according to their bodies. \[\ulcorner heksin\urcorner\not\prec\ulcorner energeia\urcorner\not\prec\ulcorner heksin\urcorner\] \[\ulcorner dunamis\urcorner\not\prec\ulcorner energeia\urcorner\not\prec\ulcorner dunamis\urcorner\] Aristotle appears to state that \(a\prec b\) implies that \(a\) is the cause of \(b\).
\[S_{loc}(x,y)\&y\prec z\to S_{loc}(x,z)\] \[x\prec y\to x\in z\rightarrow\neg G_{rel}(y,w,z)\] \[x\epsilon y\to y\not\prec x\] What is censurable is not dunamis. \[x\prec y\rightarrow\neg x\Delta_{1}y\&x\Delta_{1}y\to x\not\prec y\] \[S_{mod}(x,y,z)\rightarrow\neg x\prec z\] **Topics IV, 6** \(xS_{sup}y\) means \(x\) is superior to \(y\). \(x\biguplus y\) means \(x\) is inherent in \(y\). \[\wedge(y)\to x\not\prec y\] \[\forall x.x\in y\to z\not\prec y\&\neg zDy\] \[x\biguplus y\to y\not\prec x\] \[xS_{sup}x^{\prime}\&yS_{sup}y^{\prime}\&O(x,x^{\prime})\&O(y,y^{\prime})\rightarrow(x\prec y\leftrightarrow x^{\prime}\prec y^{\prime})\] \[x\prec y\rightarrow(\otimes x\leftrightarrow\otimes y)\] \[x\prec y\&x\biguplus z\to y\biguplus z\] **Topics V, 2** \[xIy\rightarrow(\forall w.G_{inc}(w,y)\to w\gtrdot x)\] \[xIy\rightarrow\neg G_{pol}(y)\&\forall z.G_{inc}(z,y)\rightarrow\neg G_{pol}(z)\] \[xIy\rightarrow\neg G_{pol}(x)\] \[xIy\rightarrow\forall z,w.G_{inc}(z,y)\&G_{inc}(w,y)\to z=w\] \[xIy\to G_{1}(w,v,y)\rightarrow\exists u.u\notin v\] \[xIy\to G_{and}(u,v,y)\rightarrow\neg xIu\&\neg xIv\] **Topics V, 3** \[G_{inc}(x,y)\rightarrow\neg xIy\] \[G_{inc}(w,y)\&w\prec x\rightarrow\neg wIy\] \[G_{inc}(w,y)\&(O(w,x)\lor w\multimap x)\rightarrow\neg xIy\] \[x\ominus y\rightarrow\neg xIy\] \[G_{inc}(w,y)\&S_{sens}(w)\rightarrow\neg xIy\] \[xDy\rightarrow\neg xIy\] **Topics V, 5** \[S_{ess}(x,y)\rightarrow\neg xIy\] \[S_{syn}(x,y)\rightarrow\neg xIy\] \[S_{hom}(x)\&y\epsilon x\rightarrow(xIz\leftrightarrow yIz)\] **Topics V, 6** \[O(x,x^{\prime})\&O(y,y^{\prime})\&xIy\to x^{\prime}Iy^{\prime}\] **Topics V, 7** \[G_{adv}(x,x^{\prime})\&G_{adv}(y,y^{\prime})\&xIy\to x^{\prime}Iy^{\prime}\] **Topics V, 8** \[S_{more}(a,x,x^{\prime})\&S_{more}(b,y,y^{\prime})\&\neg x^{\prime}Iy^{\prime}\rightarrow\neg xIy\] **Topics VI, 1** \[w\in y\&w\notin x\rightarrow\neg xDy\] \[(G_{res}(x,y,z)\to w\nless x)\rightarrow\neg wDz\] \[x\neq_{e}y\rightarrow\neg xDy\] \[\neg S_{ess}(x,y)\rightarrow\neg xDy\] **Topics VI, 3** \[\forall v.w\in v\&G_{inc}(w,x)\rightarrow\neg yDx\] \[v\prec x\&G_{inc}(v,y)\rightarrow\neg xDy\] \[G_{inc}(v,w)\&xDv\rightarrow\neg xDw\] \[xDy\rightarrow\forall z,w.G_{inc}(z,y)\&G_{inc}(w,y)\to z=w\] **Topics VI, 4** \[xDy\&G_{inc}(w,y)\to w>x\] \[xDy\&G_{inc}(z,y)\to x\neq z\] **Topics VI, 5** \[xDy\rightarrow\exists w,v.G_{res}(w,v,y)\&x\prec w\] \[xDy\&G_{res}(w,v,y)\&x\prec w\to x\triangleleft w\] **Topics VI, 6** A term has a difference iff the difference term belongs to some genus \[\exists y.x\Delta_{1}y\leftrightarrow\exists z.y\Delta_{2}z\] We write \(\Delta(x)\) if \(x\) satisfies any one of these equivalent conditions. A difference always has a coordinated difference: \[\Delta(x)\rightarrow\exists y.\Delta_{3}(x,y)\] Coordination is transitive, symmetric and \(\neg\Delta_{3}(x,x)\). If a difference belongs to a genus then so must its coordinates \[y\Delta_{2}z\&\Delta_{3}(z,x)\to y\Delta_{2}x\] \[G_{res}(x,y,z)\&x\Delta_{2}y\&v\ominus y\rightarrow\neg vDz\] \[G_{res}(x,y,z)\&x\Delta_{2}y\&(x\in y\lor x\in v)\rightarrow\neg vDz\] \[x\Delta_{2}y\to y\varsubsetneq_{e}x\] Here is Brunschwig's view on a difficult topic in this section [2]: _(...) according to him, the Platonists, partisans of the existence of the Ideas.
The topos loses its effectiveness if one treats the genus as it must be treated in their eyes, namely precisely not as an individual thing, numerically one, but as a universal: every particular length is indeed either without breadth or having breadth; but length as a genus is a universal, which can without internal contradiction bring together particular lengths without breadth and others having breadth (b28-29). This criticism is, as is well known, fundamental to Aristotle's opposition to Plato._ If a difference is given by negation then there are only two opposite coordinates. \[x\Delta_{2}y\&z\Delta_{2}y\to x\prec z\lor z\prec x\] \[x\Delta_{2}y\&z\Delta_{1}y\to z\prec x\] But two subordinate genera of the same genus can have the same difference. Biped for flying and terrestrial animals, the common genus being animal. Essence is invariant under spatial and temporal translation. Aristotle anticipates modern physics. Difference cannot allow more or less: \[x\Delta_{2}y\rightarrow\neg\otimes y\] The difference of a relative must be relative: \(x\Delta_{1}y\&S_{rel}(x)\to S_{rel}(y)\) ### Topics VI, 7 Confusing capacity with wanting to do something. \[xDy\rightarrow(\otimes x\leftrightarrow\otimes y)\] There is also the notion of two things increasing at the same time \(S_{sim\otimes}(x,y)\) \(S_{more}(fire,flame,light)\) flame is more fire than light but \(S_{more}(finestparticles,light,flame)\). Aristotle appears to reject De Morgan's law \(\neg(A\lor B)\leftrightarrow\neg A\&\neg B\) in 146a. Beauty = Pleasing to Sight or to the Ear. But what he means is perhaps \(xDy\&xDz\). ### Topics VI, 8 The predicate \(G_{rel}(x)\) means that \(x\) is a relative. \(G_{rel2}(x,y,z)\) means that an expression is a relative saturated by an expression. **Topics VI, 11** Interesting topics about complex terms and definitions which seem to reflect in places a much earlier phase of the Topics in which definition and property were not yet fully differentiated. **Topics VI, 12** Error of too vague saturation involving \(G_{rel2}(x,y,z)\). For instance, saying that medicine is the science of everything that is. **Topics VI, 13** We need additional semantic primitives \(G_{\&}(x,y)\), \(S_{prod}(x,y,z)\). At 150a Aristotle forsakes the requirement that a definition be comprised of genus and difference. Aristotle admits that what corresponds to modern 'negation' is anti-monotonic: if \(A\) implies \(B\) then \(\neg B\) implies \(\neg A\). The semantic relation of opposition functions in a different way. It is, as we have seen, monotonic for \(\prec\). If we denote the unique \(x^{\prime}\) such that \(O(x,x^{\prime})\) by \(x^{\circ}\), then \(x\prec y\) implies \(x^{\circ}\prec y^{\circ}\). Classical negation intertwines with classical conjunction via the De Morgan laws \(\neg(A\&B)\leftrightarrow\neg A\vee\neg B\). But Aristotle states clearly that for the operation of 'this and that' (which we denote by \(x+y\)) we have that \((x+y)^{\circ}=x^{\circ}+y^{\circ}\). The outstanding problem of this section involves how finite sets (or pairs) are formed and how predicates are (mereologically) applied to them, more specifically predicates of the form \(A+B\).
In the example of justice Aristotle seems to implicitly accept two definitions: \[\{x_{1},x_{2}\}\in y\leftrightarrow x_{1}\in y\lor x_{2}\in y\] \[x\in(y+z)\leftrightarrow x\in y\&x\in z\quad\mbox{for $x$ not a set}\] Assume as in Aristotle's example that \(x_{1}\in y,x_{1}\in z^{\circ},x_{2}\in y^{\circ},x_{2}\in z\). Then using the second definition we get that \[\{x_{1},x_{2}\}\in(y+z)\leftrightarrow\{x_{1},x_{2}\}\in y\&\{x_{1},x_{2}\}\in z\] which by the first definition is equivalent to \[(x_{1}\in y\lor x_{2}\in y)\&(x_{1}\in z\lor x_{2}\in z)\] which obtains. And in the same way we can show that \[\{x_{1},x_{2}\}\in(y+z)^{\circ},\quad\mbox{since }(y+z)^{\circ}=y^{\circ}+z^{\circ}\] Thus the pair of men in Aristotle's example would be both just and unjust; a small computational sketch of this derivation is given below. Ancient mereology apparently distinguished different degrees of 'holism', from a merely abstract 'set', in which there is no difference between a set and the sum of its parts, to products in which the elements are changed and incorporated or integrated into a living whole. **Topics VI, 14** We need primitives \(S_{comp}(x,y,z)\), \(S_{comp1}(x,y,z,w)\).
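The Topics VI, 13 derivation about the pair of men can be checked mechanically. The following is a minimal sketch, assuming the two membership definitions above; the relation and predicate names (`member`, `member_sum`, `y`, `z` and their opposites `y_o`, `z_o`) are illustrative choices of ours, not part of the source.

```python
# A toy check of the Topics VI, 13 derivation about pairs and conjunctive predicates.
# Atoms x1, x2 and predicates y, z (with opposites y_o, z_o) are illustrative names only.

def member(term, pred, facts):
    """x ∈ y. For atoms the fact base is consulted; for a pair {x1, x2} the first
    definition applies: the pair falls under y iff at least one element does."""
    if isinstance(term, frozenset):
        return any(member(t, pred, facts) for t in term)
    return (term, pred) in facts

def member_sum(term, preds, facts):
    """x ∈ (y + z): the conjunctive reading of 'this and that' from the notes."""
    return all(member(term, p, facts) for p in preds)

# Aristotle's example: x1 ∈ y, x1 ∈ z°, x2 ∈ y°, x2 ∈ z.
facts = {("x1", "y"), ("x1", "z_o"), ("x2", "y_o"), ("x2", "z")}
pair = frozenset({"x1", "x2"})

# The pair falls under y + z ...
assert member_sum(pair, ["y", "z"], facts)
# ... and, since (y + z)° = y° + z°, also under the opposite predicate.
assert member_sum(pair, ["y_o", "z_o"], facts)
print("on these definitions the pair is both just and unjust")
```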
2305.07381
Novel bribery mining attacks in the bitcoin system and the bribery miner's dilemma
Mining attacks allow adversaries to obtain a disproportionate share of the mining reward by deviating from the honest mining strategy in the Bitcoin system. Among them, the most well-known are selfish mining (SM), block withholding (BWH), fork after withholding (FAW) and bribery mining. In this paper, we propose two novel mining attacks: bribery semi-selfish mining (BSSM) and bribery stubborn mining (BSM). Both of them can increase the relative extra reward of the adversary and will make the target bribery miners suffer from the bribery miner's dilemma. All targets earn less under the Nash equilibrium. For each target, the locally optimal strategy is to accept the bribes; however, it then suffers losses compared with the case in which all targets deny the bribes. Furthermore, for all targets together, the globally optimal strategy is to deny the bribes. Quantitative analysis and simulations verify our theoretical analysis. We propose practical measures to mitigate more advanced mining attack strategies based on bribery mining, and provide new ideas for addressing bribery mining attacks in the future. However, how to completely and effectively prevent these attacks still requires further research.
Junjie Hu, Chunxiang Xu, Zhe Jiang, Jiwu Cao
2023-05-12T11:17:57Z
http://arxiv.org/abs/2305.07381v1
# Novel Bribery Mining Attacks in the Bitcoin System ###### Abstract Mining attacks allow adversaries to obtain a disproportionate share of the mining reward by deviating from the honest mining strategy in the Bitcoin system. Among them, the most well-known are selfish mining (\(SM\)), block withholding (\(BWH\)), fork after withholding (\(FAW\)) and bribery mining. In this paper, we propose two novel mining attacks: bribery semi-selfish mining (\(BSSM\)) and bribery stubborn mining (\(BSM\)). Both of them can increase the relative extra reward of the adversary and will make the target bribery miners suffer from the "bribery miner dilemma". All targets earn less under the Nash equilibrium. For each target, their local optimal strategy is to accept the bribes. However, they will suffer losses, comparing with denying the bribes. Furthermore, for all targets, their global optimal strategy is to deny the bribes. Quantitative analysis and simulation have been verified our theoretical analysis. We propose practical measures to mitigate more advanced mining attack strategies based on bribery mining, and provide new ideas for addressing bribery mining attacks in the future. However, how to completely and effectively prevent these attacks is still needed on further research. Bitcoin, blockchain, mining attacks, selfish mining, block withholding, fork after withholding, bribery mining. 2019 ac transferred or paid for by real owners. In the Bitcoin system, participants (miners) can get rewards by adding transaction records to the ledger (blockchain), which requires miners to solve cryptographic puzzles as a proof of work (PoW) [36]. The first miner to solve the puzzle and generate a valid block can obtain block rewards (6.25 Bitcoins in 2023). The process of miners solving cryptographic puzzles and generating blocks is called "mining process". When two or more blocks are generated and published simultaneously in the system (due to network communication delay), forking occurs. To maintain consistency, one of the brunches will be selected by the system and eventually become the main chain. Once miners on other branches receive the longest chain, they will shift their attention and mining power to the main chain. In the Bitcoin system, the difficulty of solving cryptographic puzzle is adjusted per two weeks to maintain the average generation time of blocks as a constant (10 minutes). However, due to the current mining power's hash rate exceeding \(3.3\times 10^{20}\) Hash/s [35], it probably takes a single miner several months or even years to solve a password puzzle [2]. Therefore, to attain stable income, miners tend to unit to form a miner pool. Most mining pools have a pool manager responsible for assigning work and rewards. When a mining pool finds a block, the miners in the mining pool will share rewards in terms of their contributions (the number of shares submitted). Since cryptocurrencies have monetary value, they naturally become a valuable target for attack. Although the design of Bitcoin ensures security, previous studies have shown that adversaries can increase their rewards when deviating from honest mining strategies, such as selfish mining [8], block with holding (\(BWH\)) [20], fork after withholding (\(FAW\)) [24], and bribery attacks [26]. In selfish mining attacks, adversaries intentionally hide discovered blocks to form a private chain and continue to mine on the private chain. 
When a block is generated on the public chain, adversaries selectively publish blocks on the private chain, and get disproportionate rewards by wasting the mining power of honest miners. Semi-selfish mining (\(SSM\)) [18] is a mining strategy constructed on the basis of \(SM\) which divides mining power into two parts. The consumptions of two parts of mining power are similar to selfish and honest pools, respectively. Most of mining power is applied to mining on the private chain while the other small portion is utilized to mine on public chain. The design of \(SSM\) can significantly reduce the system forking rate while only slightly reducing the profit of selfish miners. Briefly, \(SSM\) can balance benefit and forking rate. In the \(BWH\) attack, the adversaries divides their mining power into innocent pool and infiltration pool. When infiltration pool finds a valid block (full proof of work, FPoW), he withholds it and continues to submit other shares (partial proof of work, PPoW) to obtain the share reward. [20] has shown that \(BWH\) attacks are more profitable than honest mining (\(HM\)) when adversaries segment their mining power appropriately. However, when two pools use \(BWH\) attacks against each other (both pools have lower reward than \(HM\)), they will encounter the "miner's dilemma". The design principle of \(FAW\) attack is similar to \(BWH\) attack. More specifically, the only difference is that in \(BWH\) attack, the adversary will discard the discovered FPoW, while in \(FAW\), the attacker will reserve this FPoW. When other miners (not in the victim pool) find a valid block, the adversary will release and submit the previously reserved FPoW, causing a fork (similar to \(SM\)) to win in the forking competition and obtain share reward. Compared with \(BWH\), \(FAW\) can get more reward while avoiding the miner's dilemma. In bribery mining attack, once forking occurs, the adversary will try to win in the forking competition by bribing part of honest miners (called target bribery pool) to extend its branch and paying the bribe to the target bribery pool to obtain higher profits. In this paper, we propose two novel strategies of mining attack to increase the reward of the adversary. Moreover, We model multi-target bribery pools and prove target pools would suffer "the bribery miner's dilemma" in \(BSSM\) and \(BSM\). Finally, we put forward practical measures to mitigate the high-level attacks based on bribery mining. However, how to prevent such attacks completely remains an unresolved issue. We summarize our contributions as follows: * Adversaries can get higher reward through bribery attacks in semi-selfish mining attack and stubborn mining attack. We discussed the situation where adversaries launch bribery attacks. In a forking competition situation, adversaries can bribe other honest miners to extend the attacker's branch, increasing the probability of successful forking competition and hence obtaining higher profits. * We further proposed bribery semi-selfish mining (\(BSSM\)) and bribery stubborn mining (\(BSM\)). \(BSSM\) combines bribery mining and \(SSM\). Simulation experiment results indicate that \(BSSM\) can result in 6% relative extra reward for adversaries in comparison with \(SSM\) with the same chain growth rate. * The target bribery pools will suffer the "briery miner's dilemma" in \(BSSM\) and \(BSM\) under the multi-target bribery pool model. 
On the one hand, from the perspective of each target bribery pool, his optimal strategy is to accept bribes and extend attacker's branch. However, he will suffer losses if all target bribery pools reject bribes. On the other hand, from the standpoint of target bribery pools, their optimal strategy is to reject bribes. * We proposed practical countermeasures to mitigate higher-level bribery attacks, and provided new ideas for mitigating bribery mining in the future. ## 2 Preliminaries ### Bitcoin Background **Mining Process.** The issuance process of Bitcoin is implemented by the Bitcoin system generating a certain number of Bitcoins as rewards for miners, in which miners play the role of currency issuers. The process of generating new blocks is also known as mining. All Bitcoin transactions need to be packaged into blocks and recorded in the ledger. The miner who first finds the nonce that meets the difficulty requirements can get the coinbase reward. The mining process motivates miners to maintain the security of blockchain. The total number of bitcoins was initially set to 21 million. Each miner who publishes a block can get 50 Bitcoins as a coinbase reward initially, which halves per 4 years. It is expected that the coinbase reward will no longer be able to be further subdivided until 2104, which results in completing the issuance of all Bitcoins. **Forks.** When multiple miners broadcast the blocks discovered by them simultaneously, blockchain forking occurs, since other miners will consider the first received valid block as the header [33]. One branch will compete successfully thus becoming the main chain eventually. Miners who publish blocks on the main chain will obtain corresponding coinbase rewards, while others will not get any rewards. Note that forks may also occur intentionally, such as \(SM\) attack [8] or \(FAW\) attack [24]. **Mining Pool.** With the increasing investment of mining power in Bitcoin, the probability of a miner discovering a valis block becomes extremely small. Nowadays, miners tend to participate in an organization called mining pool. In general, a mining pool consists of a pool manager and multiple peer miners. All participants collaborate to solve the same cryptographic puzzle. Once the mining pool generates a valid block successfully, participants will share rewards according to the distribution protocol, such as Pay Per Share (PPS), Pay Per Last N Shares (PPLNS), Pay Proportionally (PROP) [3] and so on. In theory, the rewards of miners are proportional to their mining power directly. Therefore, miners who participate to the mining pool can reduce the difference in profits significantly. Currently, most of the blocks in Bitcoin are generated by mining pools, such as AntPool [4], Poolin [5], and F2Pool [6]. ### Related Work **Selfish Mining.** Attackers can generate a fork through selfish mining (\(SM\)) intentionally to obtain additional rewards [7, 8]. Specifically, in \(SM\) attack, adversaries hide discovered blocks intentionally, forming a private chain and continuing to extend it. Once a new valid block is generated in public chain, attackers selectively publish blocks on the private chain, and obtain disproportionate rewards by wasting the mining power of honest miners. It is expected that the motivation to mine will rely more on transaction fees rather than block rewards due to the continuous decline in coinbase rewards. 
Once the transaction volume of Bitcoin decreases, these transaction fees will not be enough to compensate miners for their investment in computing resources. Consequently, some miners may stop mining temporarily, which will threaten the security of Bitcoin system. [9] introduces the incentive mechanism of Bitcoin when the total computing power of the system decrease. [16] expands the underlying model of \(SM\) attack, further optimizes the upper bound of optimal strategy rewards, and lowers the minimum threshold for obtaining extra returns from \(SM.\)[17] supplements the action space of \(SM,\) models as Markov Decision Process (MDP), and pioneers a new technology to solve the nonlinear objective function of MDP, resulting in a more powerful \(SM\) strategy. Under the same assumption, relevant studies conduct a series of discussions on the mining strategies of rational mining pools [10,11,12,13]. [14] provides some simulation results when involving multiple independent selfish mining pools or stubborn mining pools. [15] theoretically studies the equilibrium of multiple independent selfish mining pools. [37] focuses on the classic selfish mining attacks in the blockchain, explores the strategies to deal with the attacks from the perspective of game theory, and further depicts the equilibria state of the system under the competition of various strategies. However, due to the high forking rate caused by \(SM,\) these attacks are not practical. Once honest miners discover abnormal forking rate, they may exit the blockchain system. \(SM\) attack is no longer meaningful with the departure of honest miners. [18] proposes semi-selfish mining (\(SSM\)) attack, which can achieve a balance between revenue and forking rate. [19] proves that honest miners do not choose to advocate for \(SSM\) attack without been detected. \(Bwh\) Attacks can adopt \(Bwh\) attack to destroy rewards for the victim pool [20,21]. Attackers divide their mining power into innocent mining pool and infiltration mining pool. When infiltration pool finds a valid block (full proof of work, FPoW), he withholds it and continues to submit other shares (partial proof of work, PPoW) to obtain the share reward. The victim mining pool will never get rewards from the attacker's infiltration mining. Hence, the victim pool will suffer losses. Other miners, including innocent mining pool of adversary, will gain more rewards for the loss of the victim pool. [22] indicates that when attackers partition their mining power correctly, \(BWH\) attack is more profitable than \(HM.\) However, when multiple independent pools adopt \(BWH\) attack against each other (all pools have lower returns than \(HM\)), they will encounter the "miner's dilemma" [23]. \(FAW\) Attacks. \(FAW\) attack combines \(SM\) and \(BWH\) attacks [24]. In brief, \(BWH\) attackers will discard the discovered FPoW, while in \(FAW,\) the attackers will reserve the FPoW. When other miners (not in the victim pool) find a valid blocks, the adversary will release and submit the previously reserved FPoW, causing a fork (similar to \(SM\)) to win in the forking competition and obtain share reward. In other cases, \(FAW\) attack strategy is consistent with \(BWH.\)\(FAW\) can get more rewards and avoid miner's dilemma compared with \(BWH.\) Attackers may succeed in forking competition, thereby obtaining the share reward. 
When attacker's branch is never selected as the main chain, \(FAW\) will degenerate into \(BWH.\) Attackers with lower mining power will always fall into the miner's dilemma and lose profits when two attackers use \(FAW\) attacks against each other, which is independent of their network environment. Conversely, attackers with higher mining power may avoid the miner's dilemma and gain higher profits, which is related to their network environment. [25] combines mining power adjustment strategies with \(FAW\) attack (\(PAW\)), allowing attackers to adjust mining power dynamically between innocent mining and infiltration mining. Therefore, attackers can always increase their profits by allocating more mining power to more attractive mining strategies. **Bribery Attacks.** Bribery attacks can increase the probability of the attacker's branch being selected as the main chain in forking competition [26]. Bribery attacks can only help the attacker win in the forking competition rather than bringing any profit to the attacker. Attackers can adopt origina3l bribery attack to win in forking competition, without obtaining any extra reward, instead. Therefore, original bribery attacks are always considered to combine with other attacks, such as double spending attack [27]. Bribery attack can be launched in a less visible way [28]. [25] combines bribery attack with \(SM.\) It indicates that compared with \(SM,\) bribery selfish mining (\(BSM\)) could bring 10% extra rewards to attackers. However, \(BSM\) may cause the "venal miner's dilemma". [29] proposes an optimal \(BSM\) to avoid the "venal miner's dilemma", where miners are considered perfectly rational. Attackers have lower mining power thresholds when making extra profits compared with \(SM.\)[30] proposes a mixed scenario where attackers alternate their strategy between \(BWH,\ FAW,\) and \(PAW.\) The mixed strategy is proved to be much higher in revenue than \(HM.\) ## 3 Threat Model and Assumption ### Threat Model An adversary can be an individual miner, or a mining pool formed by a collection of miners. Honest miners are profit-driven and could adopt the optimal mining strategy to increase their own profits without launching any mining attacks. Besides, adversaries can create different identities through sybil attacks and participate in multiple open mining pools with different accounts and IDs. Meanwhile, the adversary's mining power is limited to avoid 51% attack. He can allocate their mining power to innocent mining pool (similar to \(HM\) strategy), selfish mining pool (similar to \(SM\) strategy), or other mining attack strategies. More specifically, in \(BSSM\) model, the adversary allocates their mining power to innocent mining pool and selfish mining pool. In \(BSM\) model, adversaries only adopt \(SM.\) Finally, the adversary can create sybil nodes in the network to prioritize the propagation of their generated blocks, which increases the probability of selecting the attacker's branch as the main chain when forking occurs. ### Assumption To simplify our analysis, we make some reasonable assumptions. Our assumptions are similar to those of other selfish mining attacks, such as selfish mining [8], stubborn mining [16], semi-selfish mining [18] and bribery attacks [26, 34]. 1. We normalized the total mining power of the system to 1. The (normalized) mining power of adversary is a value greater than 0 but less than 0.5, which is designed to avoid 51% attacks. 2. Miners are profit-driven. 
Honest miners can adopt the optimal mining strategy they consider to increase their profits, but will not launch mining attacks. This is reasonable because miners are honest but selfish. When the blockchain forks and the lengths of each branch are equal, miners could choose any branch. 3. There are no unintentional forks in the Bitcoin system. This assumption is rational because the probability of unintentional forks occurring in the Bitcoin system can be negligible, approximately 0.41% [31]. Therefore, combined with Assumption 1, the expected reward for a miner is equal to the probability of finding a valid block in each round. Due to the exponential distribution of the time for miners to find a valid block [32], average value is inversely proportional to their mining power, the probability of miners finding a valid block is equal to their normalized mining power. 4. We will normalize the coinbase reward for finding a valid block to 1 instead of 6.25 Bitcoins. In our analysis, miner's rewards are expected as well as normalized. ## 4 Observation and Motivation ### Semi-selfish Mining In semi-selfish mining, the adversary allocates mining power to the honest pools (similar to the honest mining strategy: mining as individual honest miners) and the selfish pools (similar to the selfish mining strategy: mining as selfish miners). In each round, the probability of honest pools generating a valid block is \(\rho\alpha,\) and the probability of selfish pools generating a valid block is \((1-\rho)\alpha.\) Therefore, the probability of other pools generating a valid block is \(1-\alpha.\) The state transition process of semi-selfish mining is shown in Figure 1. The meanings of states \(0,0^{\prime},1,2,3,4,...\) are exactly the same as the states in selfish mining. On the basis, the states \(1^{\prime},2^{\prime},3^{\prime},4^{\prime},...\) indicate that the last block in the public chain is generated by the adversary through honest pools, where the specific number represents the length of the private chain that the adversary reserves or hides. Actually, there is a certain problem in analyzing the rewards of adversary while modeling semi-selfish mining, which ignores the specific situations in which adversary may receive rewards. For example, when an attacker finds a valid block through honest pools, he will publish the block on the public chain and two blocks that are reserved (hidden) by selfish pools at once. The adversary will receive two block rewards regardless of which chain wins eventually (with probability \(\alpha\rho\)). In \(BSSM\) reward analysis, we will revise this issue, as detailed in Section 6.2. ### Stubborn Mining Stubborn mining extends the underlying model of selfish mining attacks. Its mining strategy is more "stubborn", which does not easily give up when leading, falling behind, and advancing together. In each round, the probability of selfish pools generating a valid block is \(\alpha\), and the probability of other pools generating a valid block is \((1-\alpha)\). In addition, when the blockchain forks and the lengths of two branches are equal (one is a private chain of selfish pools, and the other is a public chain of other honest mining pools), the probability of other pools discovering a valid block and publishing it on the private chain of adversary is \(\gamma(1-\alpha)\). Correspondingly, the probability of other pools publishing the block to public chain is \((1-\gamma)(1-\alpha)\). 
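As a concrete reading of the probabilities just stated, the following minimal sketch computes which branch the next block lands on when the private and public branches have equal length in lead stubborn mining; the function name and the parameter values in the example call are illustrative assumptions of ours, not taken from the paper.

```python
def next_block_branch_probs(alpha, gamma):
    """Equal-length fork in lead stubborn mining: where does the next block land?

    alpha : adversary's normalized mining power
    gamma : fraction of other pools that publish their block on the adversary's branch
    """
    p_private = alpha + gamma * (1 - alpha)      # adversary, plus others extending its branch
    p_public = (1 - gamma) * (1 - alpha)         # others extending the public branch
    assert abs(p_private + p_public - 1.0) < 1e-12
    return p_private, p_public

# Arbitrary illustrative values.
print(next_block_branch_probs(alpha=0.3, gamma=0.5))   # -> (0.65, 0.35)
```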
Stubborn mining introduces three strategies by varying the degree of stubbornness of adversaries, which is designated as lead stubborn mining, equal-fork stubborn mining, and trail stubborn mining. The state transition process of three strategies of stubborn mining is shown in Figure 2. To simplify our analysis, we only discuss lead stubborn mining strategy. The meanings of states \(0,0^{\prime},1,2,3,...\) are exactly the same as the states in selfish mining. The states \(1^{\prime},2^{\prime},3^{\prime},...\) indicate that the blockchain forks and the lengths of two branches are equal (one is an adversary's private chain, and the other is a public chain of other honest mining pools), where the specific number indicates the length of hidden private chain of adversaries. Figure 1: The state transition process of semi-selfish mining Figure 2: The state transition process of lead stubborn mining, equal-fork stubborn mining, and trail stubborn mining ### 4.3 Bribery Attack When the blockchain forks and the lengths of two branches are equal, the adversaries may bribe some honest miners in other pools, which brings about the selfish branch of adversary a higher probability of successful competition thus becoming the main chain eventually. The process of bribery attack as shown in Figure 3. The part of bribed honest miners is called the target bribery pools. The reason why the target bribery pools are willing to accept bribes from the adversary is that the attackers will give a portion of the bribery money to the target bribery pools, which ensures that the total reward for the target bribery pools accepting bribes and expanding adversary's branches is no less than refusing bribes. Furthermore, the reward of adversary increases as the probability of the adversary's branch eventually becoming the main chain increases. When the adversaries choose to provide appropriate bribe money, they can obtain higher rewards than honest mining. More specific, **(1)** when adversaries or target bribery pools find a valid block, they will publish it on private chain of adversary. Adversary's private chain wins and becomes the main chain with probability \((\alpha+\beta^{b})\). **(2)** When other pools find a valid block, if they publish it on the public chain of other pools, other pools' public chain wins and becomes the main chain with probability \((1-\gamma)(1-\alpha-\beta^{b})\). **(3)** If they publish it on the private chain of adversary, adversary's private chain wins and becomes the main chain with probability \(\gamma(1-\alpha-\beta^{b})\). ## 5 Bribery Semi-Selfish Mining (\(Bssm\)) ### 5.1 Overview We introduce bribery semi-selfish mining (\(Bssm\)) attack that combines bribery attack with semi-selfish mining. In the observation of bribery attack in Section 4.3, we point out that when the blockchain forks and the lengths of private branch of adversaries and public branch of other pools are equal, the adversaries may bribe some honest miners in other pools, increasing the probability of the private branch of adversary becoming the main chain. Therefore, \(Bssm\) combines bribery attack with semi-selfish mining, which could increase the reward of adversary by adding bribery transactions on adversary's private branch. Similar to \(SSM\), adversary allocates mining power to the honest pools and selfish pools. We adopt \(a\) to represent all adversary pools, \(a_{i}\) to represent adversary's honest pools, and \(a_{s}\) to indicate adversary's selfish pools. 
Accordingly, we use \(b\) to represent target bribery pools, and \(o\) to indicate other pools. When \(a_{s}\) finds a valid block, he will reserve it. When another miner \((o,\ b,\text{ or }a_{i})\) finds a valid block and publish it on public chain, adversaries will release a reserved block on the private chain at once, which brings about forking. \(b\) will choose to mine on public branch (denying bribes) or mine on private branch of adversary (accepting bribes). Once \(b\) chooses to expand private branch, he will claim to adversary that he accepts bribes. Otherwise, \(b\) cannot claim to accept bribes from adversary. After the end of each round, adversary pays bribes to \(b\) who accepts bribes. ### 5.2 Modeling \(Bssm\) **State Transitions and probability.** We model the state transition process of \(Bssm\) as shown in Figure 4. The meanings of states \(k(k\geq 0)\) are exactly the same as the states in selfish mining. The states \(k^{\prime}(k\geq 1)\) indicate that the latest block on public chain is generated by \(a_{i}\), and the private chain is reserved by \(a_{s}\) before the block, where the number \(k\) represents the difference between the length of the private chain and the public chain. More specifically, the length of the private chain reserved by \(a_{s}\) is \((k+1)\). Note that the difference between states Figure 2: The process of bribery attack \(k^{\prime}(k\geq 1)\) and states \(k(k\geq 1)\) is that the former does not release the first reserved block on private chain. The reason is that the latest block on public chain in states \(k^{\prime}(k\geq 1)\) is generated by the adversary, while the latest block on public chain in states \(k(k\geq 1)\) is generated by \(o.\) States \(0^{\prime}_{0},\ 0^{\prime}_{b},\) and \(0^{\prime}_{a}\) represent the bribery initiation stage, where two branches of equal length appear in the system. In detail, state \(0^{\prime}_{0}\) indicates that two branches are formed by \(a\) and \(o.\) State \(0^{\prime}_{b}\) represents that two branches are formed by \(a\) and \(b.\) State \(0^{\prime}_{a}\) represents two branches are formed by \(a_{s}\) and \(a_{l}.\) Next, we will discuss each state transition and probability in detail, as shown in Appendix A. According to Figure 4 of the state transition process of \(BSSM,\) we obtain the following equations: \[\begin{array}{l}\begin{cases}p_{0}=(1-\alpha+\rho\alpha)p_{0}+(1-\alpha)(p_{ 2}+p_{2^{\prime}})+p_{0^{\prime}_{0}}+p_{0^{\prime}_{b}}+p_{0^{\prime}_{a}}\\ p_{1}=(1-\rho)\alpha p_{0}\\ p_{1^{\prime}}=\rho\alpha(p_{2}+p_{2^{\prime}})\\ p_{0^{\prime}_{a}}=(1-\alpha-\beta^{b})(p_{1}+p_{1^{\prime}})\\ p_{0^{\prime}_{a}}=\rho\delta^{b}(p_{1}+p_{1^{\prime}})\\ p_{0^{\prime}_{a}}=\rho\alpha(p_{1}+p_{1^{\prime}})\\ p_{k}=(1-\rho)\alpha p_{k-1}+(1-\alpha)\big{(}p_{k+1}+p_{(k+1^{\prime})}\big{)}, \text{when }k\geq 2\\ p_{k^{\prime}}=(1-\rho)\alpha p_{(k-1)^{\prime}}+\rho\alpha(p_{k+1}+p_{(k+1^{ \prime})}),\text{when }k\geq 2\\ \sum_{k=0}^{+\infty}p_{k}+\sum_{k=1}^{+\infty}p_{k^{\prime}}+p_{0^{\prime}_{0 }}+p_{0^{\prime}_{b}}+p_{0^{\prime}_{a}}=1\end{array} \tag{1}\] **Reward.** We conduct a detailed analysis of the whole possible events (when a new block is generated). 
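The balance equations (1) can be evaluated numerically once the ladder of states is truncated. The sketch below is a minimal illustration rather than the authors' implementation: it assumes a cut-off \(k_{max}\) with \(p_{k_{max}+1}=p_{(k_{max}+1)'}=0\), reads the three bribery-initiation equations as \(p_{0'_o}=(1-\alpha-\beta^{b})(p_{1}+p_{1'})\), \(p_{0'_b}=\beta^{b}(p_{1}+p_{1'})\) and \(p_{0'_a}=\rho\alpha(p_{1}+p_{1'})\) (the subscripts in the extracted equations appear garbled, so this reading follows the state definitions above), and solves the resulting linear system together with the normalization condition by least squares. The parameter values in the example call are arbitrary.

```python
import numpy as np

def bssm_stationary(alpha, beta_b, rho, k_max=40):
    """Solve a truncated version of the BSSM balance equations (1).

    Unknown order: [0, 0'_o, 0'_b, 0'_a, 1..k_max, 1'..k_max'] (k_max >= 2).
    Truncation assumption: p_{k_max+1} = p_{(k_max+1)'} = 0.
    """
    n = 4 + 2 * k_max
    s0, s_oo, s_ob, s_oa = 0, 1, 2, 3
    def idx(k):   return 3 + k           # states k  = 1..k_max
    def idxp(k):  return 3 + k_max + k   # states k' = 1..k_max

    rows, rhs = [], []
    def balance(coeffs, b=0.0):
        r = np.zeros(n)
        for j, c in coeffs:
            r[j] += c
        rows.append(r)
        rhs.append(b)

    # p0 = (1-alpha+rho*alpha) p0 + (1-alpha)(p2+p2') + p_{0'_o} + p_{0'_b} + p_{0'_a}
    balance([(s0, (1 - rho) * alpha), (idx(2), -(1 - alpha)), (idxp(2), -(1 - alpha)),
             (s_oo, -1.0), (s_ob, -1.0), (s_oa, -1.0)])
    # p1 = (1-rho)*alpha*p0   and   p1' = rho*alpha*(p2 + p2')
    balance([(idx(1), 1.0), (s0, -(1 - rho) * alpha)])
    balance([(idxp(1), 1.0), (idx(2), -rho * alpha), (idxp(2), -rho * alpha)])
    # Bribery-initiation states, entered from states 1 and 1'.
    balance([(s_oo, 1.0), (idx(1), -(1 - alpha - beta_b)), (idxp(1), -(1 - alpha - beta_b))])
    balance([(s_ob, 1.0), (idx(1), -beta_b), (idxp(1), -beta_b)])
    balance([(s_oa, 1.0), (idx(1), -rho * alpha), (idxp(1), -rho * alpha)])
    # p_k and p_{k'} for k >= 2 (inflow from k+1 is dropped at the truncation boundary).
    for k in range(2, k_max + 1):
        c_k  = [(idx(k), 1.0), (idx(k - 1), -(1 - rho) * alpha)]
        c_kp = [(idxp(k), 1.0), (idxp(k - 1), -(1 - rho) * alpha)]
        if k < k_max:
            c_k  += [(idx(k + 1), -(1 - alpha)), (idxp(k + 1), -(1 - alpha))]
            c_kp += [(idx(k + 1), -rho * alpha), (idxp(k + 1), -rho * alpha)]
        balance(c_k)
        balance(c_kp)
    # Normalization: all stationary probabilities sum to 1.
    balance([(j, 1.0) for j in range(n)], b=1.0)

    p, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return p

# Arbitrary illustrative parameters.
p = bssm_stationary(alpha=0.3, beta_b=0.1, rho=0.1)
print("p_0 =", round(p[0], 4), " p_1 =", round(p[4], 4))
```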
In \(BSSM,\) when adversaries have a certain block advantage through selfish mining, it does not mean that the adversary's private branch will win in the competition eventually, which is the most significant difference between \(BSSM\) and \(SM.\) It is precisely for this reason that the difficulty of analyzing rewards for \(a,\ b,\) and \(o\) has greatly increased. We observe from Figure 4 that states \(k(k\geq 2)\) and states \(k^{\prime}(k\geq 2)\) will eventually transition to state \(2\) with probability \(\frac{1-\alpha}{1-\alpha+\rho\alpha}\) or state \(2^{\prime}\) with probability \(\frac{\rho\alpha}{1-\alpha+\rho\alpha}.\) Therefore, based on states \(2\) and \(2^{\prime},\) we analyze the winning probability of private chain of \(a\) and public chain of \(o\) respectively in states \(k(k\geq 2)\) or \(k^{\prime}(k\geq 2).\) Before analysis, we need to add two entities \(P^{p}_{b}\) (represents the winning probability of public branch of \(o\) in states \(k(k\geq 2)\) or \(k^{\prime}(k\geq 2))\) and \(P^{s}_{b}\) (represents the winning probability of private branch of \(a\) in states \(k(k\geq 2)\) or \(k^{\prime}(k\geq 2)).\) Figure 4: The state transition process of \(BSSM\) We observe event \(0^{\prime}_{b}\) in Figure 5: **(1)** when \(o\) finds a valid block, he will publish it on public branch with probability \((1-\gamma)(1-\alpha-\beta^{b})\) (public branch wins) or publish it on private branch with probability \(\gamma(1-\alpha-\beta^{b})\) (private branch wins); **(2)** when \(b\) finds a valid block, he will publish it on public branch with probability \(\beta^{b}\) (public branch wins); **(3)** when \(a_{s}\) finds a valid block, he will publish it on private branch with probability \((1-\rho)\alpha\) (private branch wins); **(4)** when \(a_{t}\) finds a valid block, he will publish it on public branch with probability \(\rho\alpha\) (public branch wins). Similarly, we observe event \(0^{\prime}_{b}\): **(1)** when \(o\) or \(b\) finds a valid block, they will publish it on public branch with probability \((1-\gamma)(1-\alpha-\beta^{b})+(1-\gamma)\beta^{b})\) (public branch wins), or publish it on private branch with probability \((\gamma(1-\alpha-\beta^{b})+\gamma\beta^{b})\) (private branch wins); **(2)** when \(a_{s}\) finds a valid block, he will publish it on private branch with probability \((1-\rho)\alpha\) (private branch wins); **(3)** when \(a_{t}\) finds a valid block, he will publish it on public branch with probability \(\rho\alpha\) (public branch wins). Finally, we observe event \(0^{\prime}_{a}\): **(1)** when \(o\) or \(b\) finds a valid block, they will publish it on public branch with probability \(((1-\gamma)(1-\alpha-\beta^{b})+(1-\gamma)\beta^{b})\) (public branch wins), or publish it on private branch with probability \((\gamma(1-\alpha-\beta^{b})+\gamma\beta^{b})\) (private branch wins); **(2)** when \(a_{s}\) finds a valid block, he will publish it on private branch with probability \((1-\rho)\alpha\) (private branch wins); **(3)** when \(a_{t}\) finds a valid block, he will publish it on public branch with probability \(\rho\alpha\) (public branch wins). 
\begin{table} \begin{tabular}{c c c c} \hline State \(s\) & State \(\hat{s}\) & \(\frac{P_{0^{\prime}_{s}}}{1-\alpha+\rho\alpha}\) & \(\frac{\rho\alpha}{1-\alpha-\beta^{b}+\beta^{b}+\rho\alpha}\) \\ \(k(k\geq 2)\) and \(k^{\prime}(k\geq 2)\) & \(0^{\prime}_{o}\) & \(\frac{\rho\alpha}{1-\alpha+\rho\alpha}\) & \(\frac{1-\alpha-\beta^{b}}{1-\alpha-\beta^{b}+\beta^{b}+\rho\alpha}\) \\ \(k(k\geq 2)\) and \(k^{\prime}(k\geq 2)\) & \(0^{\prime}_{b}\) & \(\frac{\rho\alpha}{1-\alpha+\rho\alpha}\) & \(\frac{\beta^{b}}{1-\alpha-\beta^{b}+\beta^{b}+\rho\alpha}\) \\ \hline \end{tabular} \end{table} Table 1: The state transitions of bribery initiation stage in \(Bssm\) Figure 5: Possible events in \(Bssm\) Based on Figure 4, we can get the state transitions of bribery initiation stage in Table 1. Furthermore, we obtain the winning probability \(P_{b}^{s}\) of private branch and \(P_{b}^{p}\) of public branch in states \(k(k\geq 2)\) and \(k^{\prime}(k\geq 2)\) as follows: \[\begin{array}{l}P_{b}^{p}=P_{0_{b}^{s}}\left((1-\gamma)(1-\alpha-\beta^{b})+ \beta^{b}+\rho\alpha\right)+P_{0_{a}^{s}}\left((1-\gamma)(1-\alpha-\beta^{b})+ (1-\gamma)\beta^{b}+\rho\alpha\right)\\ \quad+P_{0_{a}^{s}}\left((1-\gamma)(1-\alpha-\beta^{b})+(1-\gamma)\beta^{b}+ \rho\alpha\right)\\ P_{b}^{s}=\dfrac{1-\alpha}{1-\alpha+\rho\alpha}+P_{0_{a}^{s}}(\gamma(1-\alpha- \beta^{b})+(1-\rho)\alpha)+P_{0_{a}^{s}}(\gamma(1-\alpha-\beta^{b})+\gamma \beta^{b}+(1-\rho)\alpha)\\ \quad+P_{0_{a}^{s}}(\gamma(1-\alpha-\beta^{b})+\gamma\beta^{b}+(1-\rho)\alpha) \end{array} \tag{2}\] Observing Figure 5, we continue to analyze the rewards of each event. For event 0: **(1)** when it transitions to event 0-1, \(a\) gets 1 reward (probability \(\rho\alpha\)); **(2)** when it transitions to event 0-2, the rewards of \(a\), \(o\) and \(b\) are determined later (probability \((1-\rho)\alpha\)); **(3)** when it transitions to event 0-3, \(o\) gets 1 reward (probability \((1-\alpha-\beta^{b})\)); **(4)** when it transitions to event 0-4, \(b\) gets 1 reward (probability \(\beta^{b}\)). For event 0-4, \(b\) and \(b\) are \(\rho\alpha\)(\(\gamma(1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 0-2, \(b\) gets 2 rewards (probability \((1-\gamma)(1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 0-3, \(a\) gets 2 rewards (probability \((1-\rho)\alpha\)); **(4)** when it transitions to event 0-5, \(a\) and \(o\) get 1 reward (probability \(\rho\alpha\)); **(5)** when it transitions to event 0-5, \(a\) and \(o\) get 1 reward (probability \(\gamma(1-\alpha-\beta^{b})\)). For event 0-5, \(a\) and \(b\) gets 2 rewards (probability \((1-\gamma)(1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 0-2, \(a\) and \(b\) get 1 reward (probability \(\beta^{b}\)); **(3)** when it transitions to event 0-3, \(a\) gets 2 rewards (probability \((1-\rho)\alpha\)); **(4)** when it transitions to event 0-5, \(a\) and \(o\) get 1 reward (probability \(\rho\alpha\)); **(5)** when it transitions to event 0-6 and \(b\) chooses to accept the bribes, \(a\) and \(b\) get 1 reward (probability \(\beta^{b}\)). For event 0-4: **(1)** when it transitions to event 0-1, \(a\) gets 2 rewards (probability \(\rho\alpha+(1-\rho)\alpha\)); **(2)** when it transitions to event 0-2, \(a\) and \(b\) get 1 reward (probability \(\beta^{b}\)); **(3)** when it transitions to event 0-3, \(a\) and \(o\) get 1 reward (probability \((1-\alpha-\beta^{b})\)). 
For event 1': **(1)** when it transitions to event 1'-1, \(a\) gets 1 reward (probability \((1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 1'-2, \(a\) gets 1 reward (probability \(\beta^{b}\)); **(3)** when it transitions to event 1'-3, \(a\) gets 1 reward (probability \(\rho\alpha\)); **(4)** when it transitions to event 1'-4, the rewards of \(a\), \(o\) and \(b\) are determined later (probability \((1-\rho)\alpha\)). For event 2': **(1)** when it transitions to event 2'-1, \(a\) gets 3 rewards (probability \((1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 2'-2, \(a\) gets 3 rewards (probability \(\beta^{b}\)); **(3)** when it transitions to event 2'-3, \(a\) gets 1 reward (probability \(\rho\alpha\)); **(4)** when it transitions to event 2'-4, the rewards of \(a\), \(o\) and \(b\) are determined later (probability \((1-\rho)\alpha\)). For event 3': **(1)** when it transitions to event 3'-1, \(a\) gets (\(1+P_{b}^{s}\)) rewards, \(o\) gets \(P_{b}^{p}\) reward (probability \((1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 3'-2, \(a\) gets (\(1+P_{b}^{s}\)) rewards, \(b\) gets \(P_{b}^{p}\) reward (probability \(\beta^{b}\)); **(3)** when it transitions to event 3'-3, \(a\) gets 1 reward (probability \(\rho\alpha\)); **(4)** when it transitions to event 3'-4, the rewards of \(a\), \(o\) and \(b\) are determined later (probability \((1-\rho)\alpha\)). The reward analysis of events \(k^{\prime}(k>3)\) is similar to event 3'.
For event 1: regardless of whether event 1 transitions to event 1-1 (probability \((1-\alpha-\beta^{b})\)), event 1-2 (probability \(\beta^{b}\)), event 1-3 (probability \(\rho\alpha\)), or event 1-4 (probability \((1-\rho)\alpha\)), the rewards of \(a\), \(o\) and \(b\) are determined later. For event 2: **(1)** when it transitions to event 2-1, \(a\) gets 2 rewards (probability \((1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 2-2, \(a\) gets 2 rewards (probability \(\beta^{b}\)); **(3)** when it transitions to event 2-2, \(a\) gets 2 rewards (probability \(\beta^{b}\)); **(3)** when it transitions to event 2-3 (probability \(\rho\alpha\)) or event 2-4 (probability \((1-\rho)\alpha\)), the rewards of \(a\), \(o\) and \(b\) are determined later. For event 3: **(1)** when it transitions to event 3-1, \(a\) gets \(P_{b}^{s}\) reward, \(o\) gets \(P_{b}^{p}\) reward (probability \((1-\alpha-\beta^{b})\)); **(2)** when it transitions to event 3-2, \(a\) gets \(P_{b}^{s}\) reward (probability \(\beta^{b}\)); **(3)** when it transitions to event 3-3 (probability \(\rho\alpha\)) or event 3-4 (probability \((1-\rho)\alpha\)), the rewards of \(a\), \(o\) and \(b\) are determined later. The reward analysis of events \(k(k>3)\) is similar to event 3. \[\begin{array}{l}R_{a}=p_{0}\cdot\rho a+p_{o_{b}^{*}}\cdot\left((1-\rho)\alpha \cdot 2+\rho a+\gamma(1-\alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((1-\rho)\alpha\cdot 2+\rho a+\gamma(1-\alpha- \beta^{b})+\beta^{b}\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((\rho\alpha+(1-\rho)\alpha)\cdot 2+\beta^{b}+(1- \alpha-\beta^{b})\right)\\ \quad+p_{1^{*}}\cdot\left((1-\alpha-\beta^{b})+\beta^{b}+\rho\alpha\right)+p_{ 2^{*}}\cdot\left((1-\alpha-\beta^{b})\cdot 3+\beta^{b}\cdot 3+\rho\alpha\right)\\ \quad+\sum_{l=3}^{+\infty}p_{l^{*}}\cdot\left((1-\alpha-\beta^{b})\cdot(1+P_{ b}^{s})+\beta^{b}\cdot(1+P_{b}^{s})+\rho\alpha\right)\\ \quad+p_{2}\cdot\left((1-\alpha-\beta^{b})\cdot 2+\beta^{b}\cdot 2\right)+\sum_{ l=3}^{+\infty}p_{l^{*}}\cdot\left((1-\alpha-\beta^{b})\cdot P_{b}^{s}+\beta^{b} \cdot P_{b}^{s}\right)\end{array} \tag{4}\] Obviously, \(R_{a}\) is an increasing function with \(\gamma.\) That is to say, bribing more targets can bring more rewards to adversary. When considering the bribes (a fraction \(\varepsilon\) of the total system reward), the \(a\)'s reward \(R_{a}^{B}\) is: \[R_{a}^{B}=(1-\varepsilon)R_{a} \tag{5}\] Obviously, \(R_{a}^{B}\) is a decreasing function with \(\varepsilon.\) That is to say, paying more bribes to target can bring less rewards to adversary. 
Accordingly, when \(b\) chooses to accept the bribes and we consider the bribes, the \(b\)'s reward \(R_{b}^{B}\) is: \[\begin{array}{l}R_{b}^{B}=p_{0}\cdot\beta^{b}+p_{o_{b}^{*}}\cdot\left((1- \gamma)(1-\alpha-\beta^{b})+\beta^{b}\cdot 2+\rho\alpha\right)+p_{o_{a}^{*}} \cdot\beta^{b}+p_{o_{a}^{*}}\cdot\beta^{b}\\ \quad+\sum_{l=3}^{+\infty}p_{l^{*}}\cdot\beta^{b}\cdot P_{b}^{p}+\sum_{ l=3}^{+\infty}p_{l}\cdot\beta^{b}\cdot P_{b}^{p}+\varepsilon\cdot R_{a}\end{array} \tag{6}\] Obviously, \(R_{b}^{B}\) is an increasing function with \(\varepsilon.\) That is to say, paying more bribes to targets can bring more rewards to \(b.\) Finally, when \(b\) chooses to accept the bribes and we consider the bribes, the \(o\)'s reward \(R_{o}^{B}\) is: \[\begin{array}{l}R_{o}^{B}=p_{0}\cdot(1-\alpha-\beta^{b})+p_{o_{b}^{*}}\cdot \left((1-\gamma)(1-\alpha-\beta^{b})+\gamma(1-\alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot 2+\rho \alpha+\gamma(1-\alpha-\beta^{b})\right)+p_{o_{a}^{*}}\cdot(1-\alpha-\beta^{b} )\\ \quad+\sum_{l=3}^{+\infty}p_{l^{*}}\cdot(1-\alpha-\beta^{b})\cdot p_{b}^{p}+ \sum_{l=3}^{+\infty}p_{l}\cdot(1-\alpha-\beta^{b})\cdot P_{b}^{p}\end{array} \tag{7}\] Obviously, \(R_{o}^{B}\) is a decreasing function with \(\gamma.\) That is to say, bribing more targets can bring less rewards to \(o.\) Similarly, we consider when \(b\) chooses to deny the bribes, the \(a\)'s reward \(R_{a}^{B^{\prime}}\) is: \[\begin{array}{l}R_{a}^{B^{\prime}}=p_{0}\cdot\rho\alpha+p_{o_{b}^{*}}\cdot \left((1-\rho)\alpha\cdot 2+\rho\alpha+\gamma(1-\alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((1-\rho)\alpha\cdot 2+\rho\alpha+\gamma(1- \alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((\rho\alpha+(1-\rho)\alpha)\cdot 2+\beta^{b}+(1- \alpha-\beta^{b})\right)\\ \quad+p_{l^{*}}\cdot\left((1-\alpha-\beta^{b})+\beta^{b}+\rho\alpha\right)+p_{ 2^{*}}\cdot\left((1-\alpha-\beta^{b})\cdot 3+\beta^{b}\cdot 3+\rho\alpha\right)\\ \quad+\sum_{l=3}^{+\infty}p_{l^{*}}\cdot\left((1-\alpha-\beta^{b})\cdot(1+P_{ b}^{s})+\beta^{b}\cdot(1+P_{b}^{s})+\rho\alpha\right)\\ \quad+p_{2}\cdot\left((1-\alpha-\beta^{b})\cdot 2+\beta^{b}\cdot 2\right)+\sum_{ l=3}^{+\infty}p_{l}\cdot\left((1-\alpha-\beta^{b})\cdot P_{b}^{s}+\beta^{b} \cdot P_{b}^{s}\right)\end{array} \tag{8}\] Accordingly, when \(b\) chooses to deny the bribe, the \(b\)'s reward \(R_{b}^{B^{\prime}}\) is: \[\begin{array}{l}R_{b}^{B^{\prime}}=p_{0}\cdot\beta^{b}+p_{o_{b}^{*}}\cdot \left((1-\gamma)(1-\alpha-\beta^{b})+\beta^{b}\cdot 2+\rho\alpha\right)\\ \quad+p_{o_{a}^{*}}\cdot\beta^{b}+p_{o_{a}^{*}}\cdot\beta^{b}+\sum_{ l=3}^{+\infty}p_{l^{*}}\cdot\beta^{b}\cdot P_{b}^{p}+\sum_{l=3}^{+\infty}p_{l} \cdot\beta^{b}\cdot P_{b}^{p}\end{array} \tag{9}\] Finally, when \(b\) chooses to deny the bribes, the \(o\)'s reward \(R_{a}^{B^{\prime}}\) is: \[\begin{array}{l}R_{o}^{B^{\prime}}=p_{0}\cdot(1-\alpha-\beta^{b})+p_{o_{b}^{*}} \cdot\left((1-\gamma)(1-\alpha-\beta^{b})+\gamma(1-\alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot 2+\beta^{b}+\rho \alpha+\gamma(1-\alpha-\beta^{b})\right)\\ \quad+p_{o_{a}^{*}}\cdot(1-\alpha-\beta^{b})+\sum_{l=3}^{+\infty}p_{l^{*}}\cdot( 1-\alpha-\beta^{b})\cdot P_{b}^{p}+\sum_{l=3}^{+\infty}p_{l}\cdot(1-\alpha- \beta^{b})\cdot P_{b}^{p}\end{array} \tag{10}\] Theorem 5.1. Once launching \(B\)SSM, the target \(b\) can always obtain a higher reward when he chooses to accept the bribes at the bribery initiation stage. PROOF. 
Comparing the \(b\)'s reward \(R_{b}^{B}\) when \(b\) chooses to accept the bribes (Equation (6)) with the \(b\)'s reward \(R_{b}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (9)), we could derive \(R_{b}^{B}\geq R_{b}^{B^{\prime}}\) since \(0\leq\varepsilon\leq 1\) and \(R_{a}>0\). Once \(a\) adopts \(0<\varepsilon\leq 1\), we could derive \(R_{b}^{B}>R_{b}^{B^{\prime}}\). Therefore, extending \(a\)'s private branch is always the optimal strategy at the bribery initiation stage. THEOREM 5.2. _Once launching \(BSSM\), \(o\) is always forced to suffer losses when \(b\) chooses to accept the bribes at the bribery initiation stage._ PROOF. Comparing the \(o\)'s reward \(R_{o}^{B}\) when \(b\) chooses to accept the bribes (Equation (7)) with the \(o\)'s reward \(R_{o}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (10)), we could derive \(R_{o}^{B}>R_{o}^{B^{\prime}}\) since \(\beta^{B}>0\). Therefore, when \(b\) chooses to accept the bribes at the bribery initiation stage, \(o\) is always forced to suffer losses. THEOREM 5.3. _Once launching \(BSSM\), \(a\) can obtain a higher reward than that in \(SSM\) when he pays proper bribes._ PROOF. The rewards in \(SSM\) are the same as the rewards in \(BSSM\) when the target \(b\) chooses to deny the bribes. Therefore, in order to obtain higher rewards, it is necessary for \(a\) to ensure that \(R_{a}^{B}>R_{a}^{B^{\prime}}\).Comparing the \(a\)'s reward \(R_{a}^{B}\) when \(b\) chooses to accept the bribes (Equation (5)) with the \(a\)'s reward \(R_{a}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (8)), we could derive: \[R_{a}^{B}>R_{a}^{B^{\prime}}\Rightarrow\varepsilon<\frac{p_{0_{o}^{b}}\cdot \beta^{b}}{p_{o_{o}^{b}}\cdot\beta^{b}+R_{a}^{B^{\prime}}} \tag{11}\] The upper bound of \(a\)'s reward is \(R_{a}\) in Equation (4) when \(\varepsilon=0\). **Chain Growth Rate.**[8, 16, 18] indicates that the attack strategy based on selfish mining can lead to a decrease in the growth rate of the main chain. We note that the main chain here refers to the public chain generated by honest miners, rather than the private chain reserved by adversary. When adversary releases multiple reserved blocks, which makes the private chain longer than the public chain, and the private chain becomes the main chain eventually. According to the definition of the main chain growth rate, we calculate the main chain growth rate for \(SM\), \(SSM\), and \(BSSM\) respectively: \[\begin{cases}gr_{sm}=\alpha\cdot 0+(1-\alpha)\cdot 1\\ gr_{ssm}=(1-\rho)\alpha\cdot 0+(1-\alpha)\cdot 1+\rho\alpha\cdot 1\\ gr_{bssm}=(1-\rho)\alpha\cdot 0+(1-\alpha-\beta^{b})\cdot 1+\rho\alpha\cdot 1+ \beta^{b}\cdot 1\end{cases} \tag{12}\] THEOREM 5.4. _Once launching \(BSSM\), the chain growth rate of \(BSSM\) and \(SSM\) are equal, and both are greater than the chain growth rate of \(SM\)._ PROOF. Observing \(gr_{sm}\), \(gr_{ssm}\), and \(gr_{bssm}\), we could derive \(gr_{bssm}-gr_{sm}=gr_{bssm}-gr_{sm}=\rho\alpha\), which means \(gr_{bssm}=gr_{ssm}>gr_{sm}\) since \(\rho\alpha>0\). **Quantitative Analysis and Simulation.** Previous studies have shown that \(SM\) can lead to a decrease in block generation rate (i.e., \(R_{a}^{B}+R_{a}^{B}+R_{b}^{B}\leq 1\) and \(R_{a}^{B^{\prime}}+R_{b}^{B^{\prime}}+R_{b}^{B^{\prime}}\leq 1\)). Therefore, we first normalize the relative reward entity \(\tau(\frac{R_{a}^{B}}{R_{a}^{B}+R_{a}^{B}+R_{b}^{B}}\cdot\frac{R_{a}^{B^{ \prime}}}{R_{a}^{B^{\prime}}+R_{b}^{B^{\prime}}+R_{b}^{B^{\prime}}})\). 
Additionally, we use a specific example to demonstrate the adversary's relative extra reward when launching \(BSSM\). Similar to [24], we adopt expected relative extra reward (RER) to evaluate \(BSSM\). RER can be expressed as: \[RER_{\tau}^{S_{1}S_{2}}=\frac{R_{\tau}^{S_{1}}-R_{\tau}^{S_{2}}}{R_{\tau}^{S_{ 2}}} \tag{13}\] \(\tau\) represents an entity, which could be adversary (\(a\)), target bribery (\(b\)) pool or other pool (\(o\)). \(S_{1}\) and \(S_{2}\) represent different strategies, which include honest mining (\(H\)), semi-selfish mining (\(SSM\)), \(b\) accepts the bribes in bribery semi-selfish mining (\(BSSM\)), \(b\) denotes the bribes in bribery semi-selfish mining (\(BSSM\)), selfish mining (\(SM\)), \(b\) accepts the bribes in bribery stubborn mining (\(BSSM\)) and \(b\) denies the bribes in bribery stubborn mining (\(BSM^{\prime}\)). Therefore, \(RER_{\tau}^{S_{1}}\) indicates the RER of entity \(S_{1}\) when adopting mining strategy \(\tau\). Obviously, \(RER_{a}^{H}=\alpha\), which means the adversary's pool (\(a\)) who possesses mining power of \(\alpha\) could obtain the RER of \(\alpha\). First, we consider the RER of \(a,\ o\) and \(b\) in different strategies (accepting the bribes or denying the bribes) when \(\beta^{b}=0.1,\ \varepsilon=0.02\) and \(\rho=0.1.\) Fig 6-a, b, c shows the RER of \(a,\ o\) and \(b\) when accepting the bribes comparing with denying. As we expect, without considering \(\gamma,\ a\) can obtain higher RER when he possesses less mining power. More specifically, the left side of the solid line indicates that \(BSSM\) is the dominant strategy, while the right side of the solid line indicates that \(BSSM^{\prime}\) is the dominant strategy. Similar to our analysis results, the winning area of \(BSSM\) as the dominant strategy is greater than that of \(BSSM^{\prime}\) as the dominant strategy. Based on THEOREM 5.3, \(a\) can use a smaller \(\varepsilon\) to expand the winning area in \(BSSM\) (Fig 6-a). In addition, once launching \(BSSM,\ o\) will always suffer losses (Fig 6-b). Fig 6-c shows that accepting bribes and expanding the private branch of adversary is always the optimal strategy for the target \(b\) in bribery initiation state, which is consistent with THEOREM 5.1. Fig 6-d illustrates that no matter how much mining power the adversary possesses, \(a\) prefers to launch \(BSSM\) rather than adopt \(SSM.\) In detail, launching \(BSSM\) will harm the profits of \(o,\) and is beneficial for \(b\) to obtain higher RER. Adversaries with less mining power are more likely to get higher RER by launching \(BSSM,\) regardless of \(\gamma.\) This result indicates that the large mining pools lack sufficient motivation to launch \(BSSM.\) Furthermore, we consider the RER of \(a\) in different strategies (\(BSSM\) or \(BSSM^{\prime}\)) when \(\rho=0.1,\ \beta^{b}=0.1\) or \(0.3,\) comparing with \(H,\ SM\) and \(SSM.\) We observe Figure 7, which indicates that adversary will definitely obtain higher rewards compared with denying the bribes when the target \(b\) chooses to accept the bribes, regardless of \(\alpha\) and \(\gamma.\) More specifically, Fig 7-a and b show the \(RER_{a}^{BSSM,H}\) and \(RER_{a}^{BSSM^{\prime},H}\) when \(\beta^{b}=0.1\) or \(0.3.\) A larger \(\gamma\) will result in higher rewards for adversary, regardless of whether the target \(b\) accepts or denies the bribes. 
Fig 7-c and d show the \(RER_{a}^{BSSM,SM}\) and \(RER_{a}^{BSSM^{\prime},SM}\) when \(\beta^{b}=0.1\) or \(0.3.\) Adversaries with small mining power could obtain higher rewards compared with \(SM\) when launching \(BSSM\) or \(BSSM^{\prime}.\) In other words, the large mining pools have no motivation to launch \(BSSM\) or \(BSSM^{\prime}.\) Fig 7-e and f show the \(RER_{a}^{BSSM,SSM}\) and \(RER_{a}^{BSSM^{\prime},SSM}\) when \(\beta^{b}=0.1\) or \(0.3.\) Similarly, adversaries with small mining power are more profitable in launching \(BSSM\) or \(BSSM^{\prime}\) compared with \(SSM.\) The RER of adversaries will decrease when \(\alpha\) and \(\gamma\) increase. The reason is that \(\gamma\) represents the proportion of \(o\) choosing to extend the private branch of adversaries. More specifically, we further consider the RER of \(\mathbf{a},\ \mathbf{o}\) and \(\mathbf{b}\) when the target \(\mathbf{b}\) chooses to accept the bribes compared with denying in Figure 8. The left side of the solid line in Fig 8-a indicates that adversaries can obtain higher rewards by adopting \(\ BSSM\) compared with \(\ BSSM^{\prime}\). Conversely, the right side of the solid line represents that \(\ BSSM^{\prime}\) is the optimal strategy for adversary. Besides, the RER of adversaries will increase when \(\rho\) (the proportion of adversary adopting honest mining) decreases. Fig 8-b indicates that once the target \(\mathbf{b}\) chooses to accept the bribes, \(\mathbf{o}\) will suffer losses. Similarly, Fig 8-c shows that the target \(\mathbf{b}\) prefers accepting the bribes to denying, which means choosing to accept the bribes is always the optimal strategy. The simulation results are completely consistent with our previous theoretical analysis of the RER of \(\mathbf{a},\ \mathbf{o}\) and \(\mathbf{b}\). Finally, we consider the chain growth rate when adversaries adopt different strategies (\(SM\) or \(BSSM\)). More specifically, Figure 9 shows \(BSSM\) and \(SM\)'s chain growth rate when \(\rho=0.1\) or \(0.3\). As expected, the higher mining power adversaries possess, the smaller the chain growth rate. This is because there is an inverse correlation between the mining power of \(\mathbf{o}\) and \(\mathbf{a}\). The growth rate of the main chain mainly depends on \(\mathbf{o}\)'s ability to discover a new block. In addition, a larger \(\rho\) will increase the probability of adversary generating a new block in the main chain. The above simulation results are consistent with our previous theoretical analysis of chain growth rate. Figure 7: \(\mathbf{RE}_{\mathbf{a}}\) when \(\mathbf{\beta^{b}=0.1}\) or \(\mathbf{\beta^{b}=0.3}\). ## 6 Bribery Stubborn Mining (\(Bsm\)) ### Overview We introduce bribery stubborn mining (\(BSM\)) attack that combines bribery attack with stubborn mining, which could increase the reward of adversary by adding bribery transactions on adversary's private branch. In \(BSM\), the adversaries adopt selfish mining with the whole mining power. We adopt \(a\) to represent all adversary pools. Once \(a\) finds a valid block, he will reserve it and form private chain. However, when another miner (\(o\) or \(b\)) finds a valid block, he will publish it on the public chain, and then \(a\) will release a reserved block at once, which brings about forking. \(b\) will choose to mine on public branch (denying bribes) or mine on private branch of adversary (accepting bribes). The bribery payment process is similar to \(BSSM\). 
### Modeling \(BSM\)

**State Transitions and Probability.** We model the state transition process of \(BSM\) as shown in Figure 10. The meanings of states \(k(k\geq 0)\) and states \(k^{\prime}(k\geq 1)\) are exactly the same as the states in stubborn mining. States \(0^{\prime}_{o}\) and \(0^{\prime}_{b}\) represent the bribery initiation stage. More specifically, state \(0^{\prime}_{o}\) indicates that two branches are formed by \(a\) and \(o\). State \(0^{\prime}_{b}\) represents that two branches are formed by \(a\) and \(b\). Next, we will discuss each state transition and probability in detail, as shown in Appendix B. According to the state transition process of \(BSM\) in Figure 10, we obtain the following equations: \[\begin{cases}p_{0}=(1-\alpha)p_{0}+p_{0^{\prime}_{o}}+p_{0^{\prime}_{b}}\\ p_{0^{\prime}_{o}}=(1-\alpha-\beta^{b})(p_{1}+p_{1^{\prime}})\\ p_{0^{\prime}_{b}}=\beta^{b}(p_{1}+p_{1^{\prime}})\\ p_{1^{\prime}}=(1-\alpha)(p_{2}+p_{2^{\prime}})\\ p_{k}=\alpha p_{k-1},\text{when }k\geq 1\\ p_{k^{\prime}}=\alpha p_{(k-1)^{\prime}}+(1-\alpha)(p_{k+1}+p_{(k+1)^{\prime}}),\text{when }k\geq 2\\ \sum_{k=0}^{+\infty}p_{k}+\sum_{k=1}^{+\infty}p_{k^{\prime}}+p_{0^{\prime}_{o}}+p_{0^{\prime}_{b}}=1\end{cases} \tag{13}\]

Figure 10: The state transition process of \(BSM\)

Figure 11: Possible events in \(BSM\)

**Reward.** We conduct a detailed analysis of all possible events. We observe from Figure 10 that the system will eventually transition to state \(0^{\prime}_{o}\) with probability \((1-\alpha-\beta^{b})\) or to state \(0^{\prime}_{b}\) with probability \(\beta^{b}\), regardless of how many blocks of advantage the adversary possesses through block withholding. Therefore, we need to analyze the winning probabilities of \(a\)'s private chain and of the public chain, respectively. Before the analysis, we define two quantities: \(P^{p}_{b}\) (the winning probability of the public branch) and \(P^{s}_{b}\) (the winning probability of \(a\)'s private branch). We observe event \(0^{\prime}_{b}\) in Figure 11: **(1)** when \(o\) finds a valid block, he will publish it on the public branch with probability \((1-\gamma)(1-\alpha-\beta^{b})\) (public branch wins) or publish it on the private branch with probability \(\gamma(1-\alpha-\beta^{b})\) (private branch wins); **(2)** when \(b\) finds a valid block, he will publish it on the public branch with probability \(\beta^{b}\) (public branch wins); **(3)** when \(a\) finds a valid block, he will publish it on the private branch with probability \(\alpha\) (private branch wins). Similarly, we observe event \(0^{\prime}_{o}\): **(1)** when \(o\) or \(b\) finds a valid block, they will publish it on the public branch with probability \(((1-\gamma)(1-\alpha-\beta^{b})+(1-\gamma)\beta^{b})\) (public branch wins), or publish it on the private branch with probability \((\gamma(1-\alpha-\beta^{b})+\gamma\beta^{b})\) (private branch wins); **(2)** when \(a\) finds a valid block, he will publish it on the private branch with probability \(\alpha\) (private branch wins). For states \(k(k\geq 1)\) and states \(k^{\prime}(k\geq 1)\), they transition to state \(0^{\prime}_{b}\) with probability \(P_{0^{\prime}_{b}}=\frac{\beta^{b}}{1-\alpha-\beta^{b}+\beta^{b}}\), and transition to the state \(0^{\prime}_{o}\) with probability \(P_{0^{\prime}_{o}}=\frac{1-\alpha-\beta^{b}}{1-\alpha-\beta^{b}+\beta^{b}}\).
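The balance equations above can be checked numerically by truncating the state machine at a maximum lead and solving for the stationary distribution. The following sketch is ours (the truncation level, the parameter values, and the NumPy power-iteration solver are assumptions for illustration, not part of the original analysis):

```python
import numpy as np

def bsm_stationary(alpha, beta_b, k_max=60):
    """Stationary distribution of the BSM state machine, truncated at lead k_max.
    States: 0, 0'_o ("0o"), 0'_b ("0b"), then k and k' for k = 1..k_max."""
    idx = {"0": 0, "0o": 1, "0b": 2}
    for k in range(1, k_max + 1):
        idx[f"{k}"] = 2 * k + 1
        idx[f"{k}'"] = 2 * k + 2
    n = len(idx)
    P = np.zeros((n, n))
    other = 1.0 - alpha - beta_b

    P[idx["0"], idx["0"]] = 1.0 - alpha   # o or b finds a block: stay at 0
    P[idx["0"], idx["1"]] = alpha         # a finds a block: lead becomes 1
    P[idx["0o"], idx["0"]] = 1.0          # bribery states always resolve to 0
    P[idx["0b"], idx["0"]] = 1.0
    for k in range(1, k_max + 1):
        nxt = f"{min(k + 1, k_max)}"      # truncation: lead saturates at k_max
        P[idx[f"{k}"], idx[nxt]] += alpha
        P[idx[f"{k}'"], idx[nxt + "'"]] += alpha
        if k == 1:                        # from lead 1 the fork enters a bribery state
            P[idx["1"], idx["0o"]] = other
            P[idx["1"], idx["0b"]] = beta_b
            P[idx["1'"], idx["0o"]] = other
            P[idx["1'"], idx["0b"]] = beta_b
        else:                             # a non-adversarial block reduces the lead
            P[idx[f"{k}"], idx[f"{k-1}'"]] = 1.0 - alpha
            P[idx[f"{k}'"], idx[f"{k-1}'"]] = 1.0 - alpha

    p = np.full(n, 1.0 / n)
    for _ in range(20000):                # power iteration on this finite chain
        p = p @ P
    return {name: p[i] for name, i in idx.items()}

probs = bsm_stationary(alpha=0.3, beta_b=0.1)
print(probs["0"], probs["0o"], probs["0b"])
```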
We further derive the winning probability \(P^{s}_{b}\) of the private branch and \(P^{p}_{b}\) of the public branch in states \(k(k\geq 1)\) and \(k^{\prime}(k\geq 1)\) as follows: \[\begin{cases}P^{p}_{b}=P_{0^{\prime}_{b}}\left((1-\gamma)(1-\alpha-\beta^{b})+ \beta^{b}\right)+P_{0^{\prime}_{o}}\left((1-\gamma)(1-\alpha-\beta^{b})+(1- \gamma)\beta^{b}\right)\\ P^{s}_{b}=P_{0^{\prime}_{b}}(\gamma(1-\alpha-\beta^{b})+\alpha)+P_{0^{\prime}_ {o}}(\gamma(1-\alpha-\beta^{b})+\gamma\beta^{b}+\alpha)\end{cases} \tag{14}\] Observing Figure 11, we continue to analyze the rewards of each event. For event \(0\): **(1)** when it transitions to event \(0\)-\(1\), the rewards of \(a\), \(o\) and \(b\) are determined later (probability \(\alpha\)); **(2)** when it transitions to event \(0\)-\(2\), \(o\) gets \(1\) reward (probability \((1-\alpha-\beta^{b})\)); **(3)** when it transitions to event \(0\)-\(3\), \(b\) gets \(1\) reward (probability \(\beta^{b}\)). For event \(0^{\prime}_{b}\): **(1)** when it transitions to event \(0^{\prime}_{b}\)-\(1\), \(o\) and \(b\) each get \(1\) reward (probability \((1-\gamma)(1-\alpha-\beta^{b})\)); **(2)** when it transitions to event \(0^{\prime}_{b}\)-\(2\), \(b\) gets \(2\) rewards (probability \(\beta^{b}\)); **(3)** when it transitions to event \(0^{\prime}_{b}\)-\(3\), where \(o\) extends the private branch, \(a\) and \(o\) each get \(1\) reward (probability \(\gamma(1-\alpha-\beta^{b})\)); **(4)** when \(a\) extends the private branch, \(a\) gets \(2\) rewards (probability \(\alpha\)). For event \(0^{\prime}_{o}\): when \(o\) extends the public branch, \(o\) gets \(2\) rewards (probability \((1-\gamma)(1-\alpha-\beta^{b})\)); when \(o\) extends the private branch, \(a\) and \(o\) each get \(1\) reward (probability \(\gamma(1-\alpha-\beta^{b})\)); when it transitions to event \(0^{\prime}_{o}\)-\(3\), \(a\) gets \(2\) rewards (probability \(\alpha\)); when it transitions to event \(0^{\prime}_{o}\)-\(5\) and \(b\) chooses to accept the bribes, \(a\) and \(b\) each get \(1\) reward (probability \(\beta^{b}\)). For event \(1^{\prime}\): **(1)** when it transitions to event \(1^{\prime}\)-\(1\), \(a\) gets \(P^{s}_{b}\) reward, \(o\) gets \(P^{p}_{b}\) reward (probability \((1-\gamma)(1-\alpha-\beta^{b})\)); **(2)** when it transitions to event \(1^{\prime}\)-\(2\), \(a\) gets \(P^{s}_{b}\) reward, \(o\) gets \(P^{p}_{b}\) reward (probability \((1-\gamma)\beta^{b}\)); **(3)** when it transitions to event \(1^{\prime}\)-\(3\), \(a\) gets \(1\) reward (probability \(\gamma(1-\alpha-\beta^{b})\)); **(4)** when it transitions to event \(1^{\prime}\)-\(4\), \(a\) gets \(1\) reward (probability \(\gamma\beta^{b}\)); **(5)** when it transitions to event \(1^{\prime}\)-\(5\), the rewards of \(a\), \(o\) and \(b\) are determined later (probability \(\alpha\)). The reward analysis of events \(k^{\prime}(k\geq 2)\) is similar to event \(1^{\prime}\). For event \(1\): the rewards of \(a\), \(o\) and \(b\) are determined later, regardless of whether it transitions to event \(1\)-\(1\) (probability \(\alpha\)), event \(1\)-\(2\) (probability \((1-\alpha-\beta^{b})\)) or event \(1\)-\(3\) (probability \(\beta^{b}\)). The reward analysis of events \(k(k\geq 2)\) is similar to event \(1\).
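Before assembling the reward expressions below, Equation (14) can be evaluated directly. A small illustrative sketch (the parameter values are arbitrary placeholders, not results from the paper):

```python
def branch_win_probabilities(alpha, beta_b, gamma):
    """Winning probabilities of the public (P_b^p) and private (P_b^s) branches
    for states k >= 1 and k' >= 1, following Equation (14)."""
    other = 1.0 - alpha - beta_b
    p_to_0b = beta_b / (other + beta_b)   # eventually reach bribery state 0'_b
    p_to_0o = other / (other + beta_b)    # eventually reach bribery state 0'_o
    p_public = (p_to_0b * ((1 - gamma) * other + beta_b)
                + p_to_0o * ((1 - gamma) * other + (1 - gamma) * beta_b))
    p_private = (p_to_0b * (gamma * other + alpha)
                 + p_to_0o * (gamma * other + gamma * beta_b + alpha))
    return p_public, p_private

print(branch_win_probabilities(alpha=0.3, beta_b=0.1, gamma=0.5))
```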
When \(b\) chooses to accept the bribes, \(a\)'s system reward \(R_{a}\) is: \[\begin{split} R_{a}&=p_{0^{\prime}_{b}}\cdot(\alpha \cdot 2+\gamma(1-\alpha-\beta^{b}))+p_{0^{\prime}_{o}}\cdot(\alpha\cdot 2+\gamma(1-\alpha-\beta^{b})+ \beta^{b})\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left((1-\gamma)(1- \alpha-\beta^{b})\cdot P^{s}_{b}+\gamma(1-\alpha-\beta^{b})+(1-\gamma)\beta^{b} \cdot P^{s}_{b}+\gamma\beta^{b}\right)\\ &=p_{0^{\prime}_{b}}\cdot(\alpha\cdot 2+\gamma(1-\alpha-\beta^{b}))+p_{0^{ \prime}_{o}}\cdot(\alpha\cdot 2+\gamma(1-\alpha-\beta^{b})+\beta^{b})\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left(\gamma(1-P^{s}_{b}) +P^{s}_{b}\right)(1-\alpha)\end{split} \tag{15}\] When considering the bribes (a fraction \(\varepsilon\) of the adversary's reward), \(a\)'s reward \(R_{a}^{B}\) is: \[R_{a}^{B}=(1-\varepsilon)R_{a} \tag{16}\] Accordingly, when \(b\) chooses to accept the bribes and we consider the bribes, \(b\)'s reward \(R_{b}^{B}\) is: \[R_{b}^{B}=p_{0}\cdot\beta^{b}+p_{0^{\prime}_{b}}\cdot\left((1-\gamma)(1-\alpha- \beta^{b})+\beta^{b}\cdot 2\right)+p_{0^{\prime}_{o}}\cdot\beta^{b}+\varepsilon \cdot R_{a} \tag{17}\] Finally, when \(b\) chooses to accept the bribes and we consider the bribes, \(o\)'s reward \(R_{o}^{B}\) is: \[\begin{split} R_{o}^{B}&=p_{0}\cdot(1-\alpha-\beta^{b})+p_{0^{\prime}_{b}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})+\gamma(1-\alpha-\beta^{b})\right)\\ &+p_{0^{\prime}_{o}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot 2+\gamma(1-\alpha-\beta^{b})\right)\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot P_{b}^{p}+(1-\gamma)\beta^{b}\cdot P_{b}^{p}\right)\\ &=p_{0}\cdot(1-\alpha-\beta^{b})+p_{0^{\prime}_{b}}\cdot(1-\alpha-\beta^{b})+p_{0^{\prime}_{o}}\cdot(2-\gamma)(1-\alpha-\beta^{b})\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot P_{b}^{p}+(1-\gamma)\beta^{b}\cdot P_{b}^{p}\right)\end{split} \tag{18}\] Similarly, when \(b\) chooses to deny the bribes, \(a\)'s reward \(R_{a}^{B^{\prime}}\) is: \[\begin{split} R_{a}^{B^{\prime}}&=p_{0^{\prime}_{b}}\cdot(\alpha\cdot 2+\gamma(1-\alpha- \beta^{b}))+p_{0^{\prime}_{o}}\cdot\left(\alpha\cdot 2+\gamma(1-\alpha-\beta^{b})\right)\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left((1-\gamma)(1- \alpha-\beta^{b})\cdot P_{b}^{s}+\gamma(1-\alpha-\beta^{b})+(1-\gamma)\beta^ {b}\cdot P_{b}^{s}+\gamma\beta^{b}\right)\end{split} \tag{19}\] Accordingly, when \(b\) chooses to deny the bribes, \(b\)'s reward \(R_{b}^{B^{\prime}}\) is: \[R_{b}^{B^{\prime}}=p_{0}\cdot\beta^{b}+p_{0^{\prime}_{b}}\cdot\left((1-\gamma )(1-\alpha-\beta^{b})+\beta^{b}\cdot 2\right)+p_{0^{\prime}_{o}}\cdot\beta^{b} \tag{20}\] Finally, when \(b\) chooses to deny the bribes, \(o\)'s reward \(R_{o}^{B^{\prime}}\) is: \[\begin{split} R_{o}^{B^{\prime}}&=p_{0}\cdot(1-\alpha-\beta^{b})+p_{0^{\prime}_{b}}\cdot \left((1-\gamma)(1-\alpha-\beta^{b})+\gamma(1-\alpha-\beta^{b})\right)\\ &+p_{0^{\prime}_{o}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b})\cdot 2+\beta^{b}+ \gamma(1-\alpha-\beta^{b})\right)\\ &+\sum_{i=1}^{+\infty}p_{i^{\prime}}\cdot\left((1-\gamma)(1-\alpha-\beta^{b}) \cdot P_{b}^{p}+(1-\gamma)\beta^{b}\cdot P_{b}^{p}\right)\end{split} \tag{21}\] THEOREM 6.1. _Once launching_\(BSM\), _the target_\(b\)_can always obtain a higher reward when he chooses to accept the bribes at the bribery initiation stage._ PROOF. 
Comparing \(b\)'s reward \(R_{b}^{B}\) when \(b\) chooses to accept the bribes (Equation (17)) with \(b\)'s reward \(R_{b}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (20)), we could derive \(R_{b}^{B}\geq R_{b}^{B^{\prime}}\) since \(0\leq\varepsilon\leq 1\) and \(R_{a}>0\). Once \(a\) adopts \(0<\varepsilon\leq 1\), we could derive \(R_{b}^{B}>R_{b}^{B^{\prime}}\). Therefore, extending \(a\)'s private branch is always the optimal strategy at the bribery initiation stage. THEOREM 6.2. _Once launching_\(BSM\), \(o\)_is always forced to suffer losses when \(b\)_chooses to accept the bribes at the bribery initiation stage._ PROOF. Comparing \(o\)'s reward \(R_{o}^{B}\) when \(b\) chooses to accept the bribes (Equation (18)) with \(o\)'s reward \(R_{o}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (21)), we could derive \(R_{o}^{B}<R_{o}^{B^{\prime}}\) since \(\beta^{b}>0\). Therefore, when \(b\) chooses to accept the bribes at the bribery initiation stage, \(o\) is always forced to suffer losses. THEOREM 6.3. _Once launching_\(BSM\), \(a\)_can obtain a higher reward than that in stubborn mining when he pays proper bribes._ PROOF. The rewards in stubborn mining are the same as the rewards in \(BSM\) when the target \(b\) chooses to deny the bribes. Therefore, in order to obtain higher rewards, it is necessary for \(a\) to ensure that \(R_{a}^{B}>R_{a}^{B^{\prime}}\). Comparing \(a\)'s reward \(R_{a}^{B}\) when \(b\) chooses to accept the bribes (Equation (16)) with \(a\)'s reward \(R_{a}^{B^{\prime}}\) when \(b\) chooses to deny the bribes (Equation (19)), we could derive: \[R_{a}^{B}>R_{a}^{B^{\prime}}\Rightarrow\varepsilon<\frac{p_{0^{\prime}_{o}}\cdot \beta^{b}}{p_{0^{\prime}_{o}}\cdot\beta^{b}+R_{a}^{B^{\prime}}} \tag{22}\] The upper bound of \(a\)'s reward is \(R_{a}\) in Equation (15) when \(\varepsilon=0\). **Quantitative Analysis and Simulation.** We use the RER in Equation (13) to evaluate \(BSM\). First, we consider the RER of \(a\), \(o\) and \(b\) in different strategies (accepting the bribes or denying the bribes) when \(\beta^{b}=0.1\), \(\varepsilon=0.02\). Fig 12-a, b, c shows the RER of \(a\), \(o\) and \(b\) when accepting the bribes compared with denying. As we expect, without considering \(\gamma\), \(a\) can obtain higher RER when he possesses less mining power. More specifically, the left side of the solid line indicates that \(BSM\) is the dominant strategy, while the right side of the solid line indicates that \(BSM^{\prime}\) is the dominant strategy. Obviously, the winning area of \(BSM\) as the dominant strategy is greater than that of \(BSM^{\prime}\). Based on THEOREM 6.3, \(a\) can use a smaller \(\varepsilon\) to expand the winning area in \(BSM\) (Fig 12-a). In addition, once launching \(BSM\), \(o\) will always suffer losses (Fig 12-b). Fig 12-c shows that accepting the bribes and expanding the adversary's private branch is always the optimal strategy for the target \(b\) at the bribery initiation stage, which is consistent with THEOREM 6.1. In detail, launching \(BSM\) will harm the profits of \(o\), and is beneficial for \(b\) to obtain higher RER. Adversaries with less mining power are more likely to get higher RER by launching \(BSM\), regardless of \(\gamma\). This result indicates that the large mining pools lack sufficient motivation to launch \(BSM\). Furthermore, we consider the RER of \(a\) in \(BSM\) when \(\beta^{b}=0.1\), \(\varepsilon=0.02\), comparing with \(H\) and \(SM\) in Figure 13. 
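As a numerical check of THEOREM 6.3, the bribe bound of Equation (22) and the adversary's gain from having the bribes accepted can be evaluated with the helper sketches above. The snippet below is purely illustrative: it reuses our `bsm_stationary` and `branch_win_probabilities` sketches, and the parameter values and truncation level are our own choices, not the paper's.

```python
def bsm_adversary_rewards(alpha, beta_b, gamma, k_max=60):
    """Evaluate Equations (15), (19) and (22) from the stationary distribution."""
    p = bsm_stationary(alpha, beta_b, k_max)
    _, p_s = branch_win_probabilities(alpha, beta_b, gamma)
    other = 1.0 - alpha - beta_b
    sum_pk_prime = sum(p[f"{k}'"] for k in range(1, k_max + 1))

    r_a_accept = (p["0b"] * (2 * alpha + gamma * other)            # Eq. (15)
                  + p["0o"] * (2 * alpha + gamma * other + beta_b)
                  + sum_pk_prime * (gamma * (1 - p_s) + p_s) * (1 - alpha))
    r_a_deny = (p["0b"] * (2 * alpha + gamma * other)              # Eq. (19)
                + p["0o"] * (2 * alpha + gamma * other)
                + sum_pk_prime * (gamma * (1 - p_s) + p_s) * (1 - alpha))
    eps_max = p["0o"] * beta_b / (p["0o"] * beta_b + r_a_deny)     # Eq. (22)
    return r_a_accept, r_a_deny, eps_max

r_acc, r_den, eps_max = bsm_adversary_rewards(alpha=0.3, beta_b=0.1, gamma=0.0)
eps = 0.02
print("the bribe fraction must satisfy eps <", round(eps_max, 4))
print("RER_a^(BSM,BSM') =", ((1 - eps) * r_acc - r_den) / r_den)
```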
We observe Fig 13-a, which shows that adversaries have an advantage in adopting \(BSM\) compared with \(SM\) in specific situations. More specifically, the upper side of the solid line indicates that adopting \(BSM\) is more profitable for adversaries, while the lower side of the solid line shows that launching \(SM\) is the optimal strategy. Fig 13-b depicts the RER of adversaries when \(\beta^{b}=0.1\), \(\varepsilon=0.02\), comparing with \(H\). Furthermore, the left side of the solid line indicates that \(H\) is the optimal strategy, while the right side of the solid line shows that adopting \(BSM\) is more profitable for adversaries. As expected, adversaries with high mining power have more motivation to launch \(BSM\). Finally, we consider the RER of adversaries in different strategies (\(BSM\) and \(BSM^{\prime}\)) when \(\beta^{b}=0.1\) or \(\beta^{b}=0.3\), comparing with \(H\) in Figure 14. Figure 14 shows that the adversary always obtains a higher RER when the bribes are accepted. More specifically, Fig 14-a shows the \(RER_{a}^{BSM,H}\) and \(RER_{a}^{BSM^{\prime},H}\) when \(\beta^{b}=0.1\), and Fig 14-b shows the \(RER_{a}^{BSM,H}\) and \(RER_{a}^{BSM^{\prime},H}\) when \(\beta^{b}=0.3\). A larger \(\gamma\) will result in higher rewards for the adversary, regardless of whether the target \(b\) accepts or denies the bribes, which is consistent with our theoretical analysis of \(BSM\).

## 7 The Bribery Miner's Dilemma

In selfish-mining-based attacks such as \(BSSM\) and \(BSM\), the adversary can get extra rewards by deliberately forking. Previous work has pointed out that the adversary's extra reward comes from the loss of \(b\) and \(o\) [8, 24, 25]. In more detail, once \(o\) expands the private branch of \(a\) instead of the public branch of \(b\) (event \(0^{\prime}_{b}\)-3 in \(BSSM\) and \(BSM\)), \(b\) will suffer losses. Nevertheless, the target \(b\) cannot avoid these losses regardless of the strategy he adopts. More specifically, whether the target \(b\) suffers losses is controlled by the strategy of \(o\), rather than by \(b\) himself. Meanwhile, when \(b\) chooses to accept the bribes, \(o\) will suffer losses (event \(0^{\prime}_{b}\)-6 in \(BSSM\) and event \(0^{\prime}_{o}\)-5 in \(BSM\)). We have demonstrated that the optimal strategy for the target \(b\) in \(BSSM\) and \(BSM\) is to accept the bribes and extend the adversary's private branch. Therefore, the adversary can bribe multiple targets simultaneously, which increases the winning probability of \(a\)'s private branch. In this case, multiple targets who accept the bribes may fall into the "bribery miner's dilemma": all targets \(b\) will suffer losses due to accepting the bribes (similar to the "miner's dilemma"). When all targets deny the bribes, they obtain extra reward compared with adopting honest mining. However, no individual target \(b\) would choose to deny the bribes, because accepting the bribes is always a locally optimal strategy for \(b\) at the bribery initiation stage. Therefore, we have a single Nash equilibrium for targets under \(BSSM\) and \(BSM\): all targets will choose to accept the bribes and expand the adversary's branch at the bribery initiation stage.

### The "Bribery Miner's Dilemma" in \(BSSM\)

In \(BSSM\), we consider two targets \(b_{1}\) and \(b_{2}\) with mining power \(\beta^{b}_{1}\) and \(\beta^{b}_{2}\). We set \(\alpha=0.3\), \(\beta^{b}_{1}=0.2\), \(\rho=0.1\) and \(\varepsilon=0.02\). 
We define target \(b_{i}\)'s winning condition as obtaining a higher reward than honest mining (i.e., \(RER_{b_{i}}^{BSSM,H}>0\)). We calculate the rewards of \(b_{1}\), \(b_{2}\), \(a\) and \(o\) respectively in four cases (**(1)** both \(b_{1}\) and \(b_{2}\) accept the bribes; **(2)** \(b_{1}\) accepts the bribes but \(b_{2}\) denies; **(3)** \(b_{2}\) accepts the bribes but \(b_{1}\) denies; **(4)** both \(b_{1}\) and \(b_{2}\) deny the bribes). Figure 15 shows the RER and winning conditions for each target in terms of \(\beta^{b}_{2}\) and \(\gamma\). The left side of the solid line in Fig 15-a represents the winning condition of target \(b_{1}\), and the right side of the solid line in Fig 15-b represents the winning condition of target \(b_{2}\). Fig 15-a indicates that target \(b_{1}\) will obtain extra reward while \(b_{2}\) will suffer losses when \(\beta_{2}^{b}\) is relatively small. Fig 15-b indicates that target \(b_{2}\) will obtain extra reward while \(b_{1}\) will suffer losses when \(\beta_{2}^{b}\) is relatively large. The area between the two solid lines represents the region where both \(b_{1}\) and \(b_{2}\) suffer losses (i.e., they encounter the "bribery miner's dilemma"). The RER of targets \(b_{1}\) and \(b_{2}\) is not greatly affected by \(\gamma\), whether \(\beta_{2}^{b}\) is large or small. This is because a change of \(\gamma\) shifts \(RER_{b_{i}}^{BSSM,H}\) for both targets in the same direction. For the adversary, with proper values of \(\beta_{1}^{b}\), \(\beta_{2}^{b}\), \(\rho\) and \(\varepsilon\), he can obtain extra reward while making targets \(b_{1}\) and \(b_{2}\) fall into the "bribery miner's dilemma". We use a more intuitive example to demonstrate the "bribery miner's dilemma" in \(BSSM\). We set \(\gamma=0\), \(\varepsilon=0.02\) and \(\rho=0.1\), and assume the mining power of \(a\), \(b_{1}\) and \(b_{2}\) is 0.36, 0.29 and 0.27 respectively. The RER of \(b_{1}\) and \(b_{2}\) in four cases is presented in Table 2. For each target \(b_{i}\), the locally optimal strategy is to accept the bribes. However, they will suffer losses compared with denying the bribes. Furthermore, for all targets, the globally optimal strategy is to deny the bribes.

### The "Bribery Miner's Dilemma" in \(BSM\)

Similarly, in \(BSM\), we consider two targets \(b_{1}\) and \(b_{2}\) with mining power \(\beta_{1}^{b}\) and \(\beta_{2}^{b}\). We set \(\alpha=0.3\), \(\beta_{1}^{b}=0.2\) and \(\varepsilon=0.02\). We define target \(b_{i}\)'s winning condition as obtaining a higher reward than honest mining (i.e., \(RER_{b_{i}}^{BSM,H}>0\)). We calculate the rewards of \(b_{1}\), \(b_{2}\), \(a\) and \(o\) respectively in four cases. Figure 16 shows the RER and winning conditions for each target in terms of \(\beta_{2}^{b}\) and \(\gamma\). The left side of the solid line in Fig 16-a represents the winning condition of target \(b_{1}\), and the right side of the solid line in Fig 16-b represents the winning condition of target \(b_{2}\). Fig 16-a indicates that target \(b_{1}\) will obtain extra reward while \(b_{2}\) will suffer losses when \(\beta_{2}^{b}\) is relatively small. Fig 16-b indicates that target \(b_{2}\) will obtain extra reward while \(b_{1}\) will suffer losses when \(\beta_{2}^{b}\) is relatively large. The area between the two solid lines represents the region where both \(b_{1}\) and \(b_{2}\) suffer losses (i.e., they encounter the "bribery miner's dilemma"). We use a more intuitive example to demonstrate the "bribery miner's dilemma" in \(BSM\). 
We set \(\gamma=0\) and \(\varepsilon=0.02\), and assume the mining power of \(a\), \(b_{1}\) and \(b_{2}\) is 0.36, 0.29 and 0.27 respectively. The RER of \(b_{1}\) and \(b_{2}\) in four cases is presented in Table 3. For each target \(b_{i}\), the locally optimal strategy is to accept the bribes. However, they will suffer losses compared with denying the bribes. In general, for all targets, the globally optimal strategy is to deny the bribes.

\begin{table} \begin{tabular}{|c|c|c|} \hline Target \(b_{2}\) \(\backslash\) Target \(b_{1}\) & Accept at bribery initiation stage & Deny at bribery initiation stage \\ \hline Accept at bribery initiation stage & (-0.3746\%, -0.9311\%) & (-6.5856\%, 6.4331\%) \\ \hline Deny at bribery initiation stage & (8.9069\%, -6.6833\%) & (3.1083\%, 1.1604\%) \\ \hline \end{tabular} \end{table} Table 2: RER of targets (\(RER_{b_{i}}^{BSSM,H}\) and \(RER_{b_{i}}^{BSSM^{\prime},H}\)). (\(x\), \(y\)) indicate the RER of target \(b_{1}\) and target \(b_{2}\) respectively; rows give \(b_{2}\)'s strategy and columns give \(b_{1}\)'s strategy.

Figure 16: RER and winning conditions in \(BSM\)

## 8 Discussion

### _Bribery Mining Countermeasures_

We present three countermeasures against bribery mining. First, once forking occurs, miners are supposed to choose the branch that they first detect, while ignoring the branch with conflicting transactions. For instance, when a miner detects the transaction \(T^{A}\), and then a fork with two branches occurs (containing transaction \(T^{A}\) and \(T^{B}\) respectively), he should expand the branch with \(T^{A}\). If all miners follow this mining strategy, bribery mining can be avoided effectively. However, this assumption is not realistic as miners can be selfish (they might choose the other branch with \(T^{B}\) to obtain a higher reward). Nevertheless, the more miners choose to follow this mining strategy, the smaller \(\gamma\) becomes, which indicates lower rewards for adversaries (the winning probability of the adversary's branch decreases). Secondly, if the victims discover bribery mining in the system, they may be willing to spend money on counter-bribery to win the forking competition. In general, any miner who obtains reward on the main chain rather than on the adversary's branch can adopt counter-bribery strategies. Meanwhile, the victims would spend no more than the full value of the transaction \(T^{A}\) to implement counter-bribery measures. Once the adversary wins the competition, the victims will lose the full value of \(T^{A}\). Therefore, the adversaries have to pay the target \(b\) higher bribes, which makes bribery mining unprofitable. Finally, there are potential changes in the role of each miner or pool. The roles of \(a,\ o,\ b\) will constantly change over time. Any miner who obtains short-term profits as the adversary at the current moment may also suffer losses as the victim in the future. Therefore, if the short-term bribery reward harms miners' long-term profit potential, they should be motivated not to accept the bribes.

### _Dynamic Mining Strategy_

In the Bitcoin system, miners may adopt various mining strategies to increase their profits. It is difficult to predict the optimal mining strategy, but we can calculate the RER of each mining strategy. For instance, miners with smaller mining power have an advantage in adopting honest mining, compared with selfish mining. Therefore, rational miners may dynamically adjust their mining strategies in different cases. 
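As an illustration (not part of the original analysis), such a dynamic adjustment can be thought of as picking the candidate strategy with the largest estimated RER for the miner's current situation. The RER estimates below are hypothetical placeholder numbers, not values from the paper.

```python
# Illustrative sketch of dynamic strategy selection for a rational miner.
def choose_strategy(estimated_rer: dict) -> str:
    """Return the strategy whose estimated relative extra reward is largest."""
    return max(estimated_rer, key=estimated_rer.get)

estimated_rer = {"H": 0.0, "SM": -0.05, "BSSM": 0.04, "BSM": 0.01}  # placeholders
print(choose_strategy(estimated_rer))  # -> "BSSM" for these placeholder numbers
```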
## 9 Conclusion

We demonstrate that in PoW-based blockchain cryptocurrencies such as Bitcoin, mining attacks can be combined with bribery mining to further expand malicious mining strategies. In \(BSSM\), adversaries can obtain a relative extra reward of 60% more than honest mining and increase the chain growth rate compared to selfish mining. In \(BSM\), adversaries can gain a 2% relative extra reward over selfish mining. Both of them will make the targets suffer from the "bribery miner's dilemma". For each target, the locally optimal strategy is to accept the bribes. However, they will suffer losses compared with denying the bribes. Furthermore, for all targets, the globally optimal strategy is to deny the bribes. Quantitative analysis and simulation have verified our theoretical analysis. We propose practical measures to mitigate more advanced mining attack strategies based on bribery mining, and provide new ideas for addressing bribery mining attacks in the future. However, how to completely and effectively prevent these attacks still requires further research.
2305.19801
Predicting protein stability changes under multiple amino acid substitutions using equivariant graph neural networks
The accurate prediction of changes in protein stability under multiple amino acid substitutions is essential for realising true in-silico protein re-design. To this purpose, we propose improvements to state-of-the-art Deep learning (DL) protein stability prediction models, enabling first-of-a-kind predictions for variable numbers of amino acid substitutions, on structural representations, by decoupling the atomic and residue scales of protein representations. This was achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic environment (AE) embedding and residue-level scoring tasks. Our AE embedder was used to featurise a residue-level graph, then trained to score mutant stability ($\Delta\Delta G$). To achieve effective training of this predictive EGNN we have leveraged the unprecedented scale of a new high-throughput protein stability experimental data-set, Mega-scale. Finally, we demonstrate the immediately promising results of this procedure, discuss the current shortcomings, and highlight potential future strategies.
Sebastien Boyer, Sam Money-Kyrle, Oliver Bent
2023-05-30T14:48:06Z
http://arxiv.org/abs/2305.19801v1
Predicting protein stability changes under multiple amino acid substitutions using Equivariant Graph Neural Networks ###### Abstract The accurate prediction of changes in protein stability under multiple amino acid substitutions is essential for realising true in-silico protein re-design. To this purpose, we propose improvements to state-of-the-art Deep learning (DL) protein stability prediction models, enabling first-of-a-kind predictions for variable numbers of amino acid substitutions, on structural representations, by decoupling the atomic and residue scales of protein representations. This was achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic environment (AE) embedding and residue-level scoring tasks. Our AE embedder was used to featurise a residue-level graph, then trained to score mutant stability (\(\Delta\Delta G\)). To achieve effective training of this predictive EGNN we have leveraged the unprecedented scale of a new high-throughput protein stability experimental dataset, Mega-scale. Finally, we demonstrate the immediately promising results of this procedure, discuss the current shortcomings, and highlight potential future strategies. ## 1 Introduction Protein stability is a crucial component of protein evolution (Godoy-Ruiz et al., 2004), it lies at the root of our understanding of many human diseases (Peng & Alexov, 2016) and plays a major role in protein design and engineering (Qing et al., 2022). Protein stability is typically represented as the change in free energy, \(\Delta G\), between the unfolded and folded states (Matthews, 1993) and is a global feature of a protein. A negative \(\Delta G\) of folding indicates an energetically favourable protein conformation; the greater the magnitude of a negative \(\Delta G\), the more stable the conformation. Mutations can alter the favourability of a protein fold, with even single amino acid substitution events potentially disturbing the native conformation of a protein (Stefl et al., 2013). For example, a substitution from threonine to methionine in 12/15-Lipoxygenase is a cited potential cause of hereditary cardiovascular diseases (Schurmann et al., 2011); the mutation disrupts a chain of stabilising hydrogen bridges, causing structural instability and reducing catalytic activity. The mutational effect on protein stability is the difference in free energy of folding between the wild type (WT) and mutant proteins, \(\Delta\Delta G\)(Matthews, 1993). Mutagenic effects on protein stability can be determined experimentally using thermostability assays, with \(\Delta\Delta G\) being inferred from differences between WT and mutant denaturation curves (Bommarius et al., 2006). However, these assays are labourious and expensive; to adequately assess mutational effects at a higher throughput rate, researchers have turned to computational methods. The established precedent for computational modelling of mutant stability is empirical physics-informed energy functions, which rely on physical calculations to infer \(\Delta\Delta G\)(Marabotti et al., 2021). For example, Rosetta (Kellogg et al., 2011; Das & Baker, 2008) employs Monte Carlo runs to sample multiple protein conformations and predicts folding free energy from physical characteristics. These characteristics of Lennard-Jones interactions, inferred solvation potentials, hydrogen bonding and electrostatics are common to other packages such as FoldX (Schymkowitz et al., 2005). 
While Molecular Dynamics software, such as Amber (Case et al., 2005), utilises these characteristics in force fields to explore protein conformational landscapes and calculate potential energies by resolving classical physics calculations. These physics-based models can provide scoring for both protein stability or mutation-induced change of protein stability, however, they are still not fully scalable to large data-sets given the computational expense necessary for each simulated prediction. For example, conformation sampling via Monte Carlo simulations in Rosetta requires extensive compute time. On the other hand, machine learning-based predictors and, more recently, Deep learning (DL) approaches have shown improved scalability and, in some cases, comparable accuracy with physics-based models (Iqbal et al., 2021). This work will continue to explore the advantages of an entirely data-driven DL approach for predicting protein stability changes under multiple amino acid substitutions. ## 2 Related Work In moving away from established molecular modelling approaches, machine learning methods EASE-MM (Folkman et al., 2016) and SAAFEC-SEQ (Li et al., 2021) leverage 1D sequences and protein evolutionary information to predict \(\Delta\Delta G\) with decision trees and Support Vector Machines, respectively. While ACDC-NN-Seq (Pancotti et al., 2021) explored utilising DL by applying Convolutional neural networks (CNNs) to protein sequences. As sequence data is more widely available than experimental structures, it is probable that the insight of these models into 3D structural characteristics, such as free energy of folding, is limited by their 1D representation. PON-tstab (Yang et al., 2018) implemented a combination of sequence and structure-based features in tabular format with random forests. DeepDDG (Cao et al., 2019) relies on tabular empirical features obtained from structure, such as solvent-accessible surface area, to predict stability with neural networks. However, tabular features engineered from structure are a restrictive depiction of protein geometry; graph-based approaches provide a promising alternative representation, with encouraging results when applied to protein structure prediction (Delaunay et al., 2022). In particular, three DL models; ThermoNet (Li et al., 2020); RASP Blaabjerg et al. (2022); and ProS-GNN (Wang et al., 2021), have engaged in combining the two physico-scales involved in understanding protein geometry: the atomic scale and the residue scale of interactions. Both ThermoNet and RASP learn a representation of the atomic environment (AE) around the pertinent (mutated) residue using 3D CNNs before passing this representation through a Multi-layer perceptron (MLP) to score the mutational effect on protein stability. While obvious similarities exist between those two models, they are very different at their core. ThermoNet determines the AE representations on the fly, utilising both WT and simulated mutant structures as inputs for the MLP in the same loop. RASP initially trains a self-supervised AE embedder on a masked amino acid task, then uses this embedding as input features for a coupled WT and mutant amino acid encoding to feed a MLP trained on stability scoring. Moreover, ThermoNet is trained on a rather small experimental data-set (n \(\sim 3,500\)), while RASP is trained on a large data-set (n \(\sim 10^{6}\) for the AE embedder and n \(\sim 2.5\times 10^{5}\) for scoring) of Rosetta simulated scores, making it an emulator of the physics-based score. 
The third DL approach, ProS-GNN (Wang et al., 2021), replaced the CNN ThermoNet atomic environment embedding layer with GNNs. ProS-GNN also shares with ThermoNet and other DL models like ACDC-NN the constraint of being antisymmetric to reversed mutation. The aforementioned state-of-the-art stability prediction models in the literature share the following caveats: 1. Their underlying architecture allows only single amino acid substitutions. 2. Big experimental data-sets with the necessary structural data for these models are lacking. Indeed, RASP is constrained to predicting on a fixed number of amino acid substitutions by the MLP scorer, which requires a fixed input shape; additional mutations increase the dimensions of the AE embedding to an incompatible size. In ThermoNet and ProS-GNN, the impossibility of decoupling the atomic and residue scales prohibits multiple amino acid substitutions; the required size of voxel or graph for multiple, even proximal, substitutions would be rapidly unmanageable. A solution for both caveats exists. The self-supervised AE embedder of RASP already decouples the atomic and residue scales, and GNNs allow for some flexibility in graph topology, enabling consideration of multiple residues rather than only the embedding of the residue of interest. Integrating the RASP AE embedder with a graph-based approach would enable scoring of multiple substitution events. On the experimental data front, a new data-set, Mega-scale (Tsuboyama et al., 2022), based on high-throughput protein stability measurements, was published in late 2022. With over 600,000 data points of single and double mutants spanning over 300 WT structures, it provides a consistent (in terms of experimental set-up) and large data-set, with the express purpose of training models to score the effects of single or double mutations on protein stability. In light of these observations, we contribute a JAX-implemented solution for resolving these constraints using two E(3) equivariant graph neural networks (EGNNs) (Garcia Satorras et al., 2021). The first EGNN is trained in a self-supervised way. The second is trained on the Mega-scale data set for scoring mutational effects on protein stability. ## 3 Method ### Atomic Environment (AE) Embedder We followed the RASP protocol to design and train our AE embedder in a self-supervised masked amino acid manner, with two key differences: 1. We used an EGNN (Figure 2) with its own set of graph features describing the AE (Figure 1) instead of a CNN. 2. We used a macro averaged F1 score as our metric on the validation set to select model parameters from the highest-performing epoch. The training and the validation sets are from the same data-set described in RASP (Blaabjerg et al., 2022). Our EGNN was built with layers described in Garcia Satorras et al. (2021), with an average message aggregation strategy (Equation 1). Recalling from Garcia Satorras et al. (2021) that \(\mathbf{h}^{\mathbf{l}}\) are node embeddings at layer \(l\) and \(\mathbf{x}^{\mathbf{l}}\) are coordinate embeddings at layer \(l\) (atoms coordinates), we defined the equivariant graph convolutional layer (EGCL), as they do, up to the use of this \(\frac{1}{N_{i}^{neighbors}}\) coefficient which allows the re-scaling of different messages according to the node of interest's number of neighbors (hence the average). 
As with their implementation, \(\phi_{e},\phi_{x},\phi_{h}\) are MLPs, \(a_{i;j}\) defines edge features between node \(i\) and \(j\), and finally, \(\mathfrak{N}(i)\) is the set of neighbors of node \(i\). \[\mathbf{m}_{i,j} =\phi_{e}(\mathbf{h}^{\mathbf{l}}_{i},\mathbf{h}^{\mathbf{l}}_{j},\left\|\mathbf{x}^ {\mathbf{l}}_{i}-\mathbf{x}^{\mathbf{l}}_{j}\right\|^{2},a_{i,j}) \tag{1}\] \[\mathbf{x}^{\mathbf{l+1}}_{i} =\mathbf{x}^{\mathbf{l}}_{i}+\frac{1}{N_{i}^{neighbors}}\times\sum_{j\neq i,j\in\mathfrak{N}(i)}(\mathbf{x}^{\mathbf{l}}_{i}-\mathbf{x}^{\mathbf{l}}_{j})\phi_{x}(\mathbf{m} _{i,j})\] \[\mathbf{m}_{i} =\frac{1}{N_{i}^{neighbors}}\times\sum_{j\neq i,j\in\mathfrak{N }(i)}\mathbf{m}_{i,j}\] \[\mathbf{h}^{\mathbf{l+1}}_{i} =\phi_{h}(\mathbf{h}^{\mathbf{l}}_{i},\mathbf{m}_{i})\] Node embeddings are passed sequentially through each \(N\) layer of the network. After each layer, node embeddings are copied, aggregated with an average graph level readout (global mean pooling) Equation 2, and saved. Finally, all the graph representations derived from the different layers are concatenated, Equation 2, to form the graph-level embedding \(\mathbf{h}_{\mathbf{G}}\) for the AE sub-graph of a residue \(G\), and processed through an MLP toward the desired prediction shape. \[\mathbf{h}_{\mathbf{G}}=\texttt{Concat}(\texttt{Average}(\{\mathbf{h}^{\mathbf{l}}_{i}|i\in G \})|l=0,...,N) \tag{2}\] For building the AE graph, we followed part of the RASP protocol: * We considered only atoms within a 9A radius of the C\({}_{\alpha}\) of interest. * We removed the atoms that were part of the residue of interest. Nodes are atoms featureised with a single number (atomic number). Edges are drawn between nodes if two nodes are within a 4A distance. Edges are featureised by a binary label distinguishing whether the edge is intra- or inter-residue, as well as 2 numbers encoding a notion of the typical distance between the two atoms linked by these edges: * The sum over the two atoms involved in the edge, of their covalent radius. * The sum over the two atoms involved in the edge, of their Van der Waals radius. In this particular instance of the model, distances between atoms are not directly encoded as an edge feature, but given the use of an EGCL, this distance is present as a distance vector (Equation 1) (rather than the usual scalar distance, hence the necessity of E(3) equivariance) by design. Finally, we trained the model on a classification task consisting of retrieving the amino acid around which the AE has been built. Model parameters were selected from the epoch with the best macro F1 scores on the validation set. A detailed description of the model in terms of its hyper-parameters is provided in the Appendix (Table 6). ### Mutant Stability Scoring We used the same model architecture as presented for the AE embedding (Figure 2), for the regression task of predicting \(\Delta\Delta G\). The set of hyper-parameters differs as described in the Appendix (Table 7). For this task, the graph is built at the residue-level with additional atomic-level features to Figure 1: Definition of the atomic environment (AE) graph. Figure 2: Backbone of the E(3) equivariant graph neural network (EGNN) used for both AE embedding and scoring tasks. The EGNN layer is the EGCL taken from (Garcia Satorras et al., 2021). bridge the gap between the two fundamental scales. Indeed, in this representation nodes are residues represented in terms of their spatial positioning by the residue mean atomic position coordinates. 
Nodes are featureised with the vector output of the previously trained AE embedder. More node features are included with an 11-dimensional representation of the physico-chemical properties of the WT amino acid (Kawashima et al., 2007; Xu et al., 2020), concatenated to the same representation, except for the mutated amino acid nodes. When a particular node is not mutated this concatenation is just twice the WT physico-chemical 11-dimensional representation. At the edge level, an edge is drawn between two nodes if the mean atomic position distance between the two nodes is within 9A. The graph is centered around the mutant residue and residues are added given the distance threshold up to n (here 1) edges away from the mutant nodes. In the case of multiple mutants, we allow the different graphs centered around their mutant node, to be disconnected from each other. Features for the edges follows a similar strategy to the atomic graph: * A single number to stipulate if the two residues are linked by a backbone, bound or not. * Two numbers to provide a specific scale for the distance between two WT residues: the sum of the residue side chain sizes, defined as (i) the maximum distance between the C\({}_{\alpha}\) and any atoms of the residue; (ii) the maximum distance between two atoms from the same residue. * The same two numbers are produced for mutants involved in the edges. When there are no mutants involved then they are just duplicated from the WT numbers. * The C\({}_{\alpha}\)/C\({}_{\alpha}\) distance and the mean atomic position/mean atomic position distance. Finally, to help the training while homogenizing the ranges and variance of the target variable (here the experimental \(\Delta\Delta G\)) we used a Fermi transform, also as described in RASP (Blaabjerg et al., 2022). Our loss function is a simple Root Mean Squared Error, and the best model as well as the best epoch is chosen based on the spearman rank correlation coefficient on the validation set. Figure 3: Definition of the residue sub-graph build around mutated residues. ## 4 Results ### Atomic Environment Embedder With our AE implementation, we reached a macro averaged F1 score of 0.63 on the training set and of 0.61 (accuracy = 0.65) on the validation set, which is comparable to the RASP 0.63 accuracy also on a similar but different validation set (we shuffled the structures), full results in Appendix Table 4. The confusion matrix on the validation set (Figure 9) is also provided in the Appendix and shows a variable but strong ability of the model to match ground truth. ### Mutant Stability Scorer Evaluation metrics for the different splits are available in Figure 10, as well as a description of the Mega-scale data-set in the Appendix: A. Given the unique qualities of the Mega-scale data-set, we decided to evaluate the model in what we believe is a more stringent way than simply looking at the Mega-Scale test split (metrics are provided for the split too). Indeed, the Mega-scale data-set only contains domains and not full proteins, and structures were resolved computationally using AlphaFold (Jumper et al., 2021). The Mega-scale data-set also only contains up to double mutations. Hence, we decided to evaluate our model on a more standard data-set with experimentally resolved entire protein structures: ThermoMutDB (Xavier et al., 2021) (a description of the ThermoMutDB data-set is also provided in the Appendix: A.3). 
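For reference, the pooled metrics reported below can be computed directly from predicted and experimental \(\Delta\Delta G\) values. The sketch below is ours and only illustrative: it assumes NumPy/SciPy, and the arrays are placeholder values rather than the paper's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(pred: np.ndarray, target: np.ndarray) -> dict:
    """Spearman r, Pearson r and RMSE between predicted and experimental ddG."""
    rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
    return {
        "spearman_r": float(spearmanr(pred, target)[0]),
        "pearson_r": float(pearsonr(pred, target)[0]),
        "rmse": rmse,
    }

pred = np.array([0.3, -1.2, 2.5, 0.0])     # placeholder predictions
target = np.array([0.5, -0.8, 3.1, -0.2])  # placeholder experimental ddG
print(evaluate(pred, target))
```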
Over the pooled ThermoMutDB data-set our scorer achieved RMSE = 2.288, Spearman r = 0.311 and Pearson r = 0.251 (Table 1). Interestingly, the model seems to generalise well to structures with more than two mutations (Figure 4), for which it has not been directly trained. Spearman correlation, for example, spans a range between 0.159 and 0.381 for a number of mutations going from 1 to 3. At the level of individual structures (Figure 5), model performance can also vary quite drastically across WTs with at least 100 mutants (for an overview per PDB see Figure 11). Finally, we also compared, for a subset of one-point mutations, our work to RASP (Figure 12). On pooled single mutations our proposed approach performs significantly worse (Pearson r RASP = 0.53; Pearson r for this work = 0.42). Yet our approach outperforms RASP for some PDBs, and suffers from the same drawbacks as RASP on some proteins; structures for which RASP poorly predicts mutational effect are also, with a few exceptions, poorly predicted by our method. Yet, overall our performance is still significant, even more so when put in perspective with the fact that RASP regression is an ensemble model.

Figure 4: Evaluation of our scorer for differing numbers of mutations. Purple markers for Spearman rank correlation p-value\(<\)0.05 else orange. Marker size is proportional to the number of mutations.

## 5 Discussion

These preliminary results show that the combination of decoupling the atomic and residue scales with the usage of an EGNN architecture, to allow flexibility in the number of mutations accessible to score, is promising. In realising this exploratory work we faced two main challenges: 1. The scorer had a tendency to over-fit the Mega-scale data-set. 2. The current choice of a threshold for the residue graph is constrained. We ended up choosing 9A, where depending on the residues, a typical length for such interactions could go to 16A or more (for two tryptophans, given the max distance between their own atoms). But such a threshold would lead to a hyper-connected graph that would hinder the training. 
Training on such a data-set which will, for example, not include certain types of interactions, such as inter-domain interactions, as well as not containing the inherent real noise of protein structure prediction, is advantageous for its consistency and an inconvenience for its representational inaccuracies when compared to more "realistic" data-sets. In terms of computational performance, as we are using GNNs, we recognize that we lose an important aspect of the RASP model which is Rapid by naming. Yet, as the most time-consuming part is the construction of the residue sub-graph (roughly 5 seconds for subgraphs of less than 96 nodes with 8 CPUs, an A100 GPU and vectorization/jit features within JAX) saving it once and slightly modifying it to include the specific mutations later on, makes the model very efficient at assessing scores for multiple combinations of mutants within a pre-defined set of positions. Finally, since we decoupled the atomic and the residue scales, it is now possible to swap elements from other successful models: for example ThermoNet. This exposes a new bottleneck, or rather a new further challenge, as it implies the creation of a new data-set including structures for each mutant present in the Mega-scale data-set. That would also have been the case if one wanted to include anti-symmetry properties within the model. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{2-7} \multicolumn{1}{c|}{} & **Training Set** & **Validation Set** & **Test Set** & **ThermoMutDB** \\ \hline **Spearman r** & 0.754 & 0.518 & 0.442 & 0.311 \\ \hline **Pearson r** & 0.758 & 0.562 & 0.412 & 0.251 \\ \hline **RMSE** & 0.794 & 0.740 & 0.935 & 2.288 \\ \hline \end{tabular} \end{table} Table 1: Evaluation metrics for the \(\Delta\Delta G\) scorer \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **Number of mutations** & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **9** \\ \hline **Spearman r** & 0.349 & 0.159 & 0.381 & 0.012 & 0.271 & 0.714 & -0.500 & 1.000 \\ \hline **Pearson r** & 0.342 & 0.077 & 0.346 & 0.079 & 0.202 & 0.613 & -0.242 & 1.000 \\ \hline **RMSE** & 2.109 & 2.571 & 2.378 & 1.972 & 2.448 & 1.965 & 1.588 & 0.810 \\ \hline \end{tabular} \end{table} Table 2: Metrics for our scorer across different numbers of mutations. ## 6 Conclusion In this work, we explored the possibility of using graph neural network models for scoring multiple substitution effects on protein stability. Our approach, based on the decoupling of atomic and residue scales by successively training two different scale-specific E-GNN models on massive experimental data-sets, shows promising results. Indeed, the model demonstrates an ability to predict effects of a variable number of mutations, even beyond what it has been trained on. Yet some key parameters of this modelling still need to be better understood; for example, a biologically reasonable edge distance threshold and an overall more appropriate way to handle connectivity in the created residue sub-graph. Figure 5: Evaluation of our scorer on individual structures, PDBs Burley et al. (2017), chosen with at least 100 occurrences in the ThermoMutDB test data-set. All four structures have a significant prediction correlation (p-value\(<\)0.05) and the marker size is proportional to the number of mutations in the experiment. Further results breakdown in Table 3. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **PDB ID** & **1BNI** & **1STN** & **1VQB** & **1RX4** \\ \hline **Spearman r** & 0.503 & 0.462 & 0.519 & 0.479 \\ \hline **Pearson r** & 0.456 & 0.456 & 0.523 & 0.453 \\ \hline **RMSE** & 1.651 & 1.632 & 2.526 & 1.193 \\ \hline \end{tabular} \end{table} Table 3: Scorer performance metrics on proteins with over 100 data points, as shown in Figure 5.
2302.03626
Control of vortex chirality in a symmetric ferromagnetic ring using ferromagnetic nanoelement
Controlling the vortex chirality in ferromagnetic nanodots and nanorings has been a topic of investigation for the last few years. Many control methods have been proposed and it has been found that the control is related to the breaking of the circular symmetry. In this paper, we present a theoretical study demonstrating the control of chirality in ferromagnetic nanoring without directly breaking its symmetry, but instead by placing elongated ferromagnetic nanoelement inside the ring, Here, the stray magnetostatic field exerted by the asymmetrically placed nanoelement determines the movement of the domain walls upon remagnetization of the nanoring and the resulting chirality in the remanence. This approach allows the chirality of the vortex state to be controlled and also promises its control in a dense array of nanorings, thus suitable for spintronic and magnonic applications.
Uladzislau Makartsou, Mathieu Moalic, Mateusz Zelent, Michal Mruczkiewicz, Maciej Krawczyk
2023-02-07T17:31:39Z
http://arxiv.org/abs/2302.03626v1
# Control of vortex chirality in a symmetric ferromagnetic ring using ferromagnetic nanoelement ###### Abstract Controlling the vortex chirality in ferromagnetic nanodots and nanorings has been a topic of investigation for the last few years. Many control methods have been proposed and it has been found that the control is related to the breaking of the circular symmetry. In this paper, we present a theoretical study demonstrating the control of chirality in ferromagnetic nanoring without directly breaking its symmetry, but instead by placing elongated ferromagnetic nanoelement inside the ring. Here, the stray magnetostatic field exerted by the asymmetrically placed nanoelement determines the movement of the domain walls upon remagnetization of the nanoring and the resulting chirality in the remanence. This approach allows the chirality of the vortex state to be controlled and also promises its control in a dense array of nanorings, thus suitable for spintronic and magnonic applications. ## I Introduction An advantage of the soft ferromagnetic disks and rings over the monodomain nanoparticles for the development of magnetic memory and reprogrammable logic devices is vortex chirality.[1; 2; 3; 4; 5] The key factor determining the chirality of ferromagnetic nanorings (NRs) is the control of domain wall (DW) motion [6; 7]. Understanding and control of the transition process from an onion state (OS) to a vortex state (VS) opens opportunities not only in the study and development of applications based on the static magnetization configuration and dynamic properties related to the chirality of NRs [8; 9; 10] but may also contribute to devices based on DW dynamics in arrays of NRs [11; 12]. The transition from the OS to the VS can be induced by reducing the in-plane magnetic field. This results in moving head-to-head (HTH) or tail-to-tail (TTT) DWs perpendicular to the field direction towards either the left or right side of the ring. The direction of DWs movement is spontaneous and random, leading to the NR obtaining a flux-closure state with either clockwise (CW) or counterclockwise (CCW) chirality. This indicates that the DW motion and vortex chirality control presented in our paper can be applied to a wide variety of spintronic and magnonic applications. Chirality control usually involves breaking the circular symmetry of the NR, as a result of which the direction of movement of the DWs when interacting with an external magnetic field is determined by the energy difference between the asymmetric sides of the ring [13; 14]. For instance, as the asymmetry is introduced as a de-centered ring, i.e., one side of the ring has a larger width than the other, the direction of the external field applied in the plane parallel to these sides, determines the VS chirality [15; 16; 17]. Similarly to VS in full disks, deformation of edges, like notches, also allows to control chirality by causing energy splitting under in-plane applied magnetic fields [18; 5; 19]. Our approach proposes abandoning the direct change in a ring symmetry in favour of its altering by the magnetization of an asymmetrically located single-domain ferromagnetic nanoelement (NE) with high shape anisotropy, placed inside the NR. Its interaction with HTH-TTT DWs can provide the desired control of vortex chirality. The NE must have the following properties in order to prevent its magnetization from changing by external magnetic field and stray fields produced by NR: sufficiently strong anisotropy and sufficiently high magnetic moment. 
In numerical simulations, we found that an NE having a shape of ring segment and located asymmetrically within the NR would be most suitable. Chirality control by magnetostatic coupling with nanomagnets in ferromagnetic nanodisks has recently been implemented by placing rhombus elements near the disk edges [20]. Our approach adds additional flexibility to the process of controlling chirality and motion of DWs. Placing NE inside the ring can allow to create a dense array of NRs [21], and even an array of overlapping rings [12], with controlled chirality. This is important in the context of interconnected rings, which have recently been demonstrated to perform well in terms of reservoir computing system [12; 22]. Geometry and simulation method We study an isolated soft ferromagnetic Fe nanoring with an inner diameter \(d_{\rm in}=500\) nm, an outer diameter \(d_{\rm out}=800\) nm and thickness \(t=80\) nm. Such dimensions allow for VS stabilization at remanence. To control the magnetization chirality we placed the NE, made also from Fe, inside the ring at distance of 25 nm from the inner wall of the NR. The shape of the NE can be regarded as a part of an NR with a width of 25 nm and sharp ends. The structure under investigation is shown in Fig. 1(a). The shape of the NE gives shape anisotropy, with switching field around 126 mT, which is higher than the coercive field of the NR at 100 mT. In this paper, we compare 3 variants of the system: (1) non-NE configuration, i.e., the reference system, NR without the NE, (2) the parallel configuration with the external magnetic field parallel to the magnetization of the NE, and lastly (3) the anti-parallel configuration with the external field opposite to the magnetization of the NE. All simulations were carried out using MuMax3, a GPU accelerated micromagnetic simulation program [23]. To implement the system in the simulations, we discretized it with \(512\times 512\times 7\) cells with cell size \(\approx 1.57\times 1.57\times 11.42\) nm\({}^{3}\) for a total size of \(805\times 805\times 80\) nm\({}^{3}\) along the \(x\), \(y\) and \(z\) axes, respectively. Because the magnetization is not uniform throughout the thickness and it would not be practical to show all 7 layers used in simulations, we will present in figure only the average magnetization across the thickness rather than a selected \(z\)-layer. We used magnetic parameters from the experimental paper of Miyawaki, et al. [24]. These are: the saturation magnetization \(M_{\rm S}\)= 1600 kA/m, the uni-axial magnetocrystalline anisotropy along the \(z\)-axis (out-of-plane direction) with constant \(K\) = 47 kJ/m\({}^{3}\), and the exchange stiffness constant \(A_{\rm ex}\) = 21 pJ/m.[24] Using a Voronoi tessellation (see, Fig. 1(b)), we have added magnetic grains in the NR to show that our results are robust even for imperfect materials. The grains are 20 nm on average and each of them has a random value of \(M_{\rm S}\) obtained from a normal distribution of mean 1600 kA/m, and with a standard deviation \(\sigma_{M_{\rm S}}\) of 2% (32 kA/m). We also reduced the exchange coupling between grains uniformly by 5% as compared to \(A_{\rm ex}\) value. The introduction of this inhomogeneity results in a fully deterministic simulations for non-NE configuration, always leading to a single VS with CW or CCW chirality, for a given pseudo-random number defining grains and distribution of \(M_{\rm S}\) among the grains. The simulations for the statistical analysis (shown in Fig. 2) for the parallel [Fig. 
2(b)] and non-NE [Fig. 2(a)] configurations were run as follows. First, we apply a global external magnetic field \(B_{\rm ext}\) of 1 T along the \(y\)-axis, which saturates the NR and the NE if present; then we decrease \(B_{\rm ext}\) in 1 mT steps until we reach 0 mT. For every step, we use the conjugate gradient method to find the ground state of the magnetization. For the antiparallel configuration [Fig. 2(c)], we could not apply such a strong external field as it would remagnetize the NE into the parallel configuration. So here, we start from \(B_{\rm ext}\) = 100 mT, which is sufficient to maintain the OS for the NR, but not enough to switch the NE. In this case, we also had to set the initial NE magnetization in the opposite direction to the external field to achieve the desired configuration. Then, the system was relaxed and finally we demagnetized the rings to 0 mT in 1 mT steps. ## III Results and discussion #### III.1 Chirality control demonstration To show that the NE determines the final magnetization state of the NR, we conducted a statistical analysis of the remagnetization of the NR with decreasing external magnetic field according to the procedure described in Sec. II. We ran 100 simulations for 3 configurations: the non-NE configuration [Fig. 2(a)], parallel [Fig. 2(b)] and anti-parallel [Fig. 2(c)]. For each simulation, we used a different random seed, which resulted in a different organisation of the grain structure. In Fig. 2(a) we observe that the CW and CCW states are obtained in 60% and 40% of the cases, respectively. The statistics change significantly when we introduce the NE. From Fig. 2(b) and (c) we see that we have full control of the VS chirality at remanence by the magnetization orientation of the NE: 100% of the simulations show a CCW configuration with an NE magnetized parallel to \(B_{\rm ext}\), and 100% of the simulations show a CW configuration for the opposite case. Figures 2(d) and (e) show the effect of the saturation magnetization of the NE (\(M_{\rm S,NE}\)) on the chirality control for the parallel and anti-parallel configurations, respectively. We decreased the value of \(M_{\rm S,NE}\) from 1600 kA/m to 1200, 800 and 400 kA/m while keeping the value of \(M_{\rm S}\) in the NR unchanged, i.e., at \(M_{\rm S}=1600\pm 5\%\) kA/m. Figure 1: (a) Schematic representation of the system under consideration: the ferromagnetic nanoelement (NE) inside the ferromagnetic ring (NR). (b) The grain structure representation used in the simulations. We ran 100 simulations in each case, recording the final states. For both configurations we observe that lowering \(M_{\rm S,NE}\) reduces the degree of chirality control. This indicates that to control the VS chirality the NE needs to have a sufficiently strong magnetic moment, which can be guaranteed by a sufficiently large \(M_{\rm S}\) or volume of the NE. Interestingly, for the anti-parallel configuration shown in Fig. 2(e), the dependence is nonmonotonic. When the magnetization of the NE is set to 800 kA/m, the NE switches its magnetization to be parallel to the nearest part of the NR during the remagnetization process, resulting in the loss of chirality control. We observe that 40% and 60% of the VSs are CW and CCW, respectively (similarly to the non-NE configuration). However, when the magnetization of the NE is set to 400 kA/m, the in-plane magnetization changes to an out-of-plane magnetization. This is due to a perpendicular magnetic anisotropy whose influence is enhanced with reduced \(M_{\rm S}\). This process needs a separate investigation, which is beyond the scope of this paper. 
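As a rough, self-contained illustration of the grain structure described in Sec. II, the following Python sketch builds a Voronoi-like grain map with a normally distributed saturation magnetization per grain (mean 1600 kA/m, 2% spread). It is only an illustrative stand-in: the actual simulations use MuMax3's own facilities, and the seeding and array layout below are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

nx, ny = 512, 512            # in-plane discretization used in the paper
cell = 1.57e-9               # in-plane cell size, ~1.57 nm
grain_size = 20e-9           # average grain diameter, ~20 nm

# One Voronoi seed per average grain area.
lx, ly = nx * cell, ny * cell
n_grains = int(lx * ly / (np.pi * (grain_size / 2) ** 2))
seeds = rng.uniform((0.0, 0.0), (lx, ly), size=(n_grains, 2))

# Assign every cell centre to its nearest seed: the Voronoi regions are the grains.
x = (np.arange(nx) + 0.5) * cell
y = (np.arange(ny) + 0.5) * cell
xx, yy = np.meshgrid(x, y, indexing="ij")
centres = np.column_stack([xx.ravel(), yy.ravel()])
grain_of_cell = cKDTree(seeds).query(centres)[1].reshape(nx, ny)

# Per-grain saturation magnetization: mean 1600 kA/m with a 2% (32 kA/m) spread.
ms_per_grain = rng.normal(1.6e6, 0.032e6, size=n_grains)     # A/m
ms_map = ms_per_grain[grain_of_cell]                         # one M_S value per cell
```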
Figures 2(f) and (g) show the statistics of the chirality in dependence on the \(\sigma_{M_{\rm S}}\) distribution in the grains, which was determined using the Voronoi tessellation. For both NE configurations, the degree of chirality control decreases as \(\sigma_{M_{\rm S}}\) increases. However, for the antiparallel configuration this decrease is much slower than for the parallel configuration. Figs. 2(h) and (i) show the statistics in dependence on the separation between the NE and the NR. With increasing distance, the effect of chirality control decreases for both configurations, but up to 150 nm we still retain over 80% control. This indicates that by selecting the position of the NE relative to the edge of the NR, we can tune the occurrence of chirality of a given type to a given probability. The results demonstrate that we have a stable, systematic control of the VS chirality using the NE during the remagnetization process. To elucidate the chirality control mechanisms, we performed a detailed hysteresis loop analysis. #### III.2 Remagnetization procedure The NR and the NE, made of the same material, exhibit different values of the switching field due to differences in shape anisotropy. This means that the magnetization reversal process of the NE will occur at different magnetic fields compared to the NR [25]. Furthermore, it is important to note that the coupled NR-NE system significantly alters the value of the NE switching field: it changes from \(H_{\rm S}\) = 94 mT to \(H_{\rm S}\) = 126 mT. We will analyse the hysteresis loop in the three scenarios related to the three configurations considered above. In the first scenario, we consider the NR in the absence of the NE (non-NE configuration). The hysteresis loop is shown in Fig. 3(a) with a blue solid line. We start the simulations by applying a large field of -2000 mT along the -\(y\) direction, decreasing its magnitude to 0 mT, and then increasing it to the ring saturation in the opposite direction, i.e., 2000 mT. This process is then repeated by reversing the direction of the field. The HTH and TTT DWs appear when the magnitude of the field is less than 400 mT. As the field is decreased further, the DWs start to deform. The magnetization structures at 110 and 25 mT are shown in Fig. 3(b) and (c), respectively. The remagnetization of the right part of the ring occurs at -24 mT, and a CCW state stabilises at remanence, as shown in Fig. 3(d). However, the chirality of the ring depends on the random distribution of the grains, as shown in Fig. 2(a). Thus, for 
other grain distributions, a CW state may also stabilize. Figure 2: (a) The statistics of the 100 micromagnetic simulation results for the NR without the NE (non-NE configuration) with different random distributions of parameters among the grains. (b) and (c) The statistics of the micromagnetic simulations with the same grain realizations but for the NR with the NE in the parallel and anti-parallel configurations, respectively. At the bottom the static magnetization configurations at remanence are shown. Here, the color map indicates the vector orientation, where hue indicates the orientation of the magnetization according to the diagram in the right-bottom corner. (d), (e) Statistics of the chirality control in dependence on the decrease of the saturation magnetization of the NE. (f), (g) Chirality control depending on the increase in the distribution of the saturation magnetization in the grains. (h), (i) Chirality control in dependence on the separation between the NE and the inner edge of the NR. The second scenario is for the NR-NE system in the parallel configuration; we also start with full saturation at -2000 mT. The hysteresis loop is shown in Fig. 3(a) with a yellow dashed line, and it develops similarly to the previous scenario. Fig. 3(e) shows the magnetization of the NR and NE at 110 mT. Fig. 3(f) shows the magnetization of the system just before switching from an OS to a VS at -30 mT. This switching occurs at a higher field than for the non-NE configuration. In this scenario, the NE is located on the right side of the NR, leading to a CCW magnetization chirality at remanence, as shown in Fig. 3(g). Importantly, the final state in this scenario does not depend on the random distribution of the grains, as demonstrated in Fig. 2(b). Starting from a positive value of the external magnetic field, i.e., \(B_{\text{ext}}=2000\) mT, we always reach a CCW chirality at remanence. In the third scenario, we simulate a truncated hysteresis loop for the NR-NE system that allows for an antiparallel configuration, where the magnetization of the NE and the nearest side of the NR are aligned in opposite directions. We begin at an external magnetic field of -2000 mT along the \(y\)-direction and follow the main loop, decreasing the magnitude of the field as in the previous scenario. However, in this simulation, we interrupt the process at 110 mT [Fig. 3(h)], when the magnetization of the NE and the nearest side of the NR are antiparallel. Then, we reverse the direction of the changes in the external magnetic field. Fig. 3(i) shows the magnetization at 33 mT, just before the demagnetization of the NR. The magnetization switches in the part of the ring closest to the NE, establishing a CW chirality in the ring, as shown in Fig. 3(j) and corresponding to Fig. 2(c). The results show that, regardless of the direction of the external magnetic field at the starting point of the field change, parallel or antiparallel to the direction of magnetization in the NE, the direction of magnetization of the NE is the same as the direction of magnetization of the nearest side of the NR at remanence, which determines the VS chirality. A further conclusion is that the chirality control takes place at fields close to the switching field, when one of the vertical parts of the ring changes its magnetization orientation as a result of DW motion in a defined direction. This is clearly visible in the movies showing the remagnetisation of the NR provided in the Supplementary Material. Thus, the chirality is determined by the direction of movement of the DWs to the left or right part of the ring from the vertical symmetry axis, which is initiated by the magnetostatic interactions between the NR and the NE. #### III.3 Discussion The stray field produced by the NE interacts with the DWs and changes the internal field in the NR, thus introducing an additional element to the system that controls the direction of DW propagation. In Fig. 4 we schematically present the DW changes with decreasing magnetic field in the 3 scenarios of remagnetization discussed in the previous sections. In the non-NE configuration, Figs. 4(a) and (b), we can have two equivalent final configurations, CCW and CW, respectively. In State 1 the DWs are in the HTH and TTT configuration. State 2 shows the DW positions at a decreased field, where the DWs are placed with small tilts to the external magnetic field. 
The motion of the DWs to the left or to the right is fully equivalent for a perfectly symmetric NR, which leads to an uncontrolled VS chirality. States 3 and 4 show the process of annihilation of the DWs and the final state, CCW or CW, respectively. Fig. 4(c-d) schematically illustrates the stages of the DW evolution during the remagnetization process for both the parallel (c) and antiparallel (d) configurations. The process starts from State 1, where the ring has an onion state with HTH-TTT DWs in line with the external magnetic field. Figure 3: (a) The simulations of the hysteresis loop, where: for the non-NE and parallel configurations, the simulations start at full saturation and run from -2000 mT to 2000 mT and, in the opposite direction, from 2000 mT to -2000 mT; for the antiparallel configuration, the simulations start at full saturation at -2000 mT, are interrupted at 110 mT, and then run from 110 mT to -2000 mT. Magnetization configuration at selected magnetic fields: (b),(c),(d) – in the non-NE configuration, (e),(f),(g) – parallel configuration, (h),(i),(j) – antiparallel configuration. The colormap for the magnetization orientation is the same as that shown in Fig. 2. In State 2, the DWs begin to move, influenced by the magnetization of the NE. The NE induces a stray field, resulting in an asymmetrical distribution of the effective field between the left and right parts of the NR, as shown in Fig. 4(e) and Fig. S2(a-b) in the Supplementary Material. For the parallel configuration of the NE and NR, the stray field from the NE (left part of Fig. 4(e)) pushes the DWs to the left side of the NR (State 3), ending with a CCW chirality at remanence (State 4). For the antiparallel configuration (right part of Fig. 4(e)), the stray field from the NE pushes the DWs to the right arm of the NR, causing the DWs to annihilate on the right side (State 3) and resulting in a CW chirality at remanence (State 4). As described above, we break the circular symmetry of the magnetic system by introducing the NE, which creates a difference in the effective field distributions between the left and right parts of the NR and determines the direction of the DW motion (see also Supplementary Materials) [2; 26]. Thus, the stray field produced by the NE determines the direction of the torque exerted on the DWs in the NR. To measure the effect of the NE on the NR, we extract the magnetic torque from the micromagnetic simulations for all the cases presented above. We use the torque quantity defined in MuMax3, which returns the Cartesian components of the magnetic torque in units of T. The torque is saved immediately after the change in the external magnetic field, before starting the relaxation procedure. We transform the torque for each discretization cell from the Cartesian to the 2D polar coordinate system. To obtain a single measure, we averaged the azimuthal component of the torque field across all spatial dimensions of the NR, resulting in a scalar, \(\overline{\tau}_{\varphi}\), indicating whether the torque generated by the NE causes the NR's magnetization to rotate CW (\(\overline{\tau}_{\varphi}<0\)) or CCW (\(\overline{\tau}_{\varphi}>0\)). In Fig. 5 (left parts) we present \(\overline{\tau}_{\varphi}\) in dependence on the magnetic field for the configurations considered above. For the simulations of curves (1), (2), and (4) we import the magnetization texture for a selected seed from one of our previous analyses of the non-NE configuration at a field of 100 mT [Fig. 2(a)], which ends with a CCW VS. 
Then, we manually set the NE inside the ring with anti-parallel [curve (1)] and parallel [curve (2)] magnetization, and perform simulations with decreasing magnetic field, extracting \(\overline{\tau}_{\varphi}\) at each field step. Simulation (3), in the non-NE configuration, was performed for another random seed to show the remagnetization of the NR to the CW state. We see that the sign of \(\overline{\tau}_{\varphi}\) at fields just before the switching field indicates the VS chirality at remanence in all presented configurations. Moreover, for case (1), where the magnetization orientation of the NE was artificially reversed at the starting field of 100 mT, \(\overline{\tau}_{\varphi}\) changes sign from its positive value at smaller fields and ends with the negative values expected for CW chirality in this configuration. ## IV Conclusions In summary, we have demonstrated with micromagnetic simulations the systematic control of the vortex chirality in a symmetric ferromagnetic ring by a ferromagnetic nanoelement placed inside the ring. The NE, by exerting a magnetostatic stray field, changes the symmetry of the HTH-TTT DWs in the onion state and, during the remagnetization process, determines the direction of the DW movement and finally the VS chirality at remanence. To control the chirality, the NE requires a sufficiently strong shape anisotropy to keep a monodomain state and to have a switching field higher than the switching field of the NR (without the NE). In addition, the NE should have a sufficiently large magnetic moment (through a large saturation magnetization or volume) to create a stray magnetostatic field that will determine the direction of the DW motion. We show that this can be achieved by making the NE of the same material, with a shape similar to a part of the inner side of the ring, which simplifies the eventual fabrication process. In addition, we demonstrated the resistance of this method to the variability of the geometric and material parameters of the system and to random NR disturbances. All this makes the experimental implementation of the proposed system possible using existing technologies and makes it useful for spintronic and magnonic applications. Figure 4: Schematic representation of the remagnetization process in a ring. Starting from the onion state (State 1), with decreasing magnetic field (applied vertically) the DWs move to the right or left (State 2), determining the VS chirality at remanence (State 4) via annihilation at the switching field (State 3). The chirality of the VS (State 4) is not controlled in the non-NE case (a), (b). With the NE magnetization oriented parallel (c) or antiparallel (d) to the external magnetic field, at remanence the ring has a determined chirality, CCW and CW, respectively. (e) Schematic representation of the effect of the magnetostatic stray field from the NE on the DWs in the NR. ###### Acknowledgements. The research that led to these results has received funding from the National Science Center of Poland, project no. 2020/37/B/ST3/03936. MM acknowledges funding from the Slovak Grant Agency APVV (grant number APVV-19-0311(RSWFA)) and support by the Research & Innovation Operational Programme funded by the ERDF, ITMS project code 313021T081. The simulations were partially performed at the Poznan Supercomputing and Networking Center (Grant No. 398).
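As a side note to the torque analysis of Sec. III, the sketch below shows one way to reduce a Cartesian torque field to the scalar \(\overline{\tau}_{\varphi}\) (average azimuthal component) used above; the array shapes and names are our assumptions, not the actual post-processing code.

```python
import numpy as np

def mean_azimuthal_torque(torque, x, y):
    """Average azimuthal torque over the ring cells.

    torque : array of shape (..., 3) with Cartesian torque components per cell
    x, y   : cell coordinates (same leading shape), measured from the ring centre
    Returns a scalar: > 0 suggests a CCW rotation of the NR magnetization, < 0 a CW one.
    """
    phi = np.arctan2(y, x)
    # the azimuthal unit vector is (-sin(phi), cos(phi)); project the in-plane torque onto it
    tau_phi = -np.sin(phi) * torque[..., 0] + np.cos(phi) * torque[..., 1]
    return tau_phi.mean()
```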
2306.00800
FigGen: Text to Scientific Figure Generation
The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art. Recent techniques have shown impressive potential in creating complex visual compositions while delivering impressive realism and quality. However, state-of-the-art methods have been focusing on the narrow domain of natural images, while other distributions remain unexplored. In this paper, we introduce the problem of text-to-figure generation, that is creating scientific figures of papers from text descriptions. We present FigGen, a diffusion-based approach for text-to-figure as well as the main challenges of the proposed task. Code and models are available at https://github.com/joanrod/figure-diffusion
Juan A Rodriguez, David Vazquez, Issam Laradji, Marco Pedersoli, Pau Rodriguez
2023-06-01T15:28:41Z
http://arxiv.org/abs/2306.00800v3
# FigGen: Text to Scientific Figure Generation ###### Abstract The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art. Recent techniques have shown impressive potential in creating complex visual compositions while delivering impressive realism and quality. However, state-of-the-art methods have been focusing on the narrow domain of natural images, while other distributions remain unexplored. In this paper, we introduce the problem of text-to-figure generation, that is creating scientific figures of papers from text descriptions. We present FigGen, a diffusion-based approach for text-to-figure as well as the main challenges of the proposed task. Code and models are available at [https://github.com/joanrod/figure-diffusion](https://github.com/joanrod/figure-diffusion). ## 1 Introduction Scientific figure generation is an important aspect of research, as it helps to communicate findings in a concise and accessible way. The automatic generation of figures presents numerous advantages for researchers, such as savings in time and effort by utilizing the generated figures as a starting point, instead of investing resources in designing figures from scratch. Making visually appealing and understandable diagrams would allow accessibility for a wider audience. Furthermore, exploring the generative capabilities of models in the domain of discrete graphics would be of high interest. Generating figures can be a challenging task, as it involves representing complex relationships between discrete components such as boxes, arrows, and text, to name a few. Unlike natural images, concepts inside figures may have diverse representations and require a fine-grained understanding. For instance, generating a diagram of a neural network presents an ill-posed problem with high variance, as it can be represented by a simple box or an unfolded representation of its internal structure. Human understanding of figures largely relies on the text rendered within the image, as well as the support of text explanations from the paper written in technical language. By training a generative model on a large dataset of paper-figure pairs, we aim to capture the relationships between the components of a figure and the corresponding text in the paper. Dealing with variable lengths and highly technical text descriptions, different diagram styles, image aspect ratios, and text rendering fonts, sizes, and orientations are some of the challenges of this problem. Inspired by impressive results in text-to-image, we explore diffusion models to generate scientific figures. Our contributions are i) introduce the task of text-to-figure generation and ii) propose FigGen, a latent diffusion model that generates scientific figures from text captions. **Related work.** Deep learning has emerged as a powerful tool for conditional image generation (Ramesh et al., 2022; Saharia et al., 2022; Balaji et al., 2022), thanks to advances in techniques such as GANs (Goodfellow et al., 2014; Karras et al., 2019, 2021) and Diffusion (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020). In the domain of scientific figures, Rodriguez et al. (2023) presented Paper2Fig100k, a large dataset of paper-figure pairs. In this work, we aim to explore diffusion models applied to the task of text-to-figure generation and analyze its challenges. ## 2 Method and Experiments We train a latent diffusion model (Rombach et al., 2021) from scratch. 
First, we learn an image autoencoder that projects images into a compressed latent representation. The image encoder uses a KL loss and OCR perceptual loss (Rodriguez et al., 2023). The text encoder used for conditioning is learned end-to-end during the training of the diffusion model. The diffusion model interacts directly in the latent space and performs a forward schedule of data corruption while simultaneously learning to revert the process through a time and text conditional denoising U-Net (Ronneberger et al., 2015) (see Appendix A.1 for details). We use Paper2Fig100k, composed of figure-text pairs from research papers. It consists of \(81,194\) samples for training and \(21,259\) for validation. **Experimental results.** During generation, we use DDIM (Song et al., 2020a) sampler with \(200\) steps and generate \(~{}12,000\) samples for each model to compute FID, IS, KID (Regenwetter et al., 2023), and OCR-SIM1. We use classifier-free guidance (CFG) to test super-conditioning (Ho and Salimans, 2022). Table 1 presents results of different text encoders, and Figure 1 shows generated samples of FigGenBase. We find that the large text encoder offers the best results and that we can improve conditional generation by increasing the CFG scale. Although qualitative samples do not present sufficient quality to solve the task, FigGen has learned interesting relationships between texts and figures such as the difference between plots and architectures (see also Appendix A.3). Footnote 1: [https://github.com/joanrod/ocr-vqgan](https://github.com/joanrod/ocr-vqgan) ## 3 Conclusion In this paper, we introduce the task of text-to-figure generation and define FigGen, a latent diffusion model that we train on the Paper2Fig100k dataset. Our experiments show that FigGen is able to learn relationships between figures and texts and generate images that fit the distribution. However, these generations are not ready to be useful for researchers. One of the main challenges to solve is the variability in text and images, and how to better align both modalities. Also, future work must design validation metrics and loss functions for generative models of discrete objects. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Model & Text encoder & Parameters & CFG & FID\(\downarrow\) & IS\(\uparrow\) & KID\(\downarrow\) & OCR-SIM\(\downarrow\) \\ \hline FigGenBase & Bert (8 layers) & 866M & 1.0 & 302.46 & 1.04 & 0.32 & 5.97 \\ FigGenBase & Bert (8 layers) & 866M & 5.0 & 282.32 & **1.09** & **0.29** & 5.89 \\ FigGenBase & Bert (8 layers) & 866M & 10.0 & 284.12 & 1.08 & **0.29** & 5.83 \\ \hline FigGenBase & Bert (32 layers) & 942M & 1.0 & 308.58 & 1.03 & 0.32 & 5.95 \\ FigGenBase & Bert (32 layers) & 942M & 5.0 & 298.98 & 1.06 & 0.31 & 5.91 \\ FigGenBase & Bert (32 layers) & 942M & 10.0 & 301.10 & 1.06 & 0.31 & 5.86 \\ \hline FigGenLarge & Bert (128 layers) & 1.2B & 1.0 & 302.99 & 1.04 & 0.32 & 6.08 \\ FigGenLarge & Bert (128 layers) & 1.2B & 5.0 & **281.25** & **1.09** & **0.29** & **5.74** \\ FigGenLarge & Bert (128 layers) & 1.2B & 10.0 & 288.02 & **1.09** & **0.29** & 5.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Main quantitative results of our text to figure generation models. Figure 1: Samples generated by our model using captions from Paper2Fig100k test set. **Ethics statement.** A central concern of this work is fake paper generation. To address this, we consider building classifiers or using watermarks for the detection of fake content. 
However, further research is needed to elucidate how these systems should be made public. **URM Statement.** The authors acknowledge that at least one key author of this work meets the URM criteria of ICLR 2023 Tiny Papers Track.
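As a brief illustration of the classifier-free guidance used in the experiments above (CFG scales 1, 5 and 10 in Table 1), the guided noise estimate can be sketched as follows; this is a generic sketch, not FigGen's actual implementation.

```python
import torch

def cfg_noise(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
    """Classifier-free guidance (Ho & Salimans, 2022): push the conditional prediction
    away from the unconditional one; scale = 1.0 recovers plain conditioning."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

At each sampling step the denoising U-Net would be evaluated twice, once with the text condition and once with an empty prompt, and the two predictions combined as above before the DDIM update.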
2303.14073
Blending from binarity in microlensing searches toward the Large Magellanic Cloud
Studies of gravitational microlensing effects require the estimation of their detection efficiency as soon as one wants to quantify the massive compact objects along the line of sight of source targets. This is particularly important for setting limits on the contribution of massive compact objects to the Galactic halo. These estimates of detection efficiency must not only account for the blending effects of accidentally superimposed sources in crowded fields, but also for possible mixing of light from stars belonging to multiple gravitationally bound stellar systems. Until now, only blending due to accidental alignment of stars had been studied, in particular as a result of high-resolution space images. In this paper, we address the impact of unresolved binary sources that are physically gravitationally bound and not accidentally aligned, in the case of microlensing detection efficiencies toward the Large Magellanic Cloud (LMC). We used the Gaia catalog of nearby stars to constrain the local binarity rate, which we extrapolated to the distance of the LMC. Then we estimated an upper limit to the impact of this binarity on the detection efficiency of microlensing effects, as a function of lens mass. We find that a maximum of 6.2\% of microlensing events on LMC sources due to halo lenses heavier than $30 M_{\odot}$ could be affected as a result of the sources belonging to unresolved binary systems. This number is the maximum fraction of events for which the source is a binary system separated by about one angular Einstein radius or more in a configuration where light-curve distortion could affect the efficiency of some detection algorithms. For events caused by lighter lenses on LMC sources, our study shows that the chances of blending effects by binary systems is likely to be higher and should be studied in more detail to improve the accuracy of efficiency calculations.
Tristan Blaineau, Marc Moniez
2023-03-24T15:37:00Z
http://arxiv.org/abs/2303.14073v3
# Blending from binarity in microlensing searches towards the Large Magellanic Cloud ###### Abstract Context:Studies of gravitational microlensing effects require the estimation of their detection efficiency, as soon as one wants to quantify the massive compact objects along the line of sight of source targets. This is particularly important for setting limits on the contribution of massive compact objects to the Galactic halo. These estimates of detection efficiency must not only account for the blending effects of accidentally superimposed sources in crowded fields, but also for possible mixing of light from stars belonging to multiple gravitationally bound stellar systems. Aims:Until now, only accidental blending have been studied, in particular thanks to high resolution space images. We address in this paper the impact of unresolved binary sources in the case of microlensing detection efficiencies towards the Large Magellanic Cloud (LMC). Methods:We use the Gaia catalog of nearby stars to constrain the local binarity rate, which we extrapolate to the distance of the LMC. Then, we estimate the maximum fraction of the cases for which a microlensing event could be significantly modified, as a function of the lens mass. Results:We find that less than 6.2% of microlensing events on LMC sources due to halo lenses heavier than \(30M_{\odot}\) can be significantly affected by the fact that the sources belong to unresolved binary systems. For events caused by lighter lenses on LMC sources, our study shows that the risk of blending effects by binary systems is likely to be higher and efficiency calculations remain more uncertain. Conclusions: ## 1 Introduction Objects catalogued in dense fields are frequently composed of several blended sources. Ignoring this fact may distort the statistical conclusions of the microlensing searches because of its impact on detection efficiency. Some of the consequences of blending on microlensing have been studied by comparing ground-based images with high-resolution deep space images obtained notably with the Hubble Space Telecope (HST) (HST archive 2002). These space images allow to quantify the impact of accidental superpositions of sources in the catalogs of the ground-based surveys, due to the high density of the field (Tisserand et al. 2007; Wyrzykowski et al. 2011). However, another component of the blending remains poorly understood, that resulting from the mixing of light from multiple gravitationally bound stars. Space telescopes themselves are unable to resolve such systems when they are at a distance as large as the Large Magellanic Cloud (LMC). Their existence is an additional cause of blending, distinct from the mixing caused by coincidental alignments. In this paper we study the consequences, currently poorly known, of the binarity of stars on the detectability of the gravitational microlensing effects they may experience. In particular, we will study the case of the detection efficiency of microlensing effects due to high mass (\(>30M_{\odot}\)) Galactic halo objects, which have been recently searched in the LMC direction and excluded as a significant component of the hidden mass of the Galaxy (Blaineau et al. 2022). In section 2, we recall the fundamentals of the gravitational microlensing effect. In Section 3 we introduce the blending effects and their impact on the detection efficiency. We introduce the case of multiple sources and distinguish between three blending regimes. 
In section 4, we present our statistical analysis tools and show that we cannot extract statistical information on the LMC binary systems from HST images because of the separation limit. In section 5, we describe our methodology to estimate upper values of the local system binarity rate. We show how the distribution of distances between the components of star pairs in a complete Gaia catalog population allows us to quantify the rate of widely separated double systems in the Galactic plane. We extrapolate the local binarity rate down to \(50AU\) separations for stars closer than \(500pc\) to the Sun in section 6. In section 7, we quantify the maximum impact of binarity on the microlensing detection efficiency towards the LMC as a function of the projected lens Einstein radius. We discuss the validity domain and limitations of our study, and address the question of the dependence of binarity rates on the stellar type in Section 8. We conclude and summarize our results in Section 9, and propose some prospects for future microlensing surveys. ## 2 Overview of microlensing ### Description of a microlensing event The gravitational microlensing effect (Paczynski, 1986), first discovered in 1993 (Alcock et al., 1993; Aubourg et al., 1993; Udalski et al., 1993), occurs when a massive compact object (the lens) passes close enough to the line of sight of a background source and temporarily magnifies its brightness. Reviews of the microlensing formalism can be found in Schneider et al. (2006) and Rahvar (2015). When a single point lens of mass \(M_{L}\) located at distance \(D_{L}\) deflects the light from a point source located at distance \(D_{S}\), a situation hereafter called a PSPL event, an observer receives light from two images that are not separated by the telescopes. The total magnification \(A(t)\) of the apparent luminosity of the source is given by (Paczynski, 1986): \[A(t)=\frac{u(t)^{2}+2}{u(t)\sqrt{u(t)^{2}+4}}\,, \tag{1}\] where \(u(t)\) is the distance of the lens to the undeflected line of sight, divided by the Einstein radius \(r_{\rm E}\), \[r_{\rm E}=\sqrt{\frac{4GM_{L}}{c^{2}}D_{S}\,x(1-x)}\simeq 4.5\,{\rm AU}\left[\frac{M_{L}}{M_{\odot}}\right]^{\frac{1}{2}}\left[\frac{D_{S}}{10\,{\rm kpc}}\right]^{\frac{1}{2}}\left[\frac{x(1-x)}{0.25}\right]^{\frac{1}{2}}, \tag{2}\] with \[x=D_{L}/D_{S}. \tag{3}\] When several stars are blended within a single cataloged object, one must properly take into account this blending effect for the interpretation of microlensing searches, especially to evaluate the optical depths. 
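As a quick numerical illustration of Eqs. (1)-(3), here is a minimal Python sketch (the function names and example values are ours):

```python
import numpy as np

def magnification(u):
    """PSPL magnification A(u) of Eq. (1); u is in units of the Einstein radius."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def einstein_radius_au(m_lens_msun, d_source_kpc, x):
    """Einstein radius of Eq. (2) in AU, with x = D_L / D_S as in Eq. (3)."""
    return 4.5 * np.sqrt(m_lens_msun * (d_source_kpc / 10.0) * x * (1 - x) / 0.25)

# Example: a 50 solar-mass halo lens halfway to an LMC source (D_S taken as ~50 kpc)
print(einstein_radius_au(50.0, 50.0, 0.5))        # ~71 AU
print(magnification(np.array([0.5, 1.0, 2.0])))   # ~2.18, ~1.34, ~1.06
```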
As mentioned above, the confusion of several stars in a single cataloged object has several possible origins: the superposition may be due to an accidental coincidence (accidental blending), depending on the field crowding, or it may be due to the existence of binary or multiple systems (binary blending). The estimation of accidental blending as it is classically obtained, makes the assumption of a locally uniform density distribution of stars; on the other hand the impact of superposition due to multiple systems, the binary blending, cannot be estimated in the same way. Considering the case of two stars blended into a single catalog object, we distinguish three blending regimes depending on their separation: * As long as the angular separation \(\delta\) between the two stars is large compared to both angular Einstein radii of the lens configurations1, we can consider them as independent in the context of the calculation of the expected number of events: we are in a blending regime that we can consider as classical or ordinary, since it is the one that occurs almost exclusively in the case of accidental mixing, and we observe then the addition of a constant flux to an amplified flux. Previous studies have shown that the impact of this blending regime is small (\(<10\%\)) and positive on the detection efficiency (Blaineau et al. (2022) and references included). Footnote 1: If the two stars are not at the same distance \(D_{z}\), then the Einstein rings for microlensing by the same given lens are different. * If the angular separation \(\delta\) between the two stars is of the order of the angular Einstein radii of both lens configurations, then we are in an intermediate regime, where the light curve can be described neither by a PSPL microlensing effect nor by the addition of a constant flux to an amplified flux. In this case, we observe the superposition of two events, with different magnifications, and maxima shifted in time. Depending on the geometrical configuration, two clearly separated peaks or a single asymmetric peak may appear. Several examples of magnification curves are shown in Fig. 1, with the corresponding geometrical configurations of sources and deflectors. As with the previous regime, this situation can occur only in the case of blending due to binarity, and the Einstein radii are identical for the microlensing of both stars. To quantify the impact of multiple sources on microlensing toward the LMC, we first need a way to count them. The following section shows why the limitations of space-based observations towards the LMC lead us to study a population of the solar environment using Gaia data (Gaia Collaboration et al. 2016). ## 4 Angular and physical distance separation between LMC stars detected by HST The spatial distribution of LMC stars is not expected to be uniform if it includes multiple bound systems. This is why we work with the distribution of the number of pairs of stars as a function of their separation (angular or spatial) and the two-point angular correlation function. The correlation function is calculated with the Landy-Szalay estimator (Landy & Szalay 1993), which compares the number of pairs in the data with the number of pairs in a simulation of a uniform spatial distribution in the same spatial domain. The other way to study multiple systems is to count the number of pairs as a function of their angular separation \(\delta\). 
We expect this distribution to be the sum of a contribution due to fortuitous alignments and an excess at the smallest values of \(\delta\) if there are multiple gravitationally bound systems in the set of the pairs. From simple geometrical considerations, the first contribution is a distribution which increases linearly with \(\delta\) as long as \(\delta\) is small enough so that edge effects do not limit the catalog used. Figure 1: (_up_) Three different trajectories of a deflector (red dashed) in front of two stars A and B blended into a single cataloged source; on the left stars A and B share \(80\%\) and \(20\%\) of the total luminous flux, on the right both stars have the same luminosity. The black outline corresponds to the positions where the deflector must be for the source-system to undergo a total magnification of 1.34. The coordinates are in units of Einstein radius, which is the same for both stars (located at the same distance). The second contribution, when it exists, is expected for smaller values of \(\delta\). Figure 2 shows the distribution of the number of pairs (up) and the two point correlation function (down), as a function of the angular separation \(\delta\) of the stars detected by SExtractor algorithm (Bertin & Arnouts 1996) on an image of a crowded field of the LMC. This image has been obtained by coadding the images taken with the \(1555\)w and \(814\)w filters by the Wide Field Planetary Camera 2 (WFPC2) of HST\({}^{2}\)(HST archive 2002). We notice that the correlation function remains zero down to the HST resolution limit (about \(0.5^{\prime\prime}\)), decreases below \(0.5^{\prime\prime}\) and is \(-1\) below \(0.25^{\prime\prime}\), which indicates that the algorithm can never distinguish two stars separated by less than \(0.25^{\prime\prime}\). Since there is no positive correlation for small separations, this shows that we also do not detect an excess of pairs in the HST data over a random distribution beyond a separation of \(0.5^{\prime\prime}\), which corresponds to a transverse separation of \(27500AU\) in the LMC. This situation is not surprising, since gravitationally bound systems at such distances are very rare (Dhital et al. 2010; Duchene & Kraus 2013). The LMC is therefore too far away for HST to resolve multiple systems that would cause a significant excess of close pairs compared to a random spatial distribution. ## 5 How to constrain the binarity rate in the LMC? Since the LMC is too far away, we studied a stellar population close to us, using data from the Gaia mission, and then we extrapolate the results to the distance of the LMC. First, we define the binarity rate we will use and describe our estimation method based on star pair counts. ### Methodology We wish to know the proportion of LMC objects in EROS or MACHO type catalogs that are binary systems as a function of their transverse separation \(a_{t}\). As we have just seen, the LMC is too far away for even space missions to resolve such systems. We therefore study a population of nearby stars cataloged by Gaia, for which the resolution of systems is possible as soon as \(a_{t}>500AU\). In the following, we call _object_ an element of the Gaia catalog (that can be made of a single star or more complex system) and we call _system_ a cluster of objects that we consider as gravitationally bound system (typically a resolved binary system). We will then count the binary stars in the local space, in a situation where they are well separated. 
Let's define \(n^{*}_{tot}\) as the total number of _objects_ in the catalog, and \(n^{*}_{bin}(a_{t})\) as the number of objects belonging to a resolved binary _system_ (_i.e._ made of two cataloged object), with a transverse separation \(>a_{t}\). From these simple counts of the resolved double star systems, we can numerically compute the function: \[f_{bin}(a_{t})=\frac{1}{n^{*}_{tot}}\frac{dn^{*}_{bin}(a_{t})}{da_{t}}. \tag{6}\] \(f_{bin}(a_{t})\) is the differential probability that a Gaia cataloged _object_ belongs to a double _system_ separated by a projected distance \(a_{t}\). This is a fonction that we will directly derive from the Gaia catalog in the next sections. We then define \(F_{bin}(a_{t})\), the integrated _system_ binarity rate, as the ratio between the number of binary _systems_ (unlike objects) with transverse separation \(>a_{t}\), to the total number of _systems_ : \[F_{bin}(a_{t})=\frac{n^{*}_{bin}(a_{t})/2}{(n^{*}_{tot}-n^{*}_{bin})+n^{*}_{ bin}/2}=\frac{n^{*}_{bin}(a_{t})/n^{*}_{tot}}{2-n^{*}_{bin}/n^{*}_{tot}}. \tag{7}\] Note that the total number of _systems_ at the denominator corresponds to the sum of the single star systems plus all the resolved binary systems -not only the systems separated by more than \(a_{t}-^{3}\). From the definition of \(f_{bin}(a_{t})\) we obtain \[\frac{n^{*}_{bin}(a_{t})}{n^{*}_{tot}}=\int^{\infty}_{a_{t}}f_{bin}(a)da_{t}, \tag{8}\] which is the fraction of _objects_ belonging to a binary _system_ with transverse separation \(>a_{t}\), \(n^{*}_{bin}\) is the total number of _objects_ belonging to a binary _system_ resolved by Gaia (depending on Gaia's resolution power). This number is estimated from \[\frac{n^{*}_{bin}}{n^{*}_{tot}}=\int^{\infty}_{a^{\prime}_{tot}}f_{bin}(a_{t} )da_{t}, \tag{9}\] where \(a^{\prime}_{tot}\) is the separation limit of Gaia in the catalog we use. As Gaia's resolution is not a step function, we start integrating from a very small separation \(a^{\prime}_{tot}=50AU\), below which we are sure Gaia cannot separate 2 components, but to which we Figure 2: (up) Number of pairs of stars per 1.2 arcsec interval as a function of their angular separation in a HST crowded LMC field image. The dotted line gives the predicted distribution of angular separations in the case of a spatially uniform distribution of stars with the same average star density. (Down) The measured two-point angular correlation function of the detected stars in the same field. The lower scale is the angular separation \(\delta\) and the upper scale gives the corresponding transverse separation in the LMC. will extrapolate our measurements in Section 6. This conservative choice then gives us a lower bound on \(n^{*}_{kin}\), and thus an upper bound on \(F_{bin}\) in the equation (7). This choice is not critical, since we will show that the integral (9) is much smaller than 2, and has therefore a minor impact in the denominator of Eq. (7). ### Binarity of stars closer than 600pc in the GAIA-EDR3 catalog We have used the Gaia-EDR3 catalog (Gaia Collaboration et al. 2020) for a preliminary assessment of the impact of binarity in microlensing studies. This exploratory work does not aim to extract binarity rates, but rather upper limits on the amount of multiple source systems that can give rise to complex gravitational microlensing effects. 
Our immediate goal is therefore to estimate the maximum fraction of binary systems separated by transverse (or projected) distances \(a_{t}\) comparable to or larger than the Einstein radius of a possible lens. In our definitions of the binarity rate, we have neglected the contribution of stellar systems made of more than two stars, since we find only 1% of such systems with angular size smaller than \(10^{\prime\prime}\) (corresponding to \(a_{t}<6000AU\) at \(600pc\) distance) in the selection of stars defined in the next paragraph. We will therefore neglect this type of system in our treatment of blending effects. The estimate of the transverse (or projected) physical distance \(a_{t}\) of two stars in the Gaia catalog is deduced from the angular distance \(\delta\) between the two components and their annual parallax measurements \(\pi\) (in ArcSec), assuming that they both have an intrinsic uniform and rectilinear motion4(Lindegren et al. 2012, 2021), by \(a_{t}=1AU\times\delta/\pi\). The parallax differences between components being too imprecise compared to the angular accuracies, we cannot make a study in the 3 dimensions which would include the longitudinal distance (along the line of sight). In the following analysis, parallaxes are just used to convert angular distances into transverse distances and also to reduce the risks of association of very distant objects, accidentally located on the same line of sight, by considering only pairs of stars with compatible parallaxes. Footnote 4: Since we are trying to quantify binary systems, we must remember that the rotation of the stars of a system around the center of gravity alters the uniform rectilinear motion assumed in the estimation of parallaxes. As we are only interested in binary systems separated by more than \(50AU\), with orbital periods of the order of a few centuries, we can consider that the variation of orbital velocity on the apparent trajectory of the stars on the sky has only a negligible impact on the estimation of parallaxes in Gaia. We selected the stars of the Gaia-EDR3 catalog (Gaia Collaboration et al. 2020), whose apparent magnitude is \(3<g<18\), a domain in which the catalog is complete (thus not biased) (Fabricius et al. 2020). Then we limited our sample to stars closer than 600pc (corresponding to a parallax \(\pi>1.66mas\)). At this distance, a separation of \(1^{\prime\prime}\) corresponds to \(600AU\). We also require that the parallax accuracy be better than 20%. The bias (Luri et al. 2018) induced by this selection can be neglected because we reject only 2% of the stars, the most distant of our sample. Finally, we have restricted our study to Galactic latitudes higher than \(20^{\circ}\) to avoid the very crowded and inhomogeneous areas of the Galactic plane (figure 3). Figure 4 shows the distribution of the absolute magnitude \(G\) of the stars of the GAIA catalog closer than 600pc, as a function of their distance estimated by their parallax. This catalog is complete between the lines of equal apparent magnitude marked by the thick red curves (\(g=3\) and \(g=18\)). We define 5 shells (Fig. 3 and 4), delimited by the spheres of radii (50, 140, 230, 320, 410 and 500pc) and study separately the pairs of stars in each shell (Table 1). These subdivisions allow us to verify that the separation distributions of the binaries are indeed functions of the physical distances \(a_{t}\), independently of the angular distances \(\delta\), as long as these are greater than the separation power of Gaia. 
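A minimal sketch of the sample selection and of the conversion from angular to transverse separation described above (the column names are hypothetical, not the actual Gaia archive fields):

```python
import numpy as np
import pandas as pd

def select_sample(cat: pd.DataFrame) -> pd.DataFrame:
    """Cuts of Sect. 5.2: 3 < g < 18, parallax > 1.66 mas with < 20% error, |b| > 20 deg."""
    keep = (
        cat["g_mag"].between(3.0, 18.0)
        & (cat["parallax_mas"] > 1.66)
        & (cat["parallax_error_mas"] / cat["parallax_mas"] < 0.20)
        & (cat["b_deg"].abs() > 20.0)
    )
    return cat[keep].copy()

def transverse_separation_au(delta_arcsec, parallax_arcsec):
    """a_t = 1 AU x (delta / parallax), with both angles in arcsec."""
    return delta_arcsec / parallax_arcsec

# Shell index (0..4) from the distance, with edges at 50, 140, 230, 320, 410, 500 pc.
shell_edges_pc = np.array([50.0, 140.0, 230.0, 320.0, 410.0, 500.0])

def shell_index(parallax_mas):
    distance_pc = 1000.0 / parallax_mas
    return np.digitize(distance_pc, shell_edges_pc) - 1   # -1 or 5 means outside the shells
```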
To ensure that the distributions studied in the 5 shells considered are all complete, we only study the stellar population whose absolute magnitude is in the range \(-0.5<G<9.5\) (between the thick black lines in Figure 4). Figure 4: Distribution of the number of stars in Gaia EDR3 as a function of their distance and absolute magnitude (left scale). The right scale gives the apparent magnitude that these stars would have in the LMC. The Gaia catalog is complete between the thick red curves. The red lines are the iso-apparent-magnitude lines in Gaia in the (distance, absolute magnitude) plane. We restrict the study to stars within the absolute magnitude range delimited by the thick horizontal black lines (\(-0.5<G<9.5\)), which ensures homogeneity of stellar type over the volume we consider; the vertical solid lines correspond to the distance limits of our sample. The vertical dashed black lines delineate the distance domains of the shells we will consider in our analysis (between \(50\) and \(500pc\)). Figure 3: Projection perpendicular to the Galactic plane of the spatial distribution of the stars of the Gaia catalog, and representation of the limits of the shells used in our study. The representation is centered on the Sun. Finally, we only consider pairs of stars with a magnitude difference of less than 2.5, beyond which we can neglect the impact of the less luminous component on the light curve of a microlensing effect. Figure 5 shows the distribution of angular separations \(\delta\) for the pairs of stars closer than \(600pc\). We first observe a clear excess of pairs for \(\delta<20^{\prime\prime}\), in agreement with Zavada & Piska (2020), which demonstrates the existence of Gaia-detectable bound systems within \(600pc\), in contrast to the situation in Fig. 2, which showed the inability of HST to separate bound systems in the LMC. Second, we note that, like HST, our algorithm fails to separate stars that are too close in angular distance, in this case around \(0.5^{\prime\prime}\). The question of the separation limit, as it also relates to Gaia's scanning law, is beyond the scope of this paper (see Blaineau (2021) for more details). For our study, it is sufficient to know that we decided to focus only on pairs separated by at least \(2^{\prime\prime}\) in order not to be limited by the resolution of Gaia. We have shown that the results of the next sections vary by less than 2% (half of the estimated uncertainty) if we change this minimal separation in the range \([1.5^{\prime\prime}-2.5^{\prime\prime}]\) (Blaineau, 2021). To estimate the binarity rate of stars in a shell, we first build the distribution of the type shown in Figure 5; then, we calculate the number of pairs remaining after subtracting the component due to accidental alignments, as a function of the separation. This expected random component is a linear function, fitted between \(60^{\prime\prime}\)\({}^{5}\) and \(120^{\prime\prime}\) (dashed line in Figure 5). Once the angular separation is converted to the physical transverse separation \(a_{t}\) (by using the measured parallax \(\pi\)), from the excess in each channel of \(a_{t}\) we can derive an estimate of the differential stellar binarity rate \(f_{bin}(a_{t})\) defined by expression (6), plotted in Figure 6 for each shell. We recall that this quantity represents the differential fraction of stars in pairs _in excess_ of the expected number of accidental pairs separated by \(a_{t}\), per unit of \(a_{t}\). 
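As a sketch of how the excess over accidental pairs can be turned into the differential rate \(f_{bin}(a_{t})\) of Eq. (6) (the binning, the weighting scheme and the function names below are our own simplifications):

```python
import numpy as np

def f_bin_estimate(sep_arcsec, parallax_arcsec, n_objects, at_edges_au):
    """Differential binarity rate per object and per unit a_t (Eq. (6))."""
    # angular-separation histogram and linear fit of the accidental (random) component
    d_edges = np.arange(0.0, 120.5, 0.5)                       # 0.5" bins up to 120"
    counts, _ = np.histogram(sep_arcsec, bins=d_edges)
    centres = 0.5 * (d_edges[:-1] + d_edges[1:])
    fit = (centres > 60.0) & (centres < 120.0)
    accidental = np.polyval(np.polyfit(centres[fit], counts[fit], 1), centres)

    # probability that a pair at separation delta is *not* accidental
    p_excess = 1.0 - np.clip(accidental / np.maximum(counts, 1), 0.0, 1.0)

    # excess pairs histogrammed in transverse separation a_t = delta / parallax (AU)
    a_t = sep_arcsec / parallax_arcsec
    weights = np.interp(sep_arcsec, centres, p_excess)
    excess, _ = np.histogram(a_t, bins=at_edges_au, weights=weights)

    # each binary contributes two cataloged objects
    return 2.0 * excess / (n_objects * np.diff(at_edges_au))
```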
Footnote 5: Two stars at \(\sim 450pc\) separated by \(60^{\prime\prime}\) would be separated by only \(0.5^{\prime\prime}\) if they were in the LMC. This directly illustrates the fact that an excess count of pair in LMC is only expected for separations much smaller than the resolution limit of HST. The figure shows that the rates found do not depend on the shell considered, which allows us to use the average distribution, weighted by the effective number of stars in each shell; these effective numbers are the numbers of stars found within the intervals of parallax of each shell, statistically corrected for misattributions of pairs in the shells, due to uncertainties in the measured parallaxes 6 (see also Table 1). It should be noted that the closest shells are obviously those that allow us to estimate the binarity rate at the smallest separations \(a_{t}\), but at the expense of the statistics limited by the small volume of the shell. For shells at larger distances, the statistic is more comfortable, but our angular separation limit of \(2^{\prime\prime}\) prevents estimates at the smallest physical separations. Footnote 6: These uncertainties result in uncertainties on the distance of the stars; a pair can then be either misattributed if both components are effectively in another shell, or it can be missed, if only one of the components is in another shell. ## 6 Estimation of the local stellar binarity rate In this section, we fit and extrapolate the differential binarity rate down to \(50AU\) transverse separations. \begin{table} \begin{tabular}{l c c c} \hline \hline & \(a_{t}\) for \(\delta=2^{\prime\prime}\) & \multicolumn{2}{c}{number of stars} \\ shell limits & separation & in shell & effective \\ \hline \([\)50-140\(]\) pc & \([\)100-280\(]\) AU & \(16049\) & \(158000\) \\ \hline \([\)140-230\(]\) pc & \([\)280-460\(]\) AU & \(481496\) & \(465000\) \\ \hline \([\)230-320\(]\) pc & \([\)460-640\(]\) AU & \(867259\) & \(807000\) \\ \hline \([\)320-410\(]\) pc & \([\)640-820\(]\) AU & \(1259719\) & \(1109000\) \\ \hline \([\)410-500\(]\) pc & \([\)820-1000\(]\) AU & \(1634717\) & \(1313000\) \\ \hline \hline \end{tabular} \end{table} Table 1: Limits and contents of the shells (see text). Figure 5: _Number of star pairs per 0.5 arcsec interval as a function of their angular separation. The dotted line gives the predicted distribution in the case of a spatially uniform distribution with the same average star density. Only stars with Galactic latitude larger than \(60^{\circ}\) are considered here (1,233,947 stars). We measure about 29,000 pairs in excess of accidental pairs._ Figure 6: _Differential rate of binaries \(f_{bin}(a_{t})\) measured in the five shells as a function of transverse separation \(a_{t}\). The red dashed line gives the means for all shells. Error bars are not shown to avoid overloading the figure, but the scatter of each series gives an indication._ Following Duquennoy & Mayor (1991) and Raghavan et al. (2010), we considered a log-normal model for the distribution of semi-major axes \(a\) of binary systems. 
Since we measure projected separations \(a_{t}\) and not semi-major axes, this distribution is modified as follows, by considering that the orbits have no preferred inclination: \[f_{bin}^{DM}(a_{t})=A\int_{a_{t}}^{+\infty}\frac{a_{t}}{r^{2} \sigma\sqrt{2\pi(r^{2}-a_{t}^{2})}}\exp-\frac{\left(\ln r/r_{\rm mode}-\sigma^{ 2}\right)^{2}}{2\sigma^{2}}dr, \tag{10}\] where \(A\) and \(\sigma\) are fitted to our data once \(r_{mode}\) is chosen, corresponding to the maximum of the distribution. If we chose \(r_{mode}=0.1AU\), then we find \(A=0.126\) and \(\sigma=2.72\), but the fit is unsatisfactory for \(a_{t}<600AU\), and this whatever the parameter \(r_{mode}\) (Figure 7). This poor agreement can probably be attributed to the fact that this lognormal distribution was established for solar-type stars, while we study a different and more extended population. Nevertheless, following this fitted model, we compute the binarity rate for _systems_ with \(a_{t}>200AU\)1 using Eqs (9), then (8) and (7), and find \(F_{bin}(200AU)=2.76\pm 0.03\%\). By varying the \(r_{mode}\) parameter within \(10^{-4}AU<r_{mode}<10^{2}AU\), we find values close to, contained within \(2.05\%<F_{bin}(200AU)<2.8\%\). Similarly, we find that \(F_{bin}(100AU)<3.6\%\) regardless of the \(r_{mode}\) parameter. Footnote 1: This length corresponds to the most likely projected Einstein radius (at the LMC) of a \(\sim 50M_{\odot}\) Galactic halo lens magnifying light from a LMC source. We also fitted to our measurements the following empirical power-law parameterization : \[f_{bin}^{PL}(a_{t})=(0.19AU^{-1})\times(a_{t}/1AU)^{-1.379}, \tag{11}\] that better describes our data for small \(a_{t}\) values. (Figure 7). With this alternative model for the distribution of projected separations of binaries, we find that \(F_{bin}(200AU)=3.48\pm 0.03\%\), a value close to the one found with the previous model. In section 5, we mentioned that the total probability of a Gaia object being a member of a resolved binary system is less than the value given by expression (9). The value we find by integrating the Eq. (11) from \(a_{t}=50AU\) is \(0.114\), which is an upper limit of the fraction of Gaia objects belonging to binaries resolved in our sample. This number is indeed small compared to 2, and its exact value does not impact the computation of \(F_{bin}\) from Eq. (7) especially since we are interested in the upper limit. It should be noted further that nothing can be said from our data about the binarity rate for \(a_{t}<50AU\), because extrapolation below this value is not constrained. ## 7 Extrapolation at the LMC; impact of binarity on microlensing detection The previous study concerns a population of Milky Way stars with \(-0.5<G<9.5\). In the LMC, they would have an apparent magnitude \(18<g<28\), _i.e._, would be among the faintest stars in a classical catalog searching for gravitational microlensing effects. This is a limitation of this work, in addition to the fact that we assume that this population has the same binarity statistical characteristics in the LMC as in the Milky Way disk. We saw in section 3 that if the angular separation of the components of a blend is much smaller than the angular Einstein radius -expressed by \(a_{t}<<R_{E}/x\), where \(x\) is given by Eq. (3)-, then both sources undergo roughly the same magnification and everything happens as if there were only one source. 
It is therefore the pairs with \(a_{t}\geq R_{E}/x\) whose proportion we need to estimate, to quantify their impact on the statistics of microlensing effects. From the \(f_{bin}^{PL}(a_{t})\) function of the differential binarity rate and using Eq. (7), we can estimate the maximum proportion of situations where binarity can significantly affect microlensing effects. Figure 8 shows this proportion as a function of \(R_{E}/x\), the Einstein radius projected in the LMC, after integration over the distribution of \(a_{t}\) under three different assumptions: assuming that the blend effect is significant as soon as \(a_{t}>R_{E}/2x\) (thus integrating the differential distribution from \(R_{E}/2x\) to infinity), or only when \(a_{t}>R_{E}/x\), or weighting the differential distribution between 0 and 1 (ramping) when \(a_{t}\) varies from \(0.1R_{E}/x\) to \(1.75R_{E}/x\). The latter assumption is derived from the study of Griest & Hu (1992), which discusses in detail the distortions expected in a microlensing curve for a composite source. Figure 8 shows for example that under the most pessimistic assumption (blending effect to be taken into account as soon as \(a_{t}>R_{E}/2x\)), less than 7% of the sources are binary systems with a luminosity difference of less than 2.5 magnitudes between components, which can significantly affect the light curves of microlensing events when \(R_{E}/x>50AU\). ## 8 Discussion ### Field of application One must first remember the limitations of this work: only the population of stars of absolute magnitude \(-0.5<g<9.5\) could be well studied, and any use for another population is an extrapolation, either for an identical population but in another galaxy with possibly different metallicity, or for a stellar population in the Milky Way with more extensive types. Figure 7: _Projected log-normal function \(f_{bin}^{DM}\) for \(r_{mode}=0.1AU\) (dotted line), and power function \(f_{bin}^{PL}\) (full line), fitted to the differential fraction of binaries, as a function of the projected separation. The vertical red line corresponds to a projected separation of 200 AU. The inset shows that the fit of \(f_{bin}^{DM}\) is not satisfactory at small separations._ To compute the fraction of potentially complex events due to binarity as a function of the lens mass, we combine the rates from Fig. 8 with the projected Einstein radius \(R_{E}/x\) distribution expected for that lens mass. Figure 9 shows this generic distribution for lenses of \(1M_{\odot}\) mass, changing with \(M_{L}\) by simply scaling the abscissa with \(\sqrt{M_{L}/M_{\odot}}\). It has been established by assuming that the lenses are spatially distributed according to the so-called standard dark matter halo model described in Blaineau et al. (2022). Table 2 shows, for a series of lens masses, the maximum expected fractions of situations where binaries could affect microlensing detection, split into 3 domains of projected Einstein radius \(R_{E}/x\). For each lens mass, the event fraction for each domain is deduced from the properly scaled distribution of Fig. 9. For \(R_{E}/x<50AU\), we conservatively assume that 100% of the events can be affected by blending due to binarity; for \(50AU<R_{E}/x<1000AU\), we integrate the most pessimistic function of Fig. 8, weighted by the normalised distribution of \(R_{E}/x\); for \(R_{E}/x>1000AU\), we consider that a maximum of 2% of the events can be affected by blending due to binarity. 
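The per-mass fractions just described can be assembled mechanically once a binary fraction \(F_{bin}(a_{t})\) and a sample of projected Einstein radii \(R_{E}/x\) are available. The sketch below is only a schematic of that bookkeeping under the most pessimistic rule (blending significant as soon as \(a_{t}>R_{E}/2x\)); the \(F_{bin}\) callable, the simulated \(R_{E}/x\) values, and all names are placeholders of this sketch, not the actual Table 2 computation.

```python
import numpy as np

def table2_row(re_over_x_sample, F_bin, cap_large=0.02):
    """Maximum fraction of potentially affected events for one lens mass.

    re_over_x_sample : array of simulated projected Einstein radii R_E/x (AU),
                       drawn from the distribution of Fig. 9 scaled to the lens mass.
    F_bin            : callable giving the binary fraction with a_t above a threshold (AU).
    """
    r = np.asarray(re_over_x_sample, dtype=float)
    small = r < 50.0                      # R_E/x < 50 AU: conservatively count 100% of events
    mid = (r >= 50.0) & (r <= 1000.0)     # 50-1000 AU: pessimistic rule, a_t > R_E/(2x)
    large = r > 1000.0                    # > 1000 AU: at most cap_large of the events
    affected = np.empty_like(r)
    affected[small] = 1.0
    affected[mid] = np.minimum(1.0, F_bin(r[mid] / 2.0))
    affected[large] = cap_large
    return {
        "frac_events_RE<50AU": small.mean(),
        "frac_events_50-1000AU": mid.mean(),
        "frac_events_RE>1000AU": large.mean(),
        "max_affected_total": affected.mean(),
    }

# Example with the power-law integral used as a stand-in for F_bin and a toy R_E/x sample:
# row = table2_row(np.random.lognormal(mean=6.0, sigma=1.0, size=100_000),
#                  F_bin=lambda a: 0.19 / 0.379 * a ** (-0.379))
```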
The total given in the table is the maximum proportion of events for which the classical detection efficiency calculation is not applicable, because of the additional risk of event superposition or blending due to the binarity of the sources. In the absence of a specific simulation, the detection efficiency for these events involving binaries is poorly known, and this must be taken into account as a systematic uncertainty for the measurement of optical depths and event rates. This study shows that the impact of source binarity can be neglected to first order when searching for gravitational microlensing effects from lenses heavier than \(30M_{\odot}\). The maximum uncertainty of 6.2% decreases further as the lenses become more massive (Fig. 9), showing that the binarity of LMC sources has an effect, additional to the estimated accidental blending, that can be neglected for heavy lenses, as was done in Blaineau et al. (2022). Figures 8 and 9 allow one to estimate these numbers in the case of even more massive lenses towards the LMC. For an estimate corresponding to another Galactic halo model, or to targets other than the LMC, Fig. 9 needs to be rebuilt. For events due to lighter lenses, with a larger probability of projected Einstein radii \(R_{E}/x<50AU\), it is currently not possible to draw reliable conclusions on the impact of the blending due to the binarity of the LMC sources with our technique. An alternative method for estimating the differential binarity rate at projected separations of less than \(50AU\) is needed, coupled with a specific simulation to estimate the detection efficiency of non-PSPL events. In LMC catalogs like the EROS2 or MACHO surveys, which are composed of rather bright stars, we overestimate the binarity rate by assuming that the stellar populations from our Gaia sample and in the LMC are similar. We also investigated whether there are correlations between magnitude differences and the separation \(a_{t}\) of binaries. Figure 10 shows the distributions of magnitude differences \(\Delta G\) between components for binaries with \(1000AU<a_{t}<2500AU\) (Fig. 10(b)) and with \(a_{t}<1000AU\) (Fig. 10(c)), obtained by subtracting from the observed distributions the expected distributions for random pairs (Fig. 10(a)), with appropriate normalization. The latter distribution is deduced from that of the difference \(\Delta G\) of pairs separated by \(a_{t}>30000AU\), which turns out to be approximately uniform. It appears that the closer the binary system, the smaller the difference in magnitude between components. It is tempting to explain this difference by the intervention of a gravitational capture mechanism, which would favor the formation of distant binaries, whose luminosities would consequently be less correlated. These observations are corroborated by the distribution of mass ratios as a function of the semi-major axis or period of the binary systems in Moe & Di Stefano (2017). Finally, we examined the case of red giants, which constitute an important part of the catalogs of the historical microlensing surveys towards the LMC. Unfortunately, they represent too few stars in our Gaia sample to establish a reliable binarity rate. However, we examined the magnitude distribution of stars in pairs with separation \(a_{t}<2500AU\) containing at least one giant. 
In this sample of pairs, we found that there are almost no giant-giant binary systems (less than 1% of the sample), but only binaries consisting of a giant and a main sequence star, with magnitude difference \(\Delta G\) smaller than 2.5 in 85% of the cases. This fact reinforces our conclusion that we probably overestimate the proportion of binaries in EROS2/MACHO type catalogs by extrapolating the binarity rate measured in Section 7. ## 9 Conclusions: The impact of binarity on microlensing surveys We conclude from this study that the detection efficiency of searches for long-duration microlensing events due to lenses heavier than \(30M_{\odot}\) toward the LMC is not significantly affected by the source binarity. This result is useful not only for the recent combined analysis of EROS and MACHO data spanning several decades (Blaineau et al. 2022), but also for future research with Rubin-LSST. On the other hand, for lenses lighter than \(30M_{\odot}\), as soon as the projected Einstein radius is less than a few tens of \(AU\), the binarity rates extrapolated here are higher and less reliable. The fraction of events that can be affected by blending due to source binarity becomes less negligible, and another study must be undertaken to estimate the impact of a binarity rate that may be high (but probably positive) on the detection efficiency. Although it is not possible to estimate from the Gaia database the differential binarity rate for \(a_{t}<50AU\) in the LMC, one can conversely consider detecting binarity effects by measuring distortions with respect to a simple (PSPL) microlensing effect. In particular, if the photometric accuracy of LSST reaches a few milli-magnitudes, one could use possible deviations from PSPL microlensing effects to infer source binarity rates. Our study of the impact of binarity rates on microlensing detection efficiency toward the LMC is easily transferable, through some scaling and modelling of the lens spatial distribution, to studies of microlensing within the Galactic plane. In this case, the source population should better resemble the one studied in this paper. Our last comment is that the use of a tolerant prefiltering, not sensitive to the precise shape of the magnification curve, remains the safest technique to mitigate the effects of distortion due to the binarity of the source on the detection efficiency. ###### Acknowledgements. We thank Olivier Perdereau for his useful comments on the manuscript. This work was supported by the Paris Ile-de-France Region.
2309.01343
Distributional Domain-Invariant Preference Matching for Cross-Domain Recommendation
Learning accurate cross-domain preference mappings in the absence of overlapped users/items has presented a persistent challenge in Non-overlapping Cross-domain Recommendation (NOCDR). Despite the efforts made in previous studies to address NOCDR, several limitations still exist. Specifically, 1) while some approaches substitute overlapping users/items with overlapping behaviors, they cannot handle NOCDR scenarios where such auxiliary information is unavailable; 2) often, cross-domain preference mapping is modeled by learning deterministic explicit representation matchings between sampled users in two domains. However, this can be biased due to individual preferences and thus fails to incorporate preference continuity and universality of the general population. In light of this, we assume that despite the scattered nature of user behaviors, there exists a consistent latent preference distribution shared among common people. Modeling such distributions further allows us to capture the continuity in user behaviors within each domain and discover preference invariance across domains. To this end, we propose a Distributional domain-invariant Preference Matching method for non-overlapping Cross-Domain Recommendation (DPMCDR). For each domain, we hierarchically approximate a posterior of domain-level preference distribution with empirical evidence derived from user-item interactions. Next, we aim to build distributional implicit matchings between the domain-level preferences of two domains. This process involves mapping them to a shared latent space and seeking a consensus on domain-invariant preference by minimizing the distance between their distributional representations therein. In this way, we can identify the alignment of two non-overlapping domains if they exhibit similar patterns of domain-invariant preference.
Jing Du, Zesheng Ye, Bin Guo, Zhiwen Yu, Lina Yao
2023-09-04T04:02:04Z
http://arxiv.org/abs/2309.01343v1
# Distributional Domain-Invariant Preference Matching for Cross-Domain Recommendation ###### Abstract Learning accurate cross-domain preference mappings in the absence of overlapped users/items has presented a persistent challenge in Non-overlapping Cross-domain Recommendation (NOCDR). Despite the efforts made in previous studies to address NOCDR, several limitations still exist. Specifically, 1) while some approaches substitute overlapping users/items with overlapping behaviors, they cannot handle NOCDR scenarios where such _auxiliary information_ is unavailable; 2) often, cross-domain preference mapping is modeled by learning _deterministic explicit representation_ matchings between sampled users in two domains. However, this can be biased due to individual preferences and thus fails to incorporate preference continuity and universality of the general population. In light of this, we assume that despite the scattered nature of user behaviors, there exists a consistent latent preference distribution shared among common people. Modeling such distributions further allows us to capture the continuity in user behaviors within each domain and discover preference invariance across domains. To this end, we propose a Distributional domain-invariant Preference Matching method for non-overlapping Cross-Domain Recommendation (DPMCDR). For each domain, we hierarchically approximate a posterior of domain-level preference distribution with empirical evidence derived from user-item interactions. Next, we aim to build _distributional implicit_ matchings between the domain-level preferences of two domains. This process involves mapping them to a shared latent space and seeking a consensus on domain-invariant preference by minimizing the distance between their distributional representations therein. In this way, we can identify the alignment of two non-overlapping domains if they exhibit similar patterns of domain-invariant preference. Experiments on real-world datasets demonstrate that DPMCDR outperforms the state-of-the-art approaches with a range of evaluation metrics. Cross-Domain Recommendation, Distributional Preference Matching, Preference Invariance ## I Introduction Cross-domain recommendation(CDR) is widely considered an effective approach to tackle the long-standing data scarcity issue in recommender systems [1] by transferring knowledge of users/items available in one domain to another [2, 3]. Collaborative Filtering (CF) has emerged as a widely explored approach, where Matrix Factorization (MF)[4, 5] and Neural Networks (NN)[6, 7] have been actively employed in the context of CDR. MF-based methods aim to discover user similarities from observed user-item interactions to facilitate knowledge transfer. For instance, Hu et al. [5] propose to capture cross-domain factors by horizontally connecting the interaction matrices of different domains. However, MF-based methods are highly dependent on observation availability and thus tend to perform poorly in cold-start settings. To overcome it, recent NN-based methods have adopted Embedding-and-Mapping [8] to adapt the information across different domains effectively. As an example, PTUPCDR [7] use a meta-network to generate personalized bridge functions based on user representations to transfer personalized preferences. Nevertheless, these approaches necessitate overlapping users/items to develop reliable representations and capture domain correlations [9]. 
Their performance would be compromised in the absence of overlapping users/items, leading to what is known as the Non-Overlapping Cross-Domain Recommendation(NOCDR) problem. Prior NOCDR studies have primarily explored auxiliary user/item profiles or behaviors as substitutes for unavailable users/items [10]. In this case, Liu et al. [11] connect non-overlapping users through the overlapped review attributes of users. Likewise, users exhibiting similar rating patterns can be linked based on social behaviors [12]. _Limitations of Previous Works:_ Having said that, two central limitations remain in applying existing CDR approaches to NOCDR settings. First, existing methods primarily focus on constructing _deterministic_ cross-domain mappings between sampled individuals, regardless of OCDR or NOCDR, shown in Fig. (1). Specifically, the OCDR method PTUPCDR matches the representations of the same user present in both domains; while in [11], the individual review attributes are aligned between domains for NOCDR. Such _deterministic explicit_ matching, however, is susceptible to _individual biases_ introduced by sampling. It can only capture the differences between individuals, thus struggling to uncover common preferences within the population [13]. On the other hand, NOCDR methods that integrate auxiliary behaviors essentially pass the needs for overlapping information on to additional data from users and items, such as ratings, tags, and reviews [12, 14, 11]. In [15], domain knowledge is embedded into a cluster-level full rating matrix, allowing for the transfer of multiple rating patterns within the matrix from the source to the target domain. In this context, _explicit_ mapping can still be learned to some extent. Even so, their limitations become evident in more challenging NOCDR scenarios where no auxiliary information is available. The lack of overlapping information renders those methods that rely on _explicit_ mapping relations ineffective, thus unsuitable for addressing NOCDR. _Research Motivation:_ Still, recommendation services are intended for the general public and should be designed to follow a consistent pattern of underlying user preferences that govern user behaviors, regardless of specific domains [10]. Intuitively, NOCDR can take advantage of these intrinsic domain-invariant preferences. However, related exploration remains under-investigated; the potential benefits are not fully realized by previous NOCDR methods [13]. In light of this, we assume that there exists a continuous prior distribution of domain-level preferences, which implicitly describes the general preference pattern within each domain. By approximating the posterior distribution, we can obtain latent cross-domain invariant preferences through alignment with different domains. Given the limitations of _explicit_ cross-domain matching in the absence of exact individual mapping relationships [8, 16, 6], we propose matching the predictive distributions of these domain-invariant preferences instead. The key motivation is to ensure that domain-invariant preferences, which capture the inherent patterns of users, remain highly similar irrespective of the specific domain they originate from. To achieve this, we first use hierarchical probabilistic modeling to derive domain-level user preferences by leveraging groups of user representations from each domain, whilst considering their inner correlations. 
The domain-level preferences further parameterize a predictive distribution of domain-invariant preference from both source and target domain perspectives. Transferring these between domains can be seen as sharing the underlying universality with others. In this way, we identify a cross-domain invariant preference by aligning two predictive distributions derived from random groups of users, referred to as _distributional implicit_ matching. Intuitively, this implies reaching a consensus across domains on such a cross-domain invariant preference. Moreover, _implicit_ distributional matching could benefit from reduced sampling bias with monte-carlo methods [17]. _Developed Method:_ We propose a **D**istributional Domain-invariant **P**reference **M**atching approach for non-overlapping **C**ross-**D**omain **R**ecommendation, named **DPMCDR**. We consider the user population's preference as a continuous prior distribution across different domains. To approximate the posterior within each domain, we leverage groups of users and derive domain-level latent representations and cross-domain invariant preferences. By considering preferences as a collective whole, one can draw correlations between users and reduce individual bias. Towards this, we first compute deterministic user/item representations with _Deterministic Graph Encoders_ from observed user-item interactions. Then, we design _Stochastic Latent Preference Identifier_ for posterior approximation of domain-level preference with random groups of latent user representations. This is followed by a _Distributional Preference Matching_ that parameterizes a predictive distribution of cross-domain invariant preference in a shared latent space for both domains. To ensure consistency of cross-domain invariant preference, we construct a bi-directional transfer path by minimizing the Jenson-Shannon divergence between them. In addition, we implement _User-specific Optimizers_ and _Domain-specific Optimizers_ informed by variational Information Bottleneck (VIB)[18] to constrain the latent representations and domain-level preference in terms of stronger generalization [19] for improved prediction performance. _Contributions:_ Briefly, our contributions are as follows: * We propose a cross-domain invariant preference matching approach for NOCDR, which models the _domain-level_ preferences as continuous distributions and yields a cross-domain invariant preference aligned across domains, in view of distributional _implicit_ preference matching. * Specifically, we approximate the continuous _domain-level_ preferences with groups of random users in both domains and parameterize two distributions of _cross-domain invariant_ preferences therefrom. With latent correlations among users incorporated, two distributions align different interpretations of the preference commonality. * In addition, we present two VIB-informed optimizers that constrain latent representations to be maximally compact and informative about the user-item interactions. The _user-specific optimizer_ helps to obtain robust user representations within each domain, while the _domain-specific optimizer_ facilitates the generalization of domain-level preference in another domain. ## II Related work We categorize a CDR model as either OCDR or NOCDR based on the presence or absence of overlapping users/items. Fig. 1: Difference between matching with _deterministic explicit matching_ (left) and our _distributional implicit_ matching (right). 
### _Overlapping Cross-Domain Recommendation_ The OCDR scenario involves partial or complete user-item interactions with both domains. In this context, OCDR models basically strive to capture the representations of overlap users from observed interactions and align/transfer them across domains [8] with different methodologies, such as Matrix Factorization(MF)-based methods[4, 20, 5, 21] and Neural Network(NN)-based methods [22, 23, 7, 6, 16]. For MF-based methods, CMF [4] share user parameters across domains to transfer knowledge. Hu et al. [5] horizontally connect the matrices of different domains to obtain potential users and item factors for prediction. Moreover, MPF [21] analyze user behaviors on different websites and applies the probabilistic matrix factorization to capture cross-site user preferences for knowledge transfer. The cold-start problems, however, may prevent the use of MF-based methods, leading to the surge of NN-based methods, which have demonstrated improved capacity under cold-start settings. Man et al. [8] define the Embedding-and-Mapping paradigm and inspire a series of OCDR models. Following pre-training of user and item embeddings, this paradigm uses overlapping correlations between users/items to learn a mapping function. For example, DCDIR [22] and HCDIR [23] construct heterogeneous information networks and leverage the rich information available in the user-item interactions. Zhu et al. [7] propose a meta-network to generate personalized user preference transfer bridges based on user representations, enabling the transfer of personalized preferences. VDEA [24] optimize the variational lower bound of Mixture-Of-Gaussian and align the embedding distribution of overlapping and non-overlapping users at the user level. More recently, DisenCDR [6] and CDRIB [16] harness mutual information to filter informative user/item representations for knowledge transfer. In summary, OCDR methods urge the existence of overlapping users/items to transfer knowledge between domains. Their performances will be significantly reduced when faced with NOCDR scenarios. ### _Non-overlapping Cross-Domain Recommendation_ A NOCDR setting suggests that no user has interactions in both domains simultaneously. Solutions involve finding _explicit_ correlations of user behaviors as substitutes for overlapping users/items[13]. Yang et al.[25] connect different items with the same tags to facilitate semantic matching of items based on shared tags. Wang et al.[26] capture user sequential behaviors by jointly embedding the rated items into a unified space. Liu et al.[11] merge latent embeddings of reviews and attributions across domains and reduce the domain discrepancy with attribution alignment. However, these approaches rely heavily on auxiliary information and overlapping behaviors for knowledge matching. In contrast, a series of works known as codebook-based methods develop methods around cluster-level rating patterns compressed into a codebook. Li et al.[27] compress the rating matrix into a compact representation in the source domain, and reconstruct the target matrix by expanding this codebook. Subsequent works [28, 29, 30, 31] direct their efforts on cluster-wise correspondence, placing a strong assumption of cluster correspondence and relying on discrete _explicit_ matching to transfer knowledge. A more recent study [32] assumes that the exact rating patterns between domains can be aligned with iterative optimization. 
These deterministic methods, relying on explicit relations or sampled users, may result in _individual bias_ among users who behave similarly in source and target domains but can be inconsistently between discrete groups. In contrast, our method assumes the continuous distribution of underlying preferences and models the cross-domain invariance from the _implicit_ distributional perspective, eventually reducing the _individual bias_. Noticeably, our method is effective when only user-item interactions exist, without the need for auxiliary information as required by previous studies. ## III Problem Definition We target a NOCDR scenario1 where only user-item interactions in either domain are available. For each domain, we have two node sets: a user set \(\mathcal{U}=\{u_{i}\}_{i=1:|\mathcal{U}|}\) and an item set \(\mathcal{V}=\{v_{i}\}_{i=1:|\mathcal{V}|}\). The user-item interactions constitute the edge set \(\mathcal{R}=\left\langle r_{u_{i},v_{j}}\right\rangle\) with \(u_{i}\in\mathcal{U}\) and \(v_{j}\in\mathcal{V}\), indicating the interaction of user \(u_{i}\) and item \(v_{j}\). If user \(u_{i}\) has interacted with item \(v_{j}\), then \(r_{u_{i},v_{j}}=1\), else \(r_{u_{i},v_{j}}=0\). Footnote 1: We discuss NOCDR to demonstrate the method capability in an extremely challenging setting. We note that our method can handle OCDR as well. Given the source domain \(S\) and target domain \(T\), we generate domain-specific interaction bipartite graphs \(\mathcal{G}^{S}=\left\langle\mathcal{U}^{S},\mathcal{V}^{S},\mathcal{R}^{S}\right\rangle\) and \(\mathcal{G}^{T}=\left\langle\mathcal{U}^{T},\mathcal{V}^{T},\mathcal{R}^{T}\right\rangle\), with disjoint node sets \(\mathcal{U}^{S}\cap\mathcal{U}^{T}=\varnothing\), and \(\mathcal{V}^{S}\cap\mathcal{V}^{T}=\varnothing\), i.e., \(\mathcal{S}\) and \(\mathcal{T}\) share no users and items. We further denote the edge sets \(\mathcal{R}^{S}\) and \(\mathcal{R}^{T}\) by two adjacency matrices \(\mathbf{A}^{S}\in\mathbb{R}^{|\mathcal{U}^{S}|\times|\mathcal{V}^{S}|}\) and \(\mathbf{A}^{T}\in\mathbb{R}^{|\mathcal{U}^{T}|\times|\mathcal{V}^{T}|}\), where each element \(a_{i,j}\) refers to the interaction status from user \(u_{i}\) to item \(v_{j}\). Fig. 2: Framework of DPMCDR, containing _Deterministic Graph Encoders_ and _Stochastic Latent Preference Identifiers_ as the forward modules, while the backward information is conveyed by _user-specific optimizers_, _domain-specific optimizers_ and _Distributional Preference Matching_. In this section, we focus on a cold-start setting. Given a new user with no interactions in either domain, we predict the item to be interacted with, by approaching _distributional preference matching_ between the source and the target domain. ## IV Methodology ### _Overview_ Fig. (2) provides an overview of the DPMCDR framework. DPMCDR utilizes _Deterministic Graph Encoders_ to aggregate homogeneous neighbors of each node from user-item interactions, deriving user/item-specific representations in each domain. Then, _Stochastic Latent Preference Identifiers_ approximate the posterior of latent user/item representations and domain-level preferences to capture preference continuity and commonality in user behaviors, presumed to be stable regardless of domains. Concerning cross-domain alignment, DPMCDR includes the _Distributional Preference Matching_, which parameterizes a Gaussian-based cross-domain invariant preference distribution using random groups of users from both domains. 
Two outcomes result from this: domain-level preference distributions for both source and target domains, reflecting cross-domain invariance from the respective perspectives of each domain. Model optimization in DPMCDR takes place through _user-specific optimizer_, _domain-specific optimizer_, and _distributional preference matching optimizer_. While the first two optimizers guide each representation with predictive information about NOCDR task, the last matching objective ensures invariant preferences are aligned across domains. ### _Deterministic Graph Encoder_ We first obtain initial user and item representations from the user-item interaction graph \(\mathcal{G}\) in each domain. Using the graph convolutional network (GCN) [33], we can compute user- and item-specific representations from their homogeneous neighbors. Unlike previous GCN-based works that directly aggregate one-hop2 neighbors [34, 7, 35, 36], potentially leading to inappropriate aggregation of heterogeneous neighbors [16, 6], we derive deterministic user- or item-specific representations that bridge two-hop neighbors in respective categories, i.e., connecting users with users. Note that the following steps are domain-agnostic, we omitted domain indicators \(S\) and \(T\) for simplicity. Using a \(k\)-layer GCN, the following node representations \(\mathbf{\tilde{E}}^{(k)}\) are computed: Footnote 2: In the interaction bipartite graph \(\mathcal{G}\), a user can have no one-hop neighbors other than an item. The homogeneous nodes can thus only be indirectly linked. \[\mathbf{\tilde{E}}^{(k)}\leftarrow\begin{cases}\mathbf{\Phi}^{(1)}(\mathbf{ \tilde{E}}^{(1)}\parallel\mathbf{E}),&\text{if }k=1\\ \mathbf{\Phi}^{(k)}(\mathbf{\tilde{E}}^{(k-1)}\parallel\mathbf{\tilde{E}}^{(k -2)}),&\text{if }k\geq 2\end{cases} \tag{1}\] \[\text{with }\mathbf{\tilde{E}}^{(1)}:=\begin{cases}\mathbf{\tilde{U}}^{(1)}= \overline{\mathbf{A}}\left(\mathbf{\Phi}^{(0)}\left(\overline{\mathbf{A}}^{\top}\mathbf{ U}\mathbf{W}_{u}\right)\right)\mathbf{W}_{u^{\prime}},\\ \mathbf{\tilde{V}}^{(1)}=\overline{\mathbf{A}}^{\top}\left(\mathbf{\Phi}^{(0)} \left(\overline{\mathbf{A}}\mathbf{V}\mathbf{W}_{v}\right)\right)\mathbf{W}_{v^{\prime}},\end{cases} \tag{2}\] where \(\mathbf{\tilde{E}}^{(k)}=\{\mathbf{\tilde{U}}^{(k)}\), \(\mathbf{\tilde{V}}^{(k)}\}\) and \(\mathbf{E}=\{\mathbf{U},\mathbf{V}\}\). \(\mathbf{U}\in\mathbb{R}^{|\mathcal{U}|\times d}\) and \(\mathbf{V}\in\mathbb{R}^{|\mathcal{V}|\times d}\) are random initial embeddings of users and items. \(d\) is the embedding size. \(\mathbf{W}_{u},\mathbf{W}_{v}\) are random weight parameters. \(\parallel\) is vector concatenation. \(\mathbf{\Phi}^{(k)}(\cdot)\) defines a non-linear transformation at the \(k\)-th layer with LeakyReLU activation function. \(\overline{\mathbf{A}}\) represents the normalized adjacency matrix3. Eq. (2) ensures that each node aggregates its two-hop homogeneous neighbors. Following [35], the outputs from all \(k\) layers are then concatenated to form \(\mathbf{\tilde{U}}\in\mathbb{R}^{|\mathcal{U}|\times k\cdot d}\) and \(\mathbf{\tilde{V}}\in\mathbb{R}^{|\mathcal{V}|\times k\cdot d}\) for all observed users and items, respectively. Footnote 3: \(\overline{\mathbf{A}}\) indicates the edges from \(u\) to \(v\), thus its transpose \(\overline{\mathbf{A}}^{\top}\) is from \(v\) to \(u\). 
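To make Eqs. (1)-(2) concrete, the sketch below shows one possible PyTorch implementation of the two-hop homogeneous aggregation and the layer-wise concatenation for a single domain. It is an illustrative reading of the equations rather than the released DPMCDR code: the symmetric normalization of \(\overline{\mathbf{A}}\), the handling of the \(k\geq 2\) recursion, the weight sharing inside \(\mathbf{\Phi}^{(k)}\), and all class and variable names are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHopGraphEncoder(nn.Module):
    """Sketch of the Deterministic Graph Encoder (Eqs. 1-2) for one domain."""

    def __init__(self, n_users, n_items, d, k=3):
        super().__init__()
        self.k = k
        self.U = nn.Parameter(0.01 * torch.randn(n_users, d))   # initial user embeddings
        self.V = nn.Parameter(0.01 * torch.randn(n_items, d))   # initial item embeddings
        self.W_u = nn.Linear(d, d, bias=False)
        self.W_up = nn.Linear(d, d, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)
        self.W_vp = nn.Linear(d, d, bias=False)
        # Phi^(k): per-layer non-linear transforms on concatenated inputs (2d -> d).
        self.phi = nn.ModuleList([nn.Linear(2 * d, d) for _ in range(k)])

    @staticmethod
    def normalize(adj):
        # One common normalization choice: D_u^{-1/2} A D_v^{-1/2} (an assumption here).
        du = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        dv = adj.sum(dim=0, keepdim=True).clamp(min=1.0)
        return adj / du.sqrt() / dv.sqrt()

    def forward(self, adj):                       # adj: |U| x |V| binary interaction matrix
        A = self.normalize(adj)
        # Eq. (2): two-hop aggregation that connects users with users and items with items.
        U1 = self.W_up(A @ F.leaky_relu(A.t() @ self.W_u(self.U)))
        V1 = self.W_vp(A.t() @ F.leaky_relu(A @ self.W_v(self.V)))
        # Eq. (1): layer-wise updates; the first layer mixes E^(1) with the initial embeddings E.
        us, vs = [U1], [V1]
        u_prev2, v_prev2 = self.U, self.V
        for layer in range(self.k):
            u_new = F.leaky_relu(self.phi[layer](torch.cat([us[-1], u_prev2], dim=1)))
            v_new = F.leaky_relu(self.phi[layer](torch.cat([vs[-1], v_prev2], dim=1)))
            u_prev2, v_prev2 = us[-1], vs[-1]
            us.append(u_new)
            vs.append(v_new)
        # Concatenate the k layer outputs into (k*d)-dimensional representations.
        return torch.cat(us[1:], dim=1), torch.cat(vs[1:], dim=1)

# Usage: adj = torch.zeros(n_users, n_items); adj[u, v] = 1.0 for every observed interaction.
```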
### _Stochastic Latent Preference Identifier_ Having obtained the deterministic representations for source and target domains, our next step is to uncover the underlying preference patterns existing across both domains, i.e., cross-domain invariant preference. Recall that our approach avoids discrete _explicit_ matching and instead aligns the users' preferences on a _implicit_ basis, by assuming user behaviors follow a continuous prior preference distribution \(p(\mathbf{z})\). This inherently enables the inference of latent correlations among users. Assume a random group of user representations \(\mathbf{\tilde{h}}\) are i.i.d discrete observations drawn from a generative process in the form of \(p(\mathbf{h},\mathbf{z})=p(\mathbf{z})p(\mathbf{h}|\mathbf{z})\) over latent variables \(\mathbf{z}\) for each domain. Generally, the true posterior \(p(\mathbf{z}|\mathbf{h})\) is intractable, thus needs to be approximated by a conditional posterior \(q(\mathbf{z}|\mathbf{\tilde{h}})\) with the amortized inference [37] (here \(\mathbf{\tilde{h}}=\{\tilde{\mathbf{h}}_{u_{n}}\}_{n=1:N}\in\mathbf{\tilde{U}}\))4. To improve model expressivity [38, 39], our implementation borrows from hierarchical priors \(\mathbf{z}=\{\mathbf{z}_{l}\}_{l=1:L}\). That is, we represent the prior by \(p(\mathbf{z})=\prod_{l}p(\mathbf{z}_{l}|\mathbf{z}_{<l})\) and corresponding posterior by \(q(\mathbf{z}|\mathbf{x})=\prod_{l}q(\mathbf{z}_{l}|\mathbf{z}_{<l},\mathbf{ \tilde{h}})\). In practice, we set \(L=2\) to account for latent user-specific representation(\(l=1\)) and domain-level preference(\(l=2\)). Upon domain-level preference \(q(\mathbf{z}_{2}|\mathbf{z}_{1},\mathbf{h})\), we can extract the cross-domain invariant preference (denoted as \(\mathbf{r}\)) and parameterize a predictive distribution over \(N\) observations \(\mathbf{\tilde{h}}\). Thus, the source and target domains would yield a prediction \(p(\mathbf{r}^{S}|\mathbf{z})\) and \(p(\mathbf{r}^{T}|\mathbf{z})\), respectively. This represents an interpretation of cross-domain invariant preference from the source/target domain perspective. Fig. (3) shows the graphical model. Footnote 4: The user representations \(\tilde{\mathbf{h}}_{u_{n}}\in\mathbf{\tilde{U}}\) are generated from Sec. (IV-B). Here we only show source-domain user inference, omitting \(S\) and \(T\) for brevity. We also obtain the latent item representations \(\mathbf{\tilde{z}}_{1,v}\) from \(\tilde{\mathbf{h}}_{v_{n}}\in\mathbf{\tilde{V}}\), following a similar procedure in Sec. (IV-C1). #### Iv-C1 Inference of \(q(\mathbf{z}_{1}|\mathbf{\tilde{h}})\) Assume \(p(\mathbf{z}_{1})\sim\mathcal{N}(0,\mathbf{I})\), we conclude a Gaussian posterior \(q(\mathbf{z}_{1}|\mathbf{\tilde{h}})=\mathcal{N}(\mathbf{\mu}_{1},\text{diag}( \mathbf{\Sigma}_{1}))\) by computing the sufficient statistics with respect to \(N\) users. Fig. 3: Graphical model for identifying the _Cross-domain Invariant Preference_ from source domain \(S\) and target domain \(T\), with random \(N\) users in both domains. 
The reparameterization trick [40] enables us to sample the latent user representation as \[\tilde{\mathbf{z}}_{1}=\boldsymbol{\mu}_{1}+\boldsymbol{\Sigma}_{1}\odot \boldsymbol{\epsilon}_{1},\text{ with }\boldsymbol{\epsilon}_{1}\sim\mathcal{N}(0,\mathbf{I}), \tag{3}\] Specifically, \(\boldsymbol{\mu}_{1}\) and \(\boldsymbol{\Sigma}_{1}\) are parameterized with a multilayer perceptron (MLP) \(f_{(\cdot)}\), where \(\boldsymbol{\mu}_{1}=f_{\boldsymbol{\mu}_{1}}(\tilde{\mathbf{h}})\in\mathbb{R }^{N\times k\cdot d}\), \(\boldsymbol{\Sigma}_{1}=f_{\boldsymbol{\Sigma}_{1}}(\tilde{\mathbf{h}})\in \mathbb{R}^{N\times k\cdot d}\). \(\odot\) denotes element-wise multiplication. #### Iii-C2 Inference of \(q(\mathbf{z}_{2}|\mathbf{z}_{1},\tilde{\mathbf{h}})\) Providing \(q(\mathbf{z}_{1}|\tilde{\mathbf{h}})\), we further capture the domain-level user preference \(p(\mathbf{z}_{2}|\mathbf{z}_{1})\) based on latent user representations \(\tilde{\mathbf{z}}_{1}\). For either domain, we approximate the posterior with \(q(\mathbf{z}_{2}|\mathbf{z}_{1},\tilde{\mathbf{h}})=\mathcal{N}(\boldsymbol{ \mu}_{2},\boldsymbol{\Sigma}_{2})\) using two MLPs on \(\tilde{\mathbf{z}}_{1}\). Akin to Eq. (3), we sample domain-level preference with the reparameterization trick: \[\tilde{\mathbf{z}}_{2}=\boldsymbol{\mu}_{2}+\boldsymbol{\Sigma}_{2}\odot \boldsymbol{\epsilon}_{2},\text{ with }\boldsymbol{\epsilon}_{2}\sim\mathcal{N}(0, \mathbf{I}), \tag{4}\] where \(\boldsymbol{\mu}_{2}=f_{\boldsymbol{\mu}_{2}}(\tilde{\mathbf{z}}_{1})\in \mathbb{R}^{N\times d}\), \(\boldsymbol{\Sigma}_{2}=f_{\boldsymbol{\Sigma}_{2}}(\tilde{\mathbf{z}}_{1}) \in\mathbb{R}^{N\times d}\). ### _Distributional Preference Matching_ #### Iii-D1 Parameterizing \(p(\mathbf{r}|\mathbf{z})\) With \(\tilde{\mathbf{z}}_{2}^{S}\) and \(\tilde{\mathbf{z}}_{2}^{T}\) estimated from both domains, we define two conditional predictive distributions, \(p(\mathbf{r}^{S}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\) and \(p(\mathbf{r}^{T}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\), to model the cross-domain invariant preference \(\mathbf{r}^{S}\) and \(\mathbf{r}^{T}\), described by source and target domain, respectively. Following, we create a shared latent space to align such two interpretations, by minimizing the Jenson-Shannon (JS) divergence between them5. Footnote 5: The symmetric JS divergence allows a bi-directional transfer. It means that the cross-domain transfer can be performed from source domain \(S\) to target domain \(T\), and from target domain \(T\) to source domain \(S\) as well. Given latent domain-level preferences \(\tilde{\mathbf{z}}_{2}^{S}\) and \(\tilde{\mathbf{z}}_{2}^{T}\), we distinguish source-driven interpretations and target-driven ones using the attention mechanism [41] by varying the queries. In the source perspective, we first apply a linear transformation on the concatenation of \(\tilde{z}_{2}^{S}\) and \(\tilde{z}_{2}^{T}\) to form a source-driven representation \(\tilde{\mathbf{r}}_{u}^{S}=\boldsymbol{W}_{\hat{\mathbf{r}}}(\tilde{z}_{2}^{S }\,\|\,\tilde{z}_{2}^{T})+\boldsymbol{b}_{\hat{\mathbf{r}}}\in\mathbb{R}^{N \times k\cdot d}\). Then, the source-driven correlations within \(\tilde{\mathbf{r}}_{u}^{S}\) are encoded by a multi-head self-attention. Similarly, reversing the concatenation would produce a target-driven \(\tilde{\mathbf{r}}_{u}^{T}\). 
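Concretely, the hierarchical sampling of Eqs. (3)-(4) amounts to two reparameterized Gaussian draws. The following is a schematic PyTorch version, not the authors' code: the \(f_{\mu}\)/\(f_{\Sigma}\) networks are single linear layers here, and a softplus keeps the scales positive (the activation choice is an assumption of this sketch).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticPreferenceIdentifier(nn.Module):
    """Samples latent user representations z1 (Eq. 3) and the domain-level
    preference z2 (Eq. 4) for one domain from deterministic user representations."""

    def __init__(self, in_dim, d):
        super().__init__()
        self.f_mu1 = nn.Linear(in_dim, in_dim)
        self.f_sigma1 = nn.Linear(in_dim, in_dim)
        self.f_mu2 = nn.Linear(in_dim, d)
        self.f_sigma2 = nn.Linear(in_dim, d)

    @staticmethod
    def reparameterize(mu, sigma):
        # z = mu + sigma * eps, with eps ~ N(0, I) (reparameterization trick).
        return mu + sigma * torch.randn_like(sigma)

    def forward(self, h):                         # h: (N, in_dim) group of user representations
        mu1 = self.f_mu1(h)
        sigma1 = F.softplus(self.f_sigma1(h))     # positive scales (assumed activation)
        z1 = self.reparameterize(mu1, sigma1)     # Eq. (3): latent user representations
        mu2 = self.f_mu2(z1)
        sigma2 = F.softplus(self.f_sigma2(z1))
        z2 = self.reparameterize(mu2, sigma2)     # Eq. (4): domain-level preference
        return (z1, mu1, sigma1), (z2, mu2, sigma2)
```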
Following, we parameterize \(p(\mathbf{r}^{S}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\) and \(p(\mathbf{r}^{T}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\) by assuming their Gaussian forms and computing the sufficient statistics \(\{\boldsymbol{\mu}_{v}^{S},\boldsymbol{\Sigma}_{v}^{S}\}\) and \(\{\boldsymbol{\mu}_{r}^{T},\boldsymbol{\Sigma}_{r}^{T}\}\), where \(\boldsymbol{\mu}_{v}^{S}=f_{\boldsymbol{\mu}_{v}^{S}}(\tilde{\mathbf{r}}_{u}^ {S})\), \(\boldsymbol{\Sigma}_{r}^{S}=f_{\boldsymbol{\Sigma}_{r}^{S}}(\tilde{\mathbf{r}} _{u}^{S})\). Note that \(\boldsymbol{\mu}_{r}^{T}\) and \(\boldsymbol{\Sigma}_{r}^{T}\) are computed in the same way. #### Iii-D2 Alignment of \(p(\mathbf{r}|\mathbf{z})\) Further, aligning two predictive distributions, i.e., \(p(\mathbf{r}^{S}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\) and \(p(\mathbf{r}^{T}|\tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\) is performed in light of minimizing the JS divergence, to ensure consistent preference in the latent space. It is feasible as their concrete function forms have been specified. We denote the _distributional preference matching_ objective by \(\mathcal{L}_{m}\), such that \[\mathcal{L}_{m}=\frac{1}{2}\left\{D_{KL}\left(p(\mathbf{r}^{S}| \tilde{\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T}),p(\mathbf{r}^{T}|\tilde {\mathbf{z}}_{2}^{S},\tilde{\mathbf{z}}_{2}^{T})\right)\right. \tag{5}\] \[\left.+D_{KL}\left(p(\mathbf{r}^{T}|\tilde{\mathbf{z}}_{2}^{S}, \tilde{\mathbf{z}}_{2}^{T}),p(\mathbf{r}^{S}|\tilde{\mathbf{z}}_{2}^{S},\tilde{ \mathbf{z}}_{2}^{T})\right)\right\}\] where \(D_{KL}(\cdot,\cdot)\) is the Kullback-Leibler (KL) divergence between two distributions. Matching the predictive distribution from user groups eliminates the need for specify accurate _explicit_ matching, which is beneficial for cold-start NOCDR. ### _Predictive Optimization_ Having established the preference matching process, we now detail the objectives constraining latent user/item representations and domain-level preferences upon predicting the user-item interactions. Our predictive objectives are inspired by Variational Information Bottleneck (VIB) [18] that derive the latent representations as minimal sufficient statistics for predicting user-item interactions. Specifically, we present _user-specific optimizers_ to constrain the user- and item-specific representations, i.e. \(\tilde{\mathbf{z}}_{1}\); and _domain-specific optimizer_ to constrain the domain-level preference representations, i.e., \(\tilde{\mathbf{z}}_{2}\). #### Iii-E1 User-specific Optimizer We expect the latent user representations to (a) remove irrelevant features from user and item representations, and (2) preserve maximum information provided by user-item interactions. Intuitively, this motivates the use of VIB in optimizing the latent user-specific \(\tilde{\mathbf{z}}_{1,u}\) and item-specific representations \(\tilde{\mathbf{z}}_{1,v}\). We set up a _user-specific optimizer6_ for source domain \(S\) and target domain \(T\), respectively. Below we illustrate \(S\) only for brevity. The VIB-informed objective is formulated as Footnote 6: _user-specific optimizer_ optimizes for latent node-level representations in Sec. (IV-C1). Thus, item representations are actually handled as well. 
\[\max\left(I(\tilde{\mathbf{z}}_{1,u}^{S};\,\boldsymbol{A}^{S})+I(\tilde{\mathbf{z}}_{1,v}^{S};\,\boldsymbol{A}^{S})\right), \tag{6}\] \[\text{s.t.}\,\min\left(I(\boldsymbol{U}^{S};\,\tilde{\mathbf{z}}_{1,u}^{S})+I(\boldsymbol{V}^{S};\,\tilde{\mathbf{z}}_{1,v}^{S})\right)\] where \(I(\cdot\,;\cdot)\) measures the mutual information. \(\boldsymbol{U}^{S}\), \(\boldsymbol{V}^{S}\) are initial embeddings of users and items in the source domain. \(\boldsymbol{A}^{S}\) is the interaction adjacency matrix in \(\mathcal{G}^{S}\), approximating the information of interest from user-item interactions in the source domain. The details are covered in Eq. (9). Moreover, the latent user representation \(\tilde{\mathbf{z}}_{1,u}^{S}\) and item representation \(\tilde{\mathbf{z}}_{1,v}^{S}\) are independent, as they come from different generative processes. This enables us to further transform \(I(\tilde{\mathbf{z}}_{1,u}^{S};\,\boldsymbol{A}^{S})+I(\tilde{\mathbf{z}}_{1,v}^{S};\,\boldsymbol{A}^{S})\) into \(I(\tilde{\mathbf{z}}_{1,u}^{S},\tilde{\mathbf{z}}_{1,v}^{S};\,\boldsymbol{A}^{S})\) with the mutual information chain rule [16]. Equivalently, Eq. (6) can be translated to a constrained optimization objective denoted by \(\mathcal{L}_{u}^{S}\), such that \[\mathcal{L}_{u}^{S}=-\Big(\underbrace{I(\tilde{\mathbf{z}}_{1,u}^{S},\tilde{\mathbf{z}}_{1,v}^{S};\,\boldsymbol{A}^{S})}_{\text{in-domain interactions}}-\underbrace{\beta_{u}^{S}\cdot I(\boldsymbol{U}^{S};\,\tilde{\mathbf{z}}_{1,u}^{S})}_{\text{users}}-\underbrace{\beta_{v}^{S}\cdot I(\boldsymbol{V}^{S};\,\tilde{\mathbf{z}}_{1,v}^{S})}_{\text{items}}\Big), \tag{7}\] where \(\beta_{u}^{S},\beta_{v}^{S}\) are Lagrangian multipliers for users and items, respectively. The first term requires the latent representations to reconstruct observed user-item interactions. Following [45], we can parameterize the conditional likelihood as \[\begin{split} I(\mathbf{\tilde{z}}_{1,u}^{S},\mathbf{\tilde{z}}_{1,v}^{S};\;\mathbf{A}^{S})&\geq\mathbb{E}_{Q_{U}\,Q_{V}}\left[\log p(\mathbf{A}^{S}|\mathbf{\tilde{z}}_{1,u}^{S},\mathbf{\tilde{z}}_{1,v}^{S})\right]\\ &=\sum_{u_{i}}^{U^{S}}\sum_{v_{j}}^{V^{S}}\log g(\mathbf{\tilde{z}}_{1,u_{i}}^{S},\mathbf{\tilde{z}}_{1,v_{j}}^{S})\end{split} \tag{9}\] where \(Q_{U},Q_{V}\) abbreviate \(q(\mathbf{\tilde{z}}_{1,u}^{S}|\mathbf{\tilde{h}}_{u}^{S})\) and \(q(\mathbf{\tilde{z}}_{1,v}^{S}|\mathbf{\tilde{h}}_{v}^{S})\). \(\mathbf{\tilde{z}}_{1,u_{i}}^{S}\) and \(\mathbf{\tilde{z}}_{1,v_{j}}^{S}\) are latent representations of user \(u_{i}\) and item \(v_{j}\). \(g(\cdot,\cdot)\) can be any differentiable function [45] in terms of the specific prediction task, e.g., inner product. In this study, we predict the next interaction item and adopt binary cross entropy (BCE) as the objective. Since Eqs. (7), (8), and (9) also hold for the target domain \(T\), we similarly employ \(\mathcal{L}_{u}^{T}\) to optimize for target domain \(T\). 
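As a concrete reading of how \(\mathcal{L}_{u}^{S}\) is computed in practice, the following sketch combines the BCE reconstruction of Eq. (9) (with \(g\) taken as the inner product) with KL-divergence surrogates for the two compression terms. Writing the compression terms as KL divergences towards the \(\mathcal{N}(0,\mathbf{I})\) prior is a standard VIB surrogate and an assumption of this sketch, as are all function names.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions, averaged over rows."""
    var = sigma ** 2
    return 0.5 * (var + mu ** 2 - 1.0 - torch.log(var + 1e-8)).sum(-1).mean()

def user_specific_loss(z1_u, mu_u, s_u, z1_v, mu_v, s_v, adj, beta_u=1.0, beta_v=1.0):
    """Surrogate of Eq. (7) for one domain.

    z1_u, z1_v : sampled latent user/item representations (Eq. 3)
    mu_*, s_*  : their posterior means and scales
    adj        : binary user-item interaction matrix A (|U| x |V|)
    """
    logits = z1_u @ z1_v.t()                                   # g(., .) = inner product
    recon = F.binary_cross_entropy_with_logits(logits, adj)    # Eq. (9) with BCE
    return recon + beta_u * kl_to_standard_normal(mu_u, s_u) \
                 + beta_v * kl_to_standard_normal(mu_v, s_v)
```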
#### Iv-B2 Domain-specific Optimizer NOCDR targets developing transferable user representations that are effective in another domain. While derived from random groups of users, the _domain-level preference_ ought to encode the behavioral universality of the user population, implying stronger generalization for cross-domain predictions. To this end, we apply another two VIB objectives for \(\mathbf{\tilde{z}}_{2}^{S}\) and \(\mathbf{\tilde{z}}_{2}^{T}\). In doing so, the source representation \(\mathbf{\tilde{z}}_{2}^{S}\) is optimized to be a compact form of \(\mathbf{U}^{S}\) while also being able to predict target domain interactions \(\mathbf{A}^{T}\) accurately; and vice versa for the target representation \(\mathbf{\tilde{z}}_{2}^{T}\). Now consider the joint optimization7 of both domains. Denoting the interaction information by \(\mathbf{A}^{ST}\), the joint optimization has the following VIB-based objective: Footnote 7: As mentioned in Sec. (IV-D2), this can be viewed as a symmetric bi-directional transfer. \[\begin{split}&\max\left(I(\mathbf{\tilde{z}}_{2}^{S};\mathbf{A}^{ST})+I(\mathbf{\tilde{z}}_{2}^{T};\mathbf{A}^{ST})\right),\\ &\text{s.t.}\ \min\left(I(\mathbf{U}^{S};\,\mathbf{\tilde{z}}_{2}^{S})+I(\mathbf{U}^{T};\,\mathbf{\tilde{z}}_{2}^{T})\right)\end{split} \tag{10}\] where \(\mathbf{A}^{ST}=\begin{bmatrix}\mathbf{A}^{S}&0\\ 0&\mathbf{A}^{T}\end{bmatrix}\) is a block-diagonal adjacency matrix. Similarly, we rewrite \(I(\mathbf{\tilde{z}}_{2}^{S};\mathbf{A}^{ST})+I(\mathbf{\tilde{z}}_{2}^{T};\mathbf{A}^{ST})\) to \(I(\mathbf{\tilde{z}}_{2}^{S},\mathbf{\tilde{z}}_{2}^{T};\mathbf{A}^{ST})\), provided that \(\mathbf{\tilde{z}}_{2}^{S}\) and \(\mathbf{\tilde{z}}_{2}^{T}\) are independent derivations. We thus have an equivalent constrained optimization objective for the _domain-level preference_ as \[\mathcal{L}_{d}=-\Big(\underbrace{I(\mathbf{\tilde{z}}_{2}^{S},\mathbf{\tilde{z}}_{2}^{T};\;\mathbf{A}^{ST})}_{\text{cross-domain interactions}}-\underbrace{\beta_{2}^{S}\cdot I(\mathbf{U}^{S};\,\mathbf{\tilde{z}}_{2}^{S})}_{\text{source users}}-\underbrace{\beta_{2}^{T}\cdot I(\mathbf{U}^{T};\,\mathbf{\tilde{z}}_{2}^{T})}_{\text{target users}}\Big), \tag{11}\] where \(\beta_{2}^{S},\beta_{2}^{T}\) are Lagrangian multipliers. Variational approximations of the two compression terms in Eq. (11) are obtained in the same way as their counterparts in Eq. (7): \[\begin{split} I(\mathbf{U}^{S};\,\mathbf{\tilde{z}}_{2}^{S})&+I(\mathbf{U}^{T};\,\mathbf{\tilde{z}}_{2}^{T})\\ &\leq D_{KL}\left(q(\mathbf{\tilde{z}}_{2}^{S}|\mathbf{U}^{S}),\,p(\mathbf{z}_{2}^{S}|\mathbf{z}_{1,u}^{S})\right)\\ &\qquad\qquad+D_{KL}\left(q(\mathbf{\tilde{z}}_{2}^{T}|\mathbf{U}^{T}),\,p(\mathbf{z}_{2}^{T}|\mathbf{z}_{1,u}^{T})\right)\\ &=D_{KL}\left(q(\mathbf{\tilde{z}}_{2}^{S}|\mathbf{\tilde{z}}_{1,u}^{S},\mathbf{\tilde{h}}_{u}^{S}),\,p(\mathbf{z}_{2}^{S}|\mathbf{z}_{1,u}^{S})\right)\\ &\qquad\qquad+D_{KL}\left(q(\mathbf{\tilde{z}}_{2}^{T}|\mathbf{\tilde{z}}_{1,u}^{T},\mathbf{\tilde{h}}_{u}^{T}),\,p(\mathbf{z}_{2}^{T}|\mathbf{z}_{1,u}^{T})\right)\end{split} \tag{12}\] As aforementioned, we expect the latent _domain-level_ preference to function effectively in both domains, i.e., source users can be adapted to the target domain, and vice versa. We thus empirically approximate the interaction term \(I(\mathbf{\tilde{z}}_{2}^{S},\mathbf{\tilde{z}}_{2}^{T};\,\mathbf{A}^{ST})\) as in Eq. (9), using the predictions of users with all the items in both domains, \[\begin{split} I(\mathbf{\tilde{z}}_{2}^{S},\mathbf{\tilde{z}}_{2}^{T};\,\mathbf{A}^{ST})\geq&\sum_{v_{j}}^{\mathbf{A}^{ST}}\{\sum_{u_{i}}^{U^{S}}\log g(\mathbf{\tilde{z}}_{2,u_{i}}^{S},\mathbf{\tilde{z}}_{1,v_{j}})\\ &\qquad+\sum_{u_{i}}^{U^{T}}\log g(\mathbf{\tilde{z}}_{2,u_{i}}^{T},\mathbf{\tilde{z}}_{1,v_{j}})\}\end{split} \tag{13}\] Analogously, the differentiable function \(g(\cdot,\cdot)\) is set to BCE for predicting the user-item interactions. Summarizing all objectives, we end up with the following, \[\mathcal{L}=\mathcal{L}_{m}+\mathcal{L}_{d}+\mathcal{L}_{u}^{S}+\mathcal{L}_{u}^{T} \tag{14}\] where \(\mathcal{L}_{m}\) describes the _distributional preference matching_ of Sec. (IV-D), \(\mathcal{L}_{d}\) specifies the cross-domain predictive objective of the _domain-specific optimizer_, and \(\mathcal{L}_{u}^{S}\) and \(\mathcal{L}_{u}^{T}\) refer to the _user-specific optimizer_ for source and target users, respectively. 
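Since all the distributions involved are diagonal Gaussians, both the matching objective \(\mathcal{L}_{m}\) of Eq. (5) and the overall loss of Eq. (14) are simple to assemble: the KL divergence between two diagonal Gaussians has a closed form, and Eq. (5) is its symmetrized average. The snippet below is a minimal sketch of that assembly; the function names are ours and do not come from a released implementation.

```python
import torch

def kl_diag_gaussians(mu_p, s_p, mu_q, s_q):
    """Closed-form KL( N(mu_p, diag(s_p^2)) || N(mu_q, diag(s_q^2)) )."""
    var_p, var_q = s_p ** 2, s_q ** 2
    kl = 0.5 * (torch.log((var_q + 1e-8) / (var_p + 1e-8))
                + (var_p + (mu_p - mu_q) ** 2) / (var_q + 1e-8) - 1.0)
    return kl.sum(-1).mean()

def matching_loss(mu_s, s_s, mu_t, s_t):
    """Eq. (5): symmetrized KL between the source- and target-driven predictive
    distributions of the cross-domain invariant preference."""
    return 0.5 * (kl_diag_gaussians(mu_s, s_s, mu_t, s_t)
                  + kl_diag_gaussians(mu_t, s_t, mu_s, s_s))

def total_loss(loss_m, loss_d, loss_u_source, loss_u_target):
    """Eq. (14): overall training objective."""
    return loss_m + loss_d + loss_u_source + loss_u_target
```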
#### Iv-B3 Time Complexity The time complexity for one domain is given by: 1) The Deterministic Graph Encoder (Sec. (IV-B)): \(\mathcal{O}(k\times(|\mathcal{U}|+|\mathcal{V}|))\), where \(|\mathcal{U}|\) and \(|\mathcal{V}|\) are the numbers of user/item nodes in one domain and \(k\) is the number of layers; 2) The item-specific inference (Sec. (IV-C1)): \(\mathcal{O}(|\mathcal{V}|)\); 3) The user-specific inference (Sec. (IV-C1)): \(\mathcal{O}(N)\), where \(N\) is the sampling size of the random user group; 4) The domain-level preference inference (Sec. (IV-C2)): \(\mathcal{O}(N)\); 5) The multi-head attention (Sec. (IV-D2)): \(\mathcal{O}((k\times d)^{2}\times h)\), where \(h\) is the number of heads; 6) The distributional preference matching (Sec. (IV-D)): \(\mathcal{O}(N)\). Denote the node number of the source domain by \(|S|\), and that of the target domain by \(|T|\). Summing all computations up gives us the overall asymptotic upper bound as \(\mathcal{O}(k\cdot(|S|^{2}+|T|^{2})+6N+|V^{S}|+|V^{T}|+2h\times k^{2}d^{2})\) for two domains. The complexities of the _Stochastic Latent Preference Identifier_ and _Distributional Preference Matching_ scale linearly with the number of samples, and their practical computation can take advantage of the parallelization enabled by a GPU. ## V Experiments We will examine the following research questions: * Can DPMCDR perform NOCDR for cold-start users and outperform other state-of-the-art methods? * Can DPMCDR also demonstrate competitive performance with state-of-the-art methods in OCDR settings? * How does each module of DPMCDR contribute to model performance? * How do model parameters affect experimental results? \begin{table} \end{table} TABLE I: Statistics of the eight Amazon domains forming the four CDR scenarios (numbers of users, non-overlapping users, items, and test-set sizes). ### _Experimental Setup_ #### Iv-A1 Datasets We conduct extensive experiments on a large-scale real-world dataset: Amazon Review Data8. We evaluate DPMCDR and comparison methods for eight categories in this dataset. In our experiments, we pair them up and construct four CDR scenarios: Cellphone-Electronic, Cloth-Sport, Game-Video, and Music-Movie. Tab. (I) details the information of each scenario. Footnote 8: [https://nijianmo.github.io/amazon/index.html](https://nijianmo.github.io/amazon/index.html) #### Iv-A2 Methods for comparison We include eight baselines for empirical comparisons, including both single-domain recommendation (SDR) and cross-domain recommendation (CDR) models. For SDR models, we include three methods: **MF**, **Caser[46]**, and **IDNP[34]**. None of these methods involves cross-domain adaptation, so we evaluate them by training in one domain and conducting cold-start prediction in another domain. 
We further introduce five state-of-the-art models proposed to tackle the CDR task: **CMF**[4], **EMCDR**[8] (with two variants implemented with different encoders: **EMCDR-MF** and **EMCDR-NGCF**), **DisenCDR**[6], **PTUPCDR**[7], and **CDRIB**[16]. The reported results of CDR models are obtained upon performing cross-domain adaptation during training. DisenCDR and CDRIB support bi-directional transfer, as DPMCDR does. They obtain the results for both domains within a single training process, whilst other CDR methods need to be trained twice for different source domains. #### Iv-A3 Evaluation Setting Following common practices [6], items with fewer than 10 interactions and users with fewer than 5 interactions are filtered out. Recall that we focus on NOCDR predictions for cold-start users; we use non-overlapping users for training. Following the full ranking principle [47], 20% of overlapping users are randomly sampled to form the validation and testing sets, aligning with conventional settings in the baselines [16, 6]. Our evaluation simulates a cold-start scenario. The ground-truth user-item interactions of evaluated users are available for measuring performance only. #### Iv-A4 Metrics Since the prediction concerns the items that will be interacted with, we compare all the models in terms of three commonly-used metrics: MRR (Mean Reciprocal Rank), HR (Hit Ratio)@K with K\(=\{10,20,30\}\), and Normalized Discounted Cumulative Gain (NDCG)@K with K varying in \(\{10,20,30\}\). #### Iv-A5 Implementation All models are trained on an internal server equipped with two Intel Xeon E5-2697 v2 CPUs, a single NVIDIA RTX A5000 GPU and 768 GB of memory. Each method in comparison is implemented based on the official open-source code and is adapted for our evaluation setting. We use the best (hyper-)parameters of all baseline models as given in the official implementation/description. We report the results of all models in runs with five random seeds to minimize the impact of random noise. To implement DPMCDR, we fix \(k\), the number of layers of the _Deterministic Graph Encoder_, to 3. For \(f_{\mathbf{\mu}_{1}}\), we utilize a 1-layer MLP with Leaky-ReLU activation function, while for \(f_{\mathbf{\Sigma}_{1}}\), we employ a 1-layer MLP with Softmax activation function. Furthermore, \(f_{\mathbf{\mu}_{2}}\), \(f_{\mathbf{\Sigma}_{2}}\), \(f_{\mathbf{\mu}_{2}^{S}}\), \(f_{\mathbf{\Sigma}_{2}^{S}}\), \(f_{\mathbf{\mu}_{2}^{T}}\), and \(f_{\mathbf{\Sigma}_{2}^{T}}\) are implemented using 1-layer MLPs with ReLU activation function. The multi-head attention uses 2 heads. The dropout rate is searched within {0, 0.1, 0.2, 0.3, 0.4, 0.5} and is set to 0.3 for the best performance. We search for the embedding size \(d\) within {16, 32, 64, 128}. We choose the best warmup epoch from {10, 20, 30, 40, 50}. For simplicity, all Lagrangian multipliers \(\beta\) are set to the same value, and they are searched for from {0.5, 1, 1.5, 2, 2.5, 3}. We search the number of randomly selected users \(N\) in the _Stochastic Latent Preference Identifier_ from {64, 128, 256, 512, 1024}. The batch size is fixed to 1024. We use Adam with a learning rate of 1e-3 and weight decay of 1e-6. DPMCDR converges on all four tasks after at most 70 training epochs. \begin{table} \end{table} TABLE II: Overall performance (MRR, NDCG@K, and HR@K) of all methods on the four CDR scenarios. ### _Performance Analysis (RQ 1)_ Table (II) summarizes the overall performance for four CDR datasets. 
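As a reference for how these ranking metrics are computed, the sketch below assumes a full-ranking protocol with one held-out ground-truth item per evaluated user; this is a simplification for illustration and not the authors' evaluation script.

```python
import numpy as np

def rank_of_target(scores, target):
    """1-based rank of the ground-truth item among all candidate items (full ranking)."""
    return int((scores > scores[target]).sum()) + 1

def evaluate(all_scores, targets, ks=(10, 20, 30)):
    """MRR, HR@K and NDCG@K when each evaluated user has a single ground-truth item."""
    ranks = np.array([rank_of_target(s, t) for s, t in zip(all_scores, targets)])
    metrics = {"MRR": float((1.0 / ranks).mean())}
    for k in ks:
        hit = ranks <= k
        metrics[f"HR@{k}"] = float(hit.mean())
        # With a single relevant item, NDCG@K reduces to 1/log2(rank + 1) when the item is ranked within K.
        metrics[f"NDCG@{k}"] = float(np.where(hit, 1.0 / np.log2(ranks + 1), 0.0).mean())
    return metrics

# all_scores: per-user score vectors over all items; targets: index of each user's held-out item.
```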
All metrics with the best results are **bolded**; those with the second-best results are _underlined_. DPMCDR consistently outperforms all the methods in comparison with all evaluation metrics on four CDR tasks. Moreover, the model performances support our design choices, not only for DPMCDR but also across a range of baselines. Our discussion is framed by the following perspectives. _Powerful Graph Encoder matters:_ As for SDR methods, MF is unable to capture user preferences accurately in all scenarios. Caser and IDNP organize consecutive user-item interaction records as sequences, and leverage sequential patterns to capture user preferences therefrom. For CDR methods, EMCDR-NGCF exhibits substantial improvements over EMCDR-MF despite following the same adaptation procedure, because they employ different embedding backbones. It is widely recognized that NGCF is more expressive than MF since NGCF encodes higher-order connectivity from user-item interaction graphs using powerful graph neural networks. This also motivates the use of GCN in DPMCDR, which encodes deterministic user/item representations from interaction data. _Latent User Correlations benefit Cross-domain Prediction:_ Ideally, it is beneficial to exploit and identify the preference commonality shared by user behaviors. The probabilistic modeling may provide a pathway, particularly with a limited size of interactions. Observe that Caser requires extensive interaction sequences to work effectively, like with Music and Movie review data. IDNP, on the other hand, assumes latent correlations between sequences and derives the intrinsic user preferences from a functional perspective. In the case of limited interactions (e.g., Game-Video), CMF connects the general user embedding between domains to reduce individual biases, leading to more robust results than EMCDR and PTUPCDR. For example, CMF improves NDCG@30 by 2.99% over EMCDR-MF in the Game and 2.14% in Video. There might be a reason for this since EMCDR and PTUPCDR focus on individual user representations, without taking into consideration correlations between users. In DPMCDR, we hierarchically model this underlying preference commonality with probabilistic domain-level and cross-domain preferences. _Variational Information Bottleneck improves Generalization:_ Based on information theory, DisenCDR and CRDIB are state-of-the-art CDR approaches tailored to sift out irrelevant information for effective transfer, with impressive performance gaps over others. Similarly, DPMCDR also includes VIB in the predictive objectives for stronger generalization. Moreover, DPMCDR extends VIB to the optimization of latent domain-level representations, which further improves expressivity. _Distributional Implicit Matching outperforms Explicit Mapping:_ Again, NOCDR does not offer reliable explicit correspondence in the two domains. Unfortunately, this renders the inability of previous approaches relying on deterministic _explicit_ mapping. Instead, DPMCDR aligns the cross-domain invariant preferences of both domains derived by random groups of users, with _distributional preference matching_. It does not require and never imposes the alignment on an individual basis. Therefore, DPMCDR consistently and significantly outperforms state-of-the-art in all metrics with NOCDR. ### _Overlapped Scenario (RQ 2)_ Having demonstrated its performance in NOCDR, we now investigate whether DPMCDR can handle OCDR with comparable ability. 
During training, we consider a partial-overlapping setting in which 85% of the users do not overlap between domains and 15% do. We examine how well DPMCDR performs against consistent second-best performers in NOCDR, i.e., CDRIB and DisenCDR, under two cross-domain scenarios (Cellphone-Electronic and Cloth-sport) in Table (III). Note that CDRIB and DisenCDR are methods designed for OCDR with bespoke components that leverage explicit shared information between domains, whereas DPMCDR does not. However, we find that DPMCDR, which treats all users equally regardless of overlap, consistently outperforms CDRIB and DisenCDR. For example, DPMCDR improves MRR by 7.03% in Cellphone and 13.68% in Electronic, demonstrating superior performance. Our empirical results suggest that DPMCDR is effective in capturing cross-domain knowledge and generating accurate recommendations for both OCDR and NOCDR. ### _Ablation Studies (RQ 3)_ We have discussed the four highlighted designs in DPMCDR in terms of overall performance. We now present ablation studies to assess the utility of each. Specifically, (A) contains _Deterministic Graph Encoder_ only, with all other components are excluded. (B) further introduces _user-level Stochastic \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c|}{Cellphone} & \multicolumn{3}{c}{Electronic} \\ \cline{2-7} & MRR & NDCG@30 & HR@30 & MRR & NDCG@30 & HR@30 \\ \hline **DisenCDR** & 0.0619 & 0.079 & 0.1743 & 0.0726 & 0.0956 & 0.2083 \\ **CDRIB** & 0.0726 & 0.1130 & 0.2880 & 0.0950 & 0.1381 & 0.3246 \\ **DPMCDR** & **0.0777** & **0.1195** & **0.3020** & **0.1080** & **0.1548** & **0.3560** \\ \hline Improv(\(\times\)) & **7.03\%** & **5.70\%** & **4.87\%** & **13.68\%** & **12.19\%** & **9.35\%** \\ \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c|}{Cloth} & \multicolumn{3}{c}{Sport} \\ \cline{2-7} & MRR & NDCG@30 & HR@30 & HR@30 & MRR & NDCG@30 & HR@30 \\ \hline **DisenCDR** & 0.0242 & 0.0344 & 0.1222 & 0.0256 & 0.0495 & 0.1740 \\ **CDRIB** & 0.0464 & 0.0702 & 0.1825 & 0.0313 & 0.0528 & 0.1485 \\ **DPMCDR** & **0.0468** & **0.0713** & **0.1847** & **0.0396** & **0.0637** & **0.1761** \\ \hline Improve(\(\times\)) & **0.97\%** & **1.58\%** & **1.22\%** & **19.61\%** & **20.74\%** & **1.29\%** \\ \hline \hline \end{tabular} \end{table} TABLE III: partial-overlapped CDR scenarios. _Latent Preference Identifier_, i.e., \(q(\mathbf{z}_{1}|\tilde{\mathbf{h}})\), and _user-specific optimizer_. (C) employ complete _Stochastic Latent Preference Identifier_ and _Distributional Preference Matching_, but without two VIB-based predictive _user-specific optimizers_ and _domain-specific optimizers_. (D) keeps all components of DPMCDR except for _domain-specific optimizers_. Here we only showcase the results for Game-Video due to page limitations. Fig. (4) compares the NDCG and HR of four variants and full DPMCDR. Variant (A) reports 6.32% in NDCG@30 and 17.96% in HR@30, outperforming most CDR models (except for CDRIB) and all SDR baselines. Including _user-specific Stochastic Latent Preference Identifier_ (variant B) would give rise to performances. The worst performer of all variants is variant (C) since no predictive objectives are involved. The domain-level preference cannot be reasonably derived without ground-truth observations. In comparison, variant (D) injects additional _user-specific optimizers_, considerably improving performance over variant (C). 
With all modules equipped, DPMCDR (full) achieves the best performance on all metrics. ### _Parameter sensitivity (RQ 3)_ We evaluate four important hyperparameters, i.e., embedding size \(d\), warm-up epoch, sampling size \(N\), and Lagrangian multiplier \(\beta\), to see how varying these values would affect the performance of DPMCDR. We only show the results of Cloth-Sport and Cellphone-Electronic given limited pages. As shown in Fig. (5), we search for the best embedding size within {16, 32, 64, 128}. In the Cellphone-Electronic scenario, all metrics increase with embedding size \(d\) and reach an optimum when \(d\)=128. Concerning Cloth-Sport, NDCG and HR show fluctuations but perform best when \(d\)=128. It is still only marginally better than \(d\)=32. Increasing \(d\) may improve performance but at the cost of greater computational costs. Warm-up epochs refer to the period before _distributional preference matching_ joins model training. Optimizing for preference matching can be detrimental if latent representations are not strong enough. We search the best warm-up epochs from {0, 10, 20, 30, 40, 50}. We find that Different tasks lead to different peak performance epochs. Nevertheless, the model retains a higher performance after warming up than without, underlining the need for cross-domain matching. As for sampling size of groups \(N\) in _Stochastic Latent Preference Identifier_, we vary \(N\) within {64, 128, 256, 512, 1024}. Increasing \(N\) from 64 to 256 leads to fluctuations in HR@30 and NDCG30. However, we observe an improvement in most results if we enlarge the sampling size to 1024. Larger sampling sizes enhance DPMCDR to aggregate more users, enabling accurate derivation of domain preference distribution. We lastly evaluate Lagrangian multiplier \(\beta\) ranging from 0.5 to 3 with a step length of 0.5. The choice of \(\beta\) depends on the specific CDR task. While \(\beta\)=2.5 gives the best results for Cloth-Sport CDR prediction, this value hampers the performance for the Cellphone-Electronic task. Although best values differ across CDR tasks, both domains follow similar trends. ## VI Conclusion and Future Works We have proposed a distributional cross-domain invariant preference matching approach, DPMCDR, based on a shared latent space that aligns cross-domain invariant preferences for Cross-Domain Recommendation. We presumed deterministic user representations as observations from the continuous preference prior distribution and approximated its posterior with random groups of users, drawing latent user-wise correlations and identifying commonality within latent representations. The latent representations are further grounded in a shared latent space to match the predictive distributions of cross-domain invariant preferences described by two domains. Our optimization further improved the cross-domain generalization of invariant preferences under the variational information bottleneck principle. Extensive experiments demonstrated that DPMCDR consistently outperforms state-of-the-art with a range of metrics, regardless of the existence of overlap users. The advantages of DPMCDR are particularly highlighted in NOCDR where no overlap information can be used. This case still allows DPMCDR to capture user preference commonality and address the limitation of existing studies. Future plans include extending distributional matching to the multi-domain settings, where varied divergences among domains must be addressed to avoid negative transfer. 
Adapting such domain-wise similarities into modeling might be an interesting workaround.

Fig. 4: Ablation Study in the Game(left)-Video(right) scenario.

Fig. 5: Parameter Sensitivity
2308.06705
Analysis of the strong vertices of $\Sigma_{c}\Delta D^{*}$ and $\Sigma_{b}\Delta B^{*}$ in QCD sum rules
In this work, we analyze the strong vertices $\Sigma_{c}\Delta D^{*}$ and $\Sigma_{b}\Delta B^{*}$ using the three-point QCD sum rules under the tensor structures $i\epsilon^{\rho\tau\alpha\beta}p_{\alpha}p'_{\beta}$, $p^{\rho}p'^{\tau}$ and $p^{\rho}p^{\tau}$. We firstly calculate the momentum dependent strong coupling constants $g(Q^{2})$ by considering contributions of the perturbative part and the condensate terms $\langle\overline{q}q\rangle$, $\langle g_{s}^{2}GG \rangle$, $\langle\overline{q}g_{s}\sigma Gq\rangle$ and $\langle\overline{q}q\rangle^{2}$. By fitting these coupling constants into analytical functions and extrapolating them into time-like regions, we then obtain the on-shell values of strong coupling constants for these vertices. The results are $g_{1\Sigma_{c}\Delta D^{*}}=5.13^{+0.39}_{-0.49}$ GeV$^{-1}$, $g_{2\Sigma_{c}\Delta D^{*}}=-3.03^{+0.27}_{-0.35}$ GeV$^{-2}$, $g_{3\Sigma_{c}\Delta D^{*}}=17.64^{+1.51}_{-1.95}$ GeV$^{-2}$, $g_{1\Sigma_{b}\Delta B^{*}}=20.97^{+2.15}_{-2.39}$ GeV$^{-1}$, $g_{2\Sigma_{b}\Delta B^{*}}=-11.42^{+1.17}_{-1.28}$ GeV$^{-2}$ and $g_{3\Sigma_{b}\Delta B^{*}}=24.87^{+2.57}_{-2.82}$ GeV$^{-2}$. These strong coupling constants are important parameters which can help us to understand the strong decay behaviors of hadrons.
Jie Lu, Guo-Liang Yu, Zhi-Gang Wang, Bin Wu
2023-08-13T07:43:51Z
http://arxiv.org/abs/2308.06705v4
Analysis of the strong vertices of \(\Sigma_{c}\Delta D^{*}\) and \(\Sigma_{b}\Delta B^{*}\) in QCD sum rules ###### Abstract In this work, we analyze the strong vertices \(\Sigma_{c}\Delta D^{*}\) and \(\Sigma_{b}\Delta B^{*}\) using the three-point QCD sum rules under the tensor structures \(ie^{m_{0}}p_{a}p_{b}^{\prime}p_{c}^{\prime}p^{\prime\prime}\) and \(p^{\prime}p^{\prime}\). We firstly calculate the momentum dependent strong coupling constants \(g(Q^{2})\) by considering contributions of the perturbative part and the condensate terms \(\langle\overline{q}q\rangle\), \(\langle g_{z}^{2}GG\rangle\), \(\langle\overline{q}g_{z}\sigma Gq\rangle\) and \(\langle\overline{q}q\rangle^{2}\). By fitting these coupling constants into analytical functions and extrapolating them into time-like regions, we then obtain the on-shell values of strong coupling constants for these vertices. The results are \(g_{12\Delta,\Delta D^{*}}=5.13^{+0.39}_{-0.49}\) GeV\({}^{-1}\), \(g_{22\Delta,\Delta D^{*}}=-3.03^{+0.27}_{-0.33}\) GeV\({}^{-2}\), \(g_{12\Delta,\Delta D^{*}}=20.97^{+2.15}_{-2.39}\) GeV\({}^{-1}\), \(g_{22\Delta,\Delta D^{*}}=-11.42^{+1.25}_{-1.25}\) GeV\({}^{-2}\) and \(g_{32\Delta,\Delta D^{*}}=24.87^{+2.57}_{-2.82}\) GeV\({}^{-2}\). These strong coupling constants are important parameters which can help us to understand the strong decay behaviors of hadrons. ## I Introduction The physics of charmed hadrons became an interesting subjects since the observations of \(J/\psi\) meson [1; 2]and charmed baryons (\(\Lambda_{c},\Sigma_{c}\)) [3]. Up to now, lots of charmed baryons have been discovered by different experimental collaborations[4]. Moreover, many bottom baryons such as \(\Lambda_{b}\), \(\Xi_{b}\), \(\Sigma_{b}\), \(\Sigma_{b}\) and \(\Omega_{b}\) have also been confirmed in experiments by CFD and LHCb collaborations[5; 6; 7; 8; 9; 10]. Although scientists have devoted much of their energy to this field, but the details of some charmed and bottom baryons are still less known. Thus, many experimental plans for the research of charmed and bottom baryons have been proposed by PANDA[11], J-PARC[12] and many other facilities. Under this circumstance, theoretical research on production of the baryons is very interesting and important. The strong coupling constants of baryons is an important input parameter which can help us to understand their production and decay processes[13]. This is the first motivation for us to carry out the present work. Since the observation of X(3872) by Belle collaboration in 2003[14], exotic hadrons which are beyond the usual quark-model emerged like bamboo shoots after a spring rain [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. Some exotic states were interpreted as hadronic molecular states because their masses are close to the known two-hadrons thresholds[29]. However, the study of mass spectra is insufficient to understand the inner structure of these exotic states. We need to further study their strong decay behaviours, where the strong coupling constants are particularly important. For examples, in Ref[30], the authors predicted two pentaquark molecular states \(D^{*}\Sigma_{c}\) and \(D^{*}\Sigma_{c}^{*}\) with the QCD sum rules. These two states were named as \(P_{c}(4470)\) and \(P_{c}(4620)\) which have the isospin \(I=\frac{3}{2}\). If we studied their two-body strong decay \(P_{c}(4470/4620)\to J/\psi\Delta\), this process can be described by the triangle diagram in Fig. 1. 
From this figure, we can see that analysis of the strong vertices \(P_{c}\Sigma_{c}D^{*}\), \(P_{c}\Sigma_{c}^{*}D^{*}\), \(DD^{*}J/\psi\), \(D^{*}D^{*}J/\psi\), \(\Sigma_{c}\Delta D\), \(\Sigma_{c}^{*}\Delta D\), \(\Sigma_{c}\Delta D^{*}\) and \(\Sigma_{c}^{*}\Delta D^{*}\) is essential for us to study the strong decay behaviors of these two exotic states. This constitutes the second motivation of our present work. The strong interaction between hadrons is non-perturbative in the low-energy region, which cannot be studied from QCD first principles. However, as an important parameter, the strong coupling constant is urgently needed in studying the production and strong decay processes of hadrons. Thus, some phenomenological methods are employed to analyze the strong vertices[31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. The QCD sum rules (QCDSR)[46] and the light-cone sum rules (LCSR) are powerful phenomenological methods to study the strong interaction. In recent years, some coupling constants have been analyzed with LCSR by considering the higher-order QCD corrections and subleading power contributions[47; 48]. These studies show that considering the higher-order QCD corrections and subleading power contributions is very important for the accuracy of the results. In our previous work, we have analyzed the strong vertices \(\Sigma_{c}ND\), \(\Sigma_{b}NB\), \(\Sigma_{c}^{*}ND\), \(\Sigma_{b}^{*}NB\), \(\Sigma_{c}ND^{*}\) and \(\Sigma_{b}NB^{*}\) in the framework of QCDSR based on the three-point correlation function[40; 38; 41], where the higher-order perturbative corrections were neglected. As a continuation of these works, we analyze the strong vertices \(\Sigma_{c}\Delta D^{*}\) and \(\Sigma_{b}\Delta B^{*}\) using the three-point QCDSR under the tensor structures \(i\epsilon^{\rho\tau\alpha\beta}p_{\alpha}p'_{\beta}\), \(p^{\rho}p'^{\tau}\) and \(p^{\rho}p^{\tau}\). Our previous work showed that the subleading power contributions are really important for the final results. Considering higher-order corrections should make the final results more accurate; however, it would also make the calculations of the three-point QCDSR very complicated. Thus, we neglect contributions from these corrections in the present work. The layout of this paper is as follows. After the introduc
2308.09436
Transformer-based Detection of Microorganisms on High-Resolution Petri Dish Images
Many medical or pharmaceutical processes have strict guidelines regarding continuous hygiene monitoring. This often involves the labor-intensive task of manually counting microorganisms in Petri dishes by trained personnel. Automation attempts often struggle due to major challenges: significant scaling differences, low separation, low contrast, etc. To address these challenges, we introduce AttnPAFPN, a high-resolution detection pipeline that leverages a novel transformer variation, the efficient-global self-attention mechanism. Our streamlined approach can be easily integrated in almost any multi-scale object detection pipeline. In a comprehensive evaluation on the publicly available AGAR dataset, we demonstrate the superior accuracy of our network over the current state-of-the-art. In order to demonstrate the task-independent performance of our approach, we perform further experiments on COCO and LIVECell datasets.
Nikolas Ebert, Didier Stricker, Oliver Wasenmüller
2023-08-18T10:07:38Z
http://arxiv.org/abs/2308.09436v2
# Transformer-based Detection of Microorganisms ###### Abstract Many medical or pharmaceutical processes have strict guidelines regarding continuous hygiene monitoring. This often involves the labor-intensive task of manually counting microorganisms in Petri dishes by trained personnel. Automation attempts often struggle due to major challenges: significant scaling differences, low separation, low contrast, etc. To address these challenges, we introduce At-tnPAFPN, a high-resolution detection pipeline that leverages a novel transformer variation, the efficient-global self-attention mechanism. Our streamlined approach can be easily integrated in almost any multi-scale object detection pipeline. In a comprehensive evaluation on the publicly available AGAR dataset, we demonstrate the superior accuracy of our network over the current state-of-the-art. In order to demonstrate the task-independent performance of our approach, we perform further experiments on COCO and LIVECell datasets. ## 1 Introduction Regulatory bodies such as the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) mandate strict guidelines for continuous hygiene monitoring in the pharmaceutical, cosmetics and food industries. As a result, a large number of Petri dishes must be examined for microbial colonies on a daily basis by experienced biologists, which is time-consuming and error-prone. Automating this process presents several challenges. One is the high resolution required to reliably detect tiny colonies. Another is that colonies vary widely in size and shape and can overlap, making automated detection difficult (see Figure 1). There are several open-source approaches [17, 22, 39] that use classical computer vision techniques such as image filters and intensity variations to differentiate colonies from the agar-medium. However, these processes are based on hand-crafted features and laborious to use. Colony detection can be automated through the use of neural networks, such as Faster-RCNN [29], which have proven to be more accurate and robust than traditional Figure 1: The biggest challenges in hygiene monitoring are the detection of particularly small organisms, the significant variation in colony size, low contrast between foreground and background, as well as a high number of colonies with large overlap. The images show typical inference results of our method on the test data. computer vision methods. Recently, transformer networks [40] were introduced, outperforming their convolutional-counterparts in most tasks [28, 46]. This success is partly due to the self-attention mechanism, which enables transformers to model information spatial dependencies within large receptive fields. A drawback of standard self-attention is its quadratic complexity, resulting in large memory requirements and computational costs, especially when applied to high-resolution images for hygiene monitoring. In this paper, we present an innovative approach to colony detection in the field of computer vision. Our method, called AttnPAFPN, leverages a novel efficient-global self-attention mechanism to improve the performance of a path aggregation feature pyramid network (PAFPN) [26] for object detection. In combination with further optimizations, our efficient-global self-attention achieves superior accuracy and performance, especially when processing high-resolution images. Furthermore, we introduce new high-resolution prediction-heads to improve the detection of tiny objects. 
A hallmark of our AttnPAFPN is its flexibility, as it can be integrated into almost any top-down object detection method. To demonstrate this flexibility, we integrate our method into two general object detectors [15, 33, 19]. Augmented with our AttnPAFPN, these networks show superior performance in terms of accuracy over the current SoTA on the AGAR dataset [29] for colony detection. In addition, we include an extensive ablation study of our method with varying image resolutions. To demonstrate the task-independent performance of our approach, we also conduct experiments on COCO [25] for general object detection and on LIVECell [13] for the segmentation of cells in microscope images. ## 2 Related Works ### Detecting colonies Automated colony counting has been of interest since the late 1950s [1, 30]. Nowadays, there are several tools available, such as OpenCFU [17] and AutoCellSeg [39], which assist in the detection of microorganisms, based on conventional computer vision methods. The main drawback of these tools is their limited automation, requiring handcrafted features for colony detection. Setting these features requires expert knowledge, similar to manual counting. In addition to these conventional methods, several deep learning-based approaches [14, 16, 29, 18, 36] have been proposed for detecting colonies of microorganisms on agar plates. Ferrari et al. [16] utilize convolutional neural networks (CNNs) for bacterial classification, resulting in significant improvements compared to handcrafted feature-based support vector machine (SVM) systems. Andreini et al. [2] use k-means clustering to perform foreground-background segmentation, rather than classification or counting colonies. Multiple methods [4, 14, 32] approach colony detection by using modified U-Net [35] structures. Mask-RCNN [19] has also been adapted multiple times [27, 31] for detecting and segmenting microorganisms in agar dishes. Majchrowska et al. [29] used an image-patch approach, dividing high-resolution images into smaller overlapping areas to perform individual object detection [6, 33] and then merging the resulting bounding boxes. However, a common drawback of these methods is that they were developed either for low-resolution images or for image slices. ### Object detection In recent years, deep learning approaches have made significant progress in the field of object detection [24, 33, 10], outperforming classical methods by a large margin, highlighting the potential of the current SoTA to improve accuracy and speed of colony detection. Two stage detectors such as Faster-RCNN [33] and its variants [6, 19] first define regions of interest and then perform object detection. RetinaNet [24] introduced Focal loss to address the class imbalance problem in one-stage detectors. FCOS [38] and VariFocalNet [44] locate objects of interest by using anchor points and point-to-boundary distances. TOOD [15] presented a task-aligned learning strategy for explicitly aligning the two tasks of classification and localization in a learning-based manner. All these methods have in common that they focus on the prediction-head. As a neck, a Feature Pyramid Network (FPN) [23] is usually used to improve accuracy by creating multi-scale features. The Path Aggregation Feature Pyramid Network (PAFPN) [26] extends the FPN approach by adding a bottom-up path to enhance FPN features with accurate localization signals from low levels. YOLOv4 [5] introduces further bottlenecks into the PAFPN for more diverse representations. 
ResFPN [34] enhances FPN by integrating multiple residual skip connections to leverage information from higher scales for stronger and more localized features. The transformer-based DETR [7, 46] works entirely without FPN and achieves still SoTA-results. The methods mentioned are designed for the COCO dataset [25], which is known for its diversity and mainly consists of medium-sized images and objects. Therefore, the benchmark does not adequately represent the challenge of high-resolution hygiene monitoring, with its numerous tiny colony growths and homogeneous backgrounds. Accordingly, the aforementioned methods are only conditionally suitable for solving the task of colony detection. To address these drawbacks, we investigate cutting-edge object detection techniques and incorporate a specialized Attention-based Path Aggregation Feature Pyramid Network (AttnPAFPN) for high-resolution feature extraction in order to detect colonies on agar dishes (see Figure 2). The goal of our work is to provide a solution specific to the chal lenges of colony detection, improving both the accuracy and efficiency compared to SoTA methods. ## 3 Method This section outlines the design choices of our proposed AttnPAFPN to specifically address the limitations of current SoTA methods in processing high-resolution images. The proposed detection network consists of three key components: a backbone for extracting image features from the input, our neck (AttnPAFPN) for generating a hierarchical feature representation at different scales, followed by a detection head for the final predictions (e.g. TOOD [15]). ### AttnPAFPN Our primary contribution is the novel AttnPAFPN network neck, tailored to high-resolution images and small objects. AttnPAFPN utilizes our efficient-global self-attention mechanism and a new high-resolution output, allowing the network to focus on essential features, even for extremely small objects. Our streamlined method is further optimized using concepts from CSP-Net [41], resulting in improved performance, lower parameter counter, and reduced complexity. The end-to-end trainable encoder-decoder is shown in Figure 2. At the initial stage of our AttnPAFPN, we use the lowest resolution backbone features (e.g. with a total stride of \(32\)). These features are passed through a CSP-Bottleneck block to create high-level features, which are then used in both the top-down and bottom-up pathways. In the top-down pathway, the features are first upsampled by a factor of \(2\), then concatenated with the backbone features of corresponding size, before being processed again by a subsequent CSP-Bottleneck. This process is repeated until the last stage (stride of \(4\)) is reached, which enables AttnPAFPN to recognize tiny objects due to its high-resolution features. To reduce computational complexity and the number of parameters, we compress the depth of the backbone features by applying a \(1\times 1\) convolutional layer before passing them to the feature pyramid. The bottom-up path of our AttnPAFPN also utilizes CSP-Bottleneck blocks, but instead of upsampling, a strided convolutional layer is used to process the features. This path also includes a final strided \(3\times 3\) convolutional layer to generate an output with a factor of \(\frac{1}{64}\) of the original image size and enables the network to recognize large objects. Our final AttnPAFPN predicts objects at five different scales, with total strides of \(\{4,8,16,32,64\}\). 
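As a rough illustration of the neck just described, the following PyTorch-style sketch wires together the 1x1 feature compression, the top-down path (upsample, concatenate, CSP block) and the bottom-up path (strided convolution, concatenate, CSP block), ending with the extra stride-64 output. The CSP block is reduced to a single convolution here, and all names and channel counts are placeholders under stated assumptions, not the actual AttnPAFPN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSPBlock(nn.Module):
    """Stand-in for the CSP-Bottleneck of Fig. 3(a); a single 3x3 convolution
    replaces the split/bottleneck/merge structure to keep the sketch short."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

class PAFPNSketch(nn.Module):
    """Minimal sketch of the neck: 1x1 compression of backbone features,
    a top-down path (strides 32 -> 4), a bottom-up path (strides 4 -> 32)
    and an extra strided convolution for the stride-64 output."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), c=256):
        super().__init__()
        n = len(in_channels) - 1
        self.compress = nn.ModuleList([nn.Conv2d(ci, c, 1) for ci in in_channels])
        self.td = nn.ModuleList([CSPBlock(2 * c, c) for _ in range(n)])
        self.bu = nn.ModuleList([CSPBlock(2 * c, c) for _ in range(n)])
        self.down = nn.ModuleList([nn.Conv2d(c, c, 3, stride=2, padding=1)
                                   for _ in range(n)])
        self.extra = nn.Conv2d(c, c, 3, stride=2, padding=1)   # stride-64 output

    def forward(self, feats):                     # backbone features, strides 4..32
        lat = [m(f) for m, f in zip(self.compress, feats)]
        td = [lat[-1]]                            # start from the coarsest level
        for i in range(len(lat) - 2, -1, -1):     # top-down: upsample + concat + CSP
            up = F.interpolate(td[0], scale_factor=2, mode="nearest")
            td.insert(0, self.td[i](torch.cat([up, lat[i]], dim=1)))
        outs = [td[0]]
        for i in range(len(td) - 1):              # bottom-up: downsample + concat + CSP
            d = self.down[i](outs[-1])
            outs.append(self.bu[i](torch.cat([d, td[i + 1]], dim=1)))
        outs.append(self.extra(outs[-1]))         # five scales: strides 4, 8, 16, 32, 64
        return outs
```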
### Self-Attention augmented CSP-Bottlenecks One of the key contributions of our work is the integration of transformers [9, 40] into CSP-Bottlenecks [41] (Figure 2(a),2(c)), similar to the approach taken by BoTNet [37] integrating transformers into ResNet for image classification [20]. The structure of CSP-Bottlenecks can be seen in Figure 2(a). First, the incoming featuremaps are divided into two parts in depth. The first part is passed directly to the output after a single pointwise-convolution operation. The other half is processed \(N\) times by a residual bottleneck (see Figure 2(b)) and then concatenated with the first half. Finally, a pointwise convolution is performed to enable Figure 2: **Architecture overview. Our object detection network consists of a backbone network, a neck and a prediction-head. We use our AttnPAFPN as the neck, which consists of self-attention extended CSP-Bottlenecks (SA-CSP). Almost any method can be used for the final prediction by the head (e.g. TOOD [15]).** communication between the channels. To integrate self-attention mechanisms into these structure, we replace the convolutional bottleneck with our self-attention augmented version (see Figure 2(c). However, the use of standard self-attention is limited by its quadratic complexity, especially when applied to high-resolution images, such as those of hygiene monitoring. To address this challenge, we compare two resolution-optimized transformers: our novel efficient-global self-attention and local-window self-attention similar to Swin Transformer [28]. In general, a transformer-layer [40] can be described as \[\begin{split} y^{*}&=\text{Self-Attention}(\text{ LN}(x))+x,\\ y&=\text{FFN}(\text{LN}(y^{*}))+y^{*},\end{split} \tag{1}\] with \(x\) as its input and \(y\) as output features. LN refers to layer normalization [3], and FFN to a linear feed-forward layer. Self-attention [28] can be formulated as \[\text{Self-Attention}(q,k,v)=\text{Softmax}(\tfrac{qk^{T}}{\sqrt{d}}+b)v, \tag{2}\] where \(q,k,v\) are query, key and value matrices generated from input-features, \(d\) is a scaling factor and \(b\) is a trainable relative position bias term. Inspired by SegFormer [43], we extend our feed-forward-network (FFN) by CNN-layers, adding an inductive bias for finer localization using additional positional information: \[\begin{split} y^{*}&=\text{ReLU}(\text{LN}(\text{ PWConv}(x))),\\ y&=\text{PWConv}(\text{GeLU}(\text{LN}(\text{ DWConv}_{3\times 3}(y^{*}))))+x,\end{split} \tag{3}\] where ReLU [21] corresponds to Gaussian Error Linear Unit activation, PWConv to a point-wise convolution and DWConv\({}_{3\times 3}\) to a depth-wise \(3\times 3\) convolution. Local-window self-attention splits input features into non-overlapping windows with limited receptive fields, before applying multihead self-attention. As a result, the computational effort of self-attention is linear to the window-size. One downside is that information cannot pass between the windows within a layer. Several successive layers with shifting windows is necessary to create a global receptive field. In our experiments we follow the window partitioning strategy of Swin Transformer [28]. In contrast, our efficient-global self-attention reduces the spatial resolution of the input to a fixed global size by performing adaptive max-pooling on the input. The size of the global window is freely selectable, but we have set the window size in all our networks to \(\frac{1}{64}\) of the original resolution. 
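The snippet below sketches one possible realization of the efficient-global self-attention and of the convolution-augmented FFN of Eq. (3): keys and values are computed from a feature map pooled to a fixed global window with adaptive max-pooling, while queries keep the full resolution, so the attention cost no longer grows quadratically with the image size. Layer normalization and the relative position bias \(b\) of Eq. (2) are omitted for brevity, and all hyperparameters and names are illustrative assumptions rather than the exact AttnPAFPN settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficientGlobalSelfAttention(nn.Module):
    """Sketch of efficient-global self-attention: keys/values come from a
    feature map pooled to a fixed global window (adaptive max-pooling),
    queries keep the full resolution. Illustrative, not the official code."""
    def __init__(self, dim, num_heads=8, global_size=16):
        super().__init__()
        self.pool = nn.AdaptiveMaxPool2d(global_size)        # fixed global window
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x.flatten(2).transpose(1, 2))             # (B, H*W, C)
        g = self.pool(x).flatten(2).transpose(1, 2)          # (B, g*g, C)
        k, v = self.kv(g).chunk(2, dim=-1)
        y, _ = self.attn(q, k, v)                            # global receptive field
        return y.transpose(1, 2).reshape(b, c, h, w)

class ConvFFN(nn.Module):
    """Sketch of the convolution-augmented FFN of Eq. (3): point-wise conv,
    depth-wise 3x3 conv for positional information, point-wise conv
    (layer norms omitted for brevity)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.pw1 = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.pw2 = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):
        y = F.relu(self.pw1(x))
        y = self.pw2(F.gelu(self.dw(y)))
        return x + y                                         # residual as in Eq. (3)

# usage: the cost of the attention is fixed by the 16x16 global window
attn = EfficientGlobalSelfAttention(dim=256, num_heads=8, global_size=16)
out = attn(torch.randn(1, 256, 128, 128))
```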
In case of \(1024\times 1024\) resolution, the fixed global window would be \(16\times 16\). Regardless of the input resolution, it is also possible to set the global window to a fixed size. This results in a network complexity that is completely independent of the image resolution. With our efficient-global self-attention we create a single window with a global receptive field to which self-attention is subsequently applied. These and many more transformer variants can be easily inserted into the bottleneck structure as shown in Figure 2(c). ## 4 Evaluation Our evaluation focuses on demonstrating the benefits of our AttnPAFPN in high-resolution object detection. For this purpose we use the public AGAR dataset [29], containing high-resolution images of five different types of bacteria on agar plates. The data is divided into the higher-resolution (HR) and lower-resolution subsets (LR). The HR subset contains approximately 5k training images and 2k test images, with a resolution of around \(4,000^{2}\) pixels. The LR subset has around 3.5k training images and 1k test images with a resolution of \(2,048^{2}\) pixels. In addition to these two subsets, a third mixed-resolution subset is created by combining both subsets. We perform an ablation study to determine the effect individual components have on our methods accuracy. Results are shown in Tables 1 and 2. Furthermore, we implement our AttnPAFPN in current SoTA methods (e.g. TOOD Figure 3: **Illustrations of the used network modules.** [15]) and compare it with five different object detection models, listing the results in Table 3. All methods are implemented in the MMDetection-Framework [8] and we use the mAP metric to evaluate their performance. mAP provides a comprehensive assessment of accuracy and recall, averaging the maximum precision score for each recall value of all classes. In hygiene monitoring, detecting all colonies is a priority over precise localization. Hence, we use the Recall at an IoU threshold of 0.5 (R\({}^{50}\)) as an additional metric. ### Ablation Study In our first experiment, we assess the impact of our method by comparing AttnPAFPN with a baseline model (TOOD [15] + FPN [23]) as shown in Table 1. All networks are trained for 20 epochs with the SGD optimizer, a batch-size of 8, and use a pre-trained ResNet50 [20] as their backbone. The learning rate starts at \(5\cdot 10^{-3}\) and decreases by a factor of 10 after 8 and 16 epochs. Replacing the standard FPN in TOOD with the convolutional CSP-PAFPN leads to an improvement in mAP (\(+5.6/+0.8\)) and Recall (\(+7.8/\pm 0\)), but also increases the number of parameters by more than 100%. By introducing our efficient-global self attention (SA) into the CSP-Bottlenecks, we were able to reduce the parameters by over 15% and further boost mAP (\(+3.3/+1.1\)) and R\({}^{50}\) (\(+5.2/+1.8\)) compared to the previous step. In these initial experiments, all network necks use only the backbone scales \(\{8,16,32\}\) for predictions. To ensure better recognition of particularly large and tiny colonies, we add two more scales, so that we ultimately perform detection across five resolutions: \(\{4,8,16,32,64\}\). To address the heavy-weight nature of our network, we implemented \(1\times 1\) feature-compression layers in our AttnPAFPN, reducing the depth \(C\) of backbone-features to \(C^{*}=256\). Through this feature reduction our method achieves a parameter count comparable to the baseline FPN, while still achieving a stronger performance in terms of mAP and R\({}^{50}\). 
For a final increase in performance, we utilize multi-scale training. Overall AttnPAFPN increases mAP by \(+10.5/+2.3\) and Recall by \(+13.8/+2.1\) in comparison to the baseline. In Section 3.1 of our study, we present two variants of efficient transformer layers that are specifically designed for high-resolution images. Table 2 compares the performance of local-window SA (v1) and efficient-global SA (v2). The results indicate that efficient-global SA, which provides a coarse-grained overview of the entire image, leads to a significant improvement in accuracy. The differences in mAP are only marginal on the HR subset; on the LR subset, both networks achieve almost identical accuracy. The decisive point here is the significantly lower complexity and the lower number of weights of the global self-attention. ### Quantitative Evaluation In our final experiment, as listed in Table 3, we compare the performance of our proposed method, AttnPAFPN, with SoTA object detection methods [15, 24, 33, 44, 46]. The training process of all the networks is equal to the description in Section 4.1. The first few rows which are titled with "Patches: \(512\times 512\)" present the results of Majchrowska et al. [29]. They divide the images into patches of size \(512\times 512\) and then detect the colonies in each of these patches using Faster-RCNN [33] and Cascade-RCNN [6] with ResNet50 [20] as the backbone, similar to our setup. The following lines contain the results of the SoTA and our method using the full image under different resolutions. Upon comparison with Faster-RCNN, our AttnPAFPN shows lower performance for lower resolution, especially \(1024\times 1024\) for the HR-Subset. However, as the resolution increases, AttnPAFPN outperforms all baselines by a large margin. Furthermore, our AttnPAFPN \begin{table} \begin{tabular}{l|c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Params**} & \multicolumn{4}{c|}{**HR-Subset**} & \multicolumn{4}{c}{**LR-Subset**} \\ & & **mAP** & **AP\({}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\ achieved best results for TOOD [15] at a final resolution of \(2048\times 2048\), but it also shows excellent results even at moderate resolutions and therefore does not necessarily require very high resolutions with high computational overhead. ### Further Experiments Extending the evaluations in Section 4.1 and 4.2, we perform several more experiments on the AGAR dataset [29]. We investigating ability of generalization only using a small number of training data and examined various backbones. Furthermore, we evaluated the performace of our network on COCO [25] for general object detection and on LIVE-Cell [13] for detection of cells on low-resolution images. #### 4.3.1 Limited Data Analysis In our first additional experiment, we investigate how a reduction of the amount of data affects the training of our networks. For this reason, we created three evenly distributed subsets from the higher-resolution (HR) set, each containing 10 % (524 images), 5 % (262 images), and 1 % (53 images) of the training data. For evaluation, we use the complete validation set of the HR subset as described in Section 4. In contrast to the training in Section 4.2, we increase the number of epochs to 100 and reduce the learning rate after 50 and 80 epochs by a factor of 10. The results listed in Table 4 show a drop between 3 % to 5 % of the mAP with respect to networks, trained on all data when using 10 % of the training data. 
The drop from TOOD [15] extended by our AttnPAFPN shows a larger loss in mAP due to the added complexity of the data-hungry transformer layers, but it still shows better accuracy than the pure TOOD trained on all data. When using 5 % of the training data, a similar picture emerges. When training with only 1 % of the image data, a very strong drop in accuracy (approximately 20 % to 25 %) of all networks can be seen. However, our AttnPAFPN still shows an above-average performance here. \begin{table} \begin{tabular}{l|c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Params**} & \multicolumn{4}{c}{**Metrics**} \\ & & **mAP** & **AP\({}^{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{\ #### 4.3.2 Backbone Analysis During all previous experiments we have used a pretrained ResNet50 [20] as the network backbone, since it is still considered as one of the most important baselines in computer vision. Further improvements in accuaracy can be achieved by using modern CNNs or transformer backbones. For this reason we want to compare ResNet50 with a stronger deformable convolution backbone (ResNet101-dcnv2) [45] and three transformer-based backbones. For the transformer backbones we use Swin-T [28], PVTV2-b2 [42], and the high-resolution optimized PLG-ViT-T [12, 11]. All transformer backbones are similar in size to ResNet50 and training takes place exclusively on the higher-resolution subset of AGAR [29] at a resolution of \(1536\times 1536\). We trained ResNet101-dcnv2 with the same hyperparameters as ResNet50. For the transformer backbones, we adapted the training recipes proposed by the authors from COCO [25] to AGAR. The results in Table 5 confirm the trend of recent years, with transformers outperforming their CNN counterparts. Even the larger ResNet101-dcnv2 backbone cannot keep up with the transformers. These manage to outperform ResNet50 and ResNet101-dcnv2 by about \(+2\) and \(+1\) mAP, respectively. It is also shown that the differences between transformer networks in terms of accuracy are small. However, this experiment shows the major drawback of the standard SA used by PVTV2. Even if the number of parameters is the same, the computational cost is significantly higher compared to Swin and PLG-ViT. PVTV2 requires about 200 % more GPU memory than the other two networks during training. The computational effort is also significantly higher during the inference [12]. For this reason, PLG-ViT will be used as the backbone of choice in the final experiment to achieve the best possible trade-off between accuracy and performance. #### 4.3.3 Beyond Colony Detection In addition to detecting bacteria colonies in high-resolution images, we also want to evaluate our method on medium-resolution images of other areas of application. For this purpose we use the COCO dataset [25], which is a widespread baseline for object detection.For training the networks on COCO we use the standard settings [8] proposed by the authors and train for 12 epochs. As network heads we use TOOD [15] and Mask-RCNN [19] as an additional method for instance segmentation. The results of the evaluation on COCO can be seen in Table 6. In contrast to AGAR, only a slightly improvement of the accuracy stemming from AttnPAFPN can be seen. As already noted in Section 2, this can be explained by the different characteristics of the COCO dataset, such as the relatively small number of tiny objects, in contrast to the AGAR dataset. 
Using Mask-RCNN, on the other hand, the impact of our neck is more significant. We achieve \(+1.4/+1.2\) mAP for detection and segmentation, respectively. The extension of the methods by a stronger transformer backbone increases the accuracy considerably. We also performed experiments on the LIVECell dataset [13], which is used to detect and segment cells in microscopy images. For this we also use TOOD and Mask-RCNN, which were previously pre-trained on COCO. Additionally, we made some adjustments regarding the anchorboxes of Mask-RCNN and TOOD as suggested by the authors of the dataset [13]. As a result, the networks are better adapted to the characteristics of the dataset. The results on the LIVECell data set are listed in Table 7. Here we can see that especially Mask-RCNN performs much better on the dataset than TOOD, which is a pure detection network. But especially TOOD benefits strongly from the extension by AttnPAFPN, which outperforms the baseline by \(+3.4\) mAP. Mask-RCNN achieves with AttnPAFPN an increase in accuracy of \(+1.2/+0.7\) \begin{table} \begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Method**}} & \multirow{2}{*}{**Backbone**} & \multirow{2}{*}{**Params**} & \multicolumn{2}{c}{**Metrics**} \\ & & & **mAP\({}^{\textbf{th}}\)** & **mAP\({}^{\textbf{m}}\)g** \\ \hline TOOD [15] & ResNet50 [20] & 32.0 M & 42.4 & - \\ TOOD [15] + ours & ResNet50 [20] & 32.8 M & 42.6 & - \\ TOOD [15] + ours & PLG-ViT [12] & 34.8 M & 48.0 & - \\ \hline Mask-RCNN [19] & ResNet50 [20] & 43.7 M & 38.2 & 34.7 \\ Mask-RCNN [19] + ours & PLG-ViT [12] & 45.4 M & 39.6 & 35.9 \\ Mask-RCNN [19] + ours & PLG-ViT [12] & 48.4 M & 45.4 & 41.4 \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison of detection and segmentation accuracy on the COCO [25] validation set. Different methods [19, 15] are compared with our approach. We use TOOD [15] and Mask-RCNN [19] as the head and ResNet50 [20] and PLG-ViT [12] as the backbone.** \begin{table} \begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Method**}} & \multirow{2}{*}{**Params**} & \multicolumn{2}{c}{**Metrics**} \\ & & **mAP\({}^{\textbf{th}}\)** & **mAP\({}^{\textbf{m}}\)g** \\ \hline ResNet50 [20] & **32.8 M** & 68.2 & 96.3 & 81.1 & 96.2 \\ ResNet101-dcnv2 [45] & 54.3 M & 69.2 & 96.9 & 82.9 & 97.5 \\ Swin Tiny [28] & 36.4 M & 69.9 & 96.8 & 84.1 & 97.3 \\ PVTV2-b2 [42] & 33.7 M & 70.2 & 96.8 & 83.9 & 97.4 \\ PLG-ViT Tiny [12] & 34.8 M & **70.4** & **97.0** & **84.2** & **97.6** \\ \hline \hline \end{tabular} \end{table} Table 5: **Comparison of detection accuracy of our method on the AGAR [29] val set. Different backbones [12, 20, 28, 42, 45] are compared. For this experiment, we use TOOD [15] as head and evaluate at a resolution of \(1536\times 1536\).** mAP for detection and segmentation, respectively. Figure 5 shows some visual results of our method with Mask-RCNN as head and ResNet50 as backbone. ### Visual Evaluation In addition to a quantitative evaluation we also present a qualitative evaluation on a greatly enlarged section of the image in Figure 4. Here it can be seen that the conventional method has difficulties with particularly small and overlapping colonies, in contrast to our method. In addition to the visual results on AGAR [29], typical results on the LIVE-Cell dataset [13] can be seen in Figure 5. ## 5 Conclusion In this paper, we presented AttnPAFPN, a high-performance feature pyramid for high-resolution object detection. 
Our AttnPAFPN uses our state-of-the-art efficient-global self-attention layers for better visual understanding. Moreover, the efficient-global self-attention can be easily interchanged with any other self-attention mechanism. Furthermore we add a additional scales to our PAFPN for predicting tiny and large objects on high- and low-resolution featuremaps, respectively. In order to be executable even on resource-constrained hardware, we have considered efficiency and parameter count during the optimization of our method. We have performed a comprehensive evaluation on a large scale public dataset [44] for detecting bacterial colonies on agar dishes and proved the surpassing accuracy of our method compared to the current state-of-the-art. In addition, we have performed experiments on the standard object detection baseline COCO [25], as well as on LIVE-Cell [13] for biomedical image analysis. ## Acknowledgments This work was supported by funding from the the Federal Ministry of Education and Research Germany in the project M\({}^{2}\)Aind-DeepLearning (13FHS108IA). Additional funding was provided by the German Research Foundation under grant number INST874/9-1 and by the Albert and Anneliese Konanz Foundation. Figure 4: **Qualitative comparison** of Faster-RCNN [33] (a), TOOD [15] (b), Faster-RCNN + AttnPAFPN (c) and TOOD + AttnPAFPN (d). All networks are trained on the AGAR dataset [29] under equal conditions. A small colony is visible in the first row with low contrast and distracting texture in the background. The second row shows a cluster of colonies with low contrast. Figure 5: **Qualitative result** of Mask-RCNN [19] with our AttnPAFPN and ResNet50 [20] on LIVECell [13].
2305.01096
A Novel Model for Driver Lane Change Prediction in Cooperative Adaptive Cruise Control Systems
Accurate lane change prediction can reduce potential accidents and contribute to higher road safety. Adaptive cruise control (ACC), lane departure avoidance (LDA), and lane keeping assistance (LKA) are some conventional modules in advanced driver assistance systems (ADAS). Thanks to vehicle-to-vehicle communication (V2V), vehicles can share traffic information with surrounding vehicles, enabling cooperative adaptive cruise control (CACC). While ACC relies on the vehicle's sensors to obtain the position and velocity of the leading vehicle, CACC also has access to the acceleration of multiple vehicles through V2V communication. This paper compares the type of information (position, velocity, acceleration) and the number of surrounding vehicles for driver lane change prediction. We trained an LSTM (Long Short-Term Memory) on the HighD dataset to predict lane change intention. Results indicate a significant improvement in accuracy with an increase in the number of surrounding vehicles and the information received from them. Specifically, the proposed model can predict the ego vehicle lane change with 59.15% and 92.43% accuracy in ACC and CACC scenarios, respectively.
Armin Nejadhossein Qasemabadi, Saeed Mozaffari, Mahdi Rezaei, Majid Ahmadi, Shahpour Alirezaee
2023-05-01T21:40:23Z
http://arxiv.org/abs/2305.01096v1
# A Novel Model for Driver Lane Change Prediction in Cooperative Adaptive Cruise Control Systems ###### Abstract Accurate lane change prediction can reduce potential accidents and contribute to higher road safety. Adaptive cruise control (ACC), lane departure avoidance (LDA), and lane keeping assistance (LKA) are some conventional modules in advanced driver assistance systems (ADAS). Thanks to vehicle-to-vehicle communication (V2V), vehicles can share traffic information with surrounding vehicles, enabling cooperative adaptive cruise control (CACC). While ACC relies on the vehicle's sensors to obtain the position and velocity of the leading vehicle, CACC also has access to the acceleration of multiple vehicles through V2V communication. This paper compares the type of information (position, velocity, acceleration) and the number of surrounding vehicles for driver lane change prediction. We trained an LSTM (Long Short-Term Memory) on the HighD dataset to predict lane change intention. Results indicate a significant improvement in accuracy with an increase in the number of surrounding vehicles and the information received from them. Specifically, the proposed model can predict the ego vehicle lane change with 59.15% and 92.43% accuracy in ACC and CACC scenarios, respectively. ## I Introduction The number of vehicles on roads and highways has soared in recent years and we are witnessing more traffic congestion and vehicle accidents on the streets. To address these issues, car manufacturers have been developing advanced driver assistance systems (ADAS) [1]. Currently, ADAS has various modules for safe and convenient driving including collision avoidance system (CA), adaptive cruise control (ACC), and lane keeping assistance (LKA). The next generation of ADAS and automated vehicles will rely on advanced sensor technology and leverage artificial intelligence to predict the drivers' behavior and readiness and take appropriate measures in advance, to avoid accidents [2]. Lane changing is one of the most important behaviors of drivers as it is the main cause of vehicle collisions. Accurate lane change (LC) prediction will lead to improved vehicle safety and passengers' comfort. Lane change prediction is a subset of trajectory prediction in which spatial coordinates of vehicles are predicted in the future time. Unlike trajectory prediction, lane change prediction aims to predict if the driver drives away from the current lane and merges into adjacent lanes or keeps the current lane for driving. In this context, the vehicle equipped with ADAS and automated driving functions is called the ego vehicle and other vehicles around the ego vehicle are referred to as surrounding vehicles. Lane change prediction can be divided into two main groups: driver lane change prediction [3] and surrounding vehicles' lane change prediction [4]. In other words, driver lane change prediction aims to predict the ego vehicle change lane, while surrounding vehicles' lane change prediction tries to forecast when another vehicle tries to cut-in in front of the ego vehicle from adjacent lanes. There are several methods commonly used for lane change prediction in autonomous driving systems, including rule-based methods [5], machine learning-based methods [6], and sensor fusion-based methods [7]. Rule-based methods rely on speed difference and spacing between the vehicle and surrounding traffic to predict lane changes, while machine learning-based methods utilize trajectories of the surrounding vehicles to make predictions. 
Sensor fusion-based methods combine data from multiple sensors, such as cameras, lidar, and radar, or information from multiple vehicles obtained through V2V, to predict lane changes. Agent-based methods, game theory, and mixed logic programming have been used for rule-based lane change prediction [8]. Machine learning-based methods include the Bayes classifier, support vector machine, hidden Markov model, or artificial neural network and deep learning algorithms [9]. Vehicle motion parameters such as steering wheel angle, driver's parameters like eye movement and head rotation, and surrounding vehicles' information such as location, speed, and acceleration are combined in sensor fusion-based methods [10]. This paper focuses on the driver lane change prediction scenario and aims to explore the impact of the ego vehicle's status (location, speed, acceleration) and the number of surrounding vehicles (ACC and CACC systems) on the lane change prediction accuracy. We used multiple long short-term memory (multi-LSTM) deep models which were trained and evaluated on a real traffic data set (HighD) [11]. ## II ACC and CACC Systems According to the SAE level 3 (L3) autonomy, lane-changing algorithms are the basis of ACC systems in which the vehicle is capable of changing lanes under a human driver's supervision. ACC systems typically use one or more sensors, such as radar, lidar, or cameras, to detect the distance and speed of the vehicle in front. Utilizing this information, the ego vehicle can follow the leading vehicle at a safe distance. However, the performance of ACC systems is limited by the on-board sensors' range of approximately 150 meters and a field of view of approximately 20 degrees. Therefore, CACC systems have emerged to supplement onboard sensors with vehicular communication to exchange information between the vehicles [12]. Unlike ACC systems, which only rely on distance and velocity measurements, CACC can use extra information from adjacent vehicles such as their acceleration profiles [13]. Therefore, the type of information and the number of surrounding vehicles are different in ACC and CACC systems. In this paper, we assume that the ACC system can measure the position and velocity (two parameters) of the lead vehicles in the current lane, left lane, and right lane (3 vehicles). In the CACC system, on the other hand, we assume that the position, velocity, and acceleration (three parameters) of the lead and lag vehicles in the current lane, left lane, and right lane (6 vehicles), as well as the adjacent vehicles in the left and right lanes (2 vehicles), are available. ## III Dataset In this paper, we used _HighD_, which is a large-scale dataset containing high-resolution videos recorded by a drone over German highways [11]. The dataset contains the trajectories of more than 110,000 vehicles recorded at six different locations. For lane change prediction specifically, the HighD dataset includes 5,600 complete lane changes performed by the drivers, as well as data on the surrounding vehicles and the driving environment. Compared to other datasets used for lane change prediction, this dataset is larger from the lane change point of view. For example, the number of lane changes in the HighD dataset is twice that of NGSIM [14]. This is mainly due to a lower average traffic density and the larger number of lanes.
The data set has metadata which provides valuable information for lane change prediction such as the assigned ID to each vehicle, its (x,y) position, lateral/longitudinal velocity and acceleration of the vehicle, lane ID, as well as IDs of eight surrounding vehicles. Figure 1 shows the location of the ego vehicle, preceding/following vehicles (PV, FV) which are in the same lane with the ego vehicle, left preceding/ alongside/following (LP, LA, LF) vehicles which are in the adjacent lane on the left as well as right preceding/ alongside/following (RP, RA, RF) vehicles which are in the adjacent lane on the right. ## IV Proposed method In the proposed LC prediction method, a LC commences when the vehicle's lane ID changes. After finding the vehicles with LC behavior and extracting the required information we train an LSTM to predict lane changing (LC) and lane keeping (LK) actions. ### _Variables_ To train and test the LSTM, first, we selected the vehicles with LC. Then, the corresponding LC frame is detected. After finding the LC frame (\(f_{ic}\)), we select \(n\) frames before the event and use [\(f_{ic}-n\), \(f_{ic}\)] frames as our training set. In other words, parameter \(n\) indicates the time length of the data set. Similarly, \(n\) frames will be used for training the LSTM for LK action. Finally, surrounding vehicles' parameters were extracted from the _HighD_ data set which include relative distance, relative speed, and relative acceleration between surrounding vehicles and the ego vehicle. We will investigate the effect of these parameters and the number of surrounding vehicles on the LC behavior. Table 1 shows the behavior of vehicle number 48 which was in the \(3^{\text{rd}}\) lane from frame 1137 to frame 1147. At frame 1148, the vehicle moved to the \(2^{\text{nd}}\) lane. The ego vehicle was surrounded by vehicles number 45, 46, and 49. The ID value is set to 0, if no vehicle exists in the corresponding location. The input vector of the LSTM model depends on the vehicle's information, and the type of surrounding vehicles: \[x_{t}=[dp(i),dv(i),da(i)] \tag{1}\] where \(i\) shows the type of vehicle which iterates over {LA, LP, PV, RP, RA, RF, FV, LF}. Parameters _dp(i), dv(i)_ and _da(i)_ are Manhattan distances between position, velocity and acceleration of the ego vehicle and the \(i^{\text{th}}\) vehicle, respectively. The length of the input vector also depends on the number of frames before LC. \[X_{t}=[x_{t-n},...,x_{t-1},x_{t}] \tag{2}\] where \(t\) is equal to \(f_{ic}\) for the car with LC and \(n\) is the time length. ### _LSTM Model_ LSTM stands for Long Short-Term Memory, which is a variant of Recurrent Neural Networks (RNN). As shown in Figure 2, our proposed multi-LSTM network consists of two LSTM layers, each consisting of several LSTM cells. The output of the second layer goes through a fully connected layer (FC) with 32 neurons to predict the binary value of 0 or 1, representing LK and LC actions respectively. Table 2 shows other parameters of our LSTM model. Each cell receives input from the previous cells in the same layer as well as the previous layer. After processing the inputs, the cell generates an output and propagates it to the next cells. Figure 3 shows an LSTM cell which consists of four components. Cell state stores information over time. The input gate determines how much data should be entered into the memory cell. Forget gate indicates what part of data should be discarded from going into the memory cell. 
Finally, the output gate controls the output of the memory cell, determining which information from the previous time step should be kept or forgotten. This allows the network to selectively remember or forget information over time. The relationships between the LSTM cell components are as follows: \[f_{t}=\sigma(W_{xf}\,x_{t}+W_{hf}\,h_{t-1}+b_{f}) \tag{3}\] \[i_{t}=\sigma(W_{xi}\,x_{t}+W_{hi}\,h_{t-1}+b_{i}) \tag{4}\] \[o_{t}=\sigma(W_{xo}\,x_{t}+W_{ho}\,h_{t-1}+b_{o}) \tag{5}\] \[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tanh(W_{xc}\,x_{t}+W_{hc}\,h_{t-1}+b_{c}) \tag{6}\] \[h_{t}=o_{t}\odot\tanh(c_{t}) \tag{7}\] where \(\sigma\) is the sigmoid function, \(\odot\) is the element-wise product, and \(f_{t}\), \(i_{t}\) and \(o_{t}\) are the forget, input and output gating vectors, respectively. \begin{table} \begin{tabular}{c c c} **Parameter** & **Description** & **Value** \\ \hline Input Dimension & CACC (8 surrounding cars, 5 frames, dp, dv, da) & 120 \\ & ACC (3 preceding cars, 5 frames, dp, dv) & 30 \\ Output Dimension & Dimension of output layer & 1 \\ Batch Size & Number of training cases over each optimizer update & 32 \\ Hidden Layer Number & Number of LSTM layers & 2 \\ Dropout rate & The rate used in dropout layers & 0.2 \\ Number of epochs & Number of training updates & 100 \\ Loss function & Function to evaluate loss & Binary cross entropy \\ Activation Function & Function to activate output of LSTM layers & Rabin \\ Optimizer & The function to minimize loss & RMSprop \\ \end{tabular} \end{table} TABLE II: LSTM Parameters. Fig. 3: An LSTM cell. Fig. 2: The architecture of the proposed LSTM network. ## V Experimental Results This section studies the effect of the LSTM architecture, the type of information, the number of frames, and the number of surrounding vehicles on LC prediction. ### _Evaluation Metrics_ The performance of the LC prediction model can be assessed based on accuracy, precision, and recall metrics. These criteria are calculated using the false positive (FP), false negative (FN), true negative (TN), and true positive (TP) counts. \[Accuracy=\frac{\textit{TP}+\textit{TN}}{\textit{TP}+\textit{TN}+\textit{FP}+\textit{FN}} \tag{8}\] \[Precision=\frac{\textit{TP}}{\textit{TP}+\textit{FP}} \tag{9}\] \[Recall=\frac{\textit{TP}}{\textit{TP}+\textit{FN}} \tag{10}\] ### _LSTM Architecture_ In this experiment, we investigate the effect of the number of LSTM cells on LC prediction. Table 3 shows that increasing the number of cells from 8 to 128 leads to an improvement in the LSTM performance, achieving an average increase of 2% in accuracy across all tested datasets. However, when we used 256 cells, the accuracy declined to 91.97% due to overfitting, where the multi-LSTM starts memorizing the data. Therefore, 128 cells were selected for the subsequent experiments. ### _Vehicles' Information_ To assess the influence of incorporating diverse information regarding surrounding vehicles on our metrics, we conducted an analysis of three distinct scenarios: solely _dp_, both _dp_ and _dv_, and the complete set of available information encompassing _dp_, _dv_, and _da_. As illustrated in Table 4, augmenting the amount of information improves the LC accuracy. Nonetheless, the inclusion of additional data results in a significant surge in execution time, as evidenced by the increase from 540 s for solely _dp_ to 986 s and 1624 s for _dp_, _dv_ and _dp_, _dv_, _da_, respectively. 
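To make the proposed multi-LSTM and its training setup concrete, the sketch below assembles a model matching the description in Section IV and Table 2 (two LSTM layers, a 32-neuron fully connected layer, a sigmoid output for the binary LK/LC label, dropout of 0.2, binary cross-entropy loss and the RMSprop optimizer). This is only an illustrative reconstruction in Keras, not the authors' code: the input shape corresponds to the CACC case of Table 2 (5 frames of 8 vehicles with dp, dv, da each), and any detail not stated in the paper (e.g. the ReLU activation on the dense layer) is an assumption.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_FRAMES = 5          # frame set size before the lane change (Table 2 / Fig. 4)
N_FEATURES = 8 * 3    # CACC: 8 surrounding vehicles x (dp, dv, da) per frame

def build_multi_lstm(cells: int = 128) -> tf.keras.Model:
    """Two stacked LSTM layers + FC(32) + sigmoid output, as in Fig. 2 / Table 2."""
    model = models.Sequential([
        layers.Input(shape=(N_FRAMES, N_FEATURES)),
        layers.LSTM(cells, return_sequences=True),   # first LSTM layer
        layers.Dropout(0.2),
        layers.LSTM(cells),                          # second LSTM layer
        layers.Dropout(0.2),
        layers.Dense(32, activation="relu"),         # fully connected layer (activation assumed)
        layers.Dense(1, activation="sigmoid"),       # 0 = lane keeping (LK), 1 = lane change (LC)
    ])
    model.compile(optimizer="rmsprop",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy usage with random stand-in data shaped like the HighD-derived sequences X_t of Eq. (2):
model = build_multi_lstm()
X = np.random.rand(64, N_FRAMES, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, batch_size=32, epochs=1, verbose=0)
```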
### _Frame Set Size_ The time length (frame set size) before the lane change, which determines the input sequence length, has a significant effect on the accuracy of the predictions made by an LSTM network. If the frame set size is too short, the network may not have enough information to make accurate predictions. On the other hand, if the frame set size is too long, the network may suffer from irrelevant information, high computation, and the vanishing gradient problem. Figure 4 shows that a frame set of size 5 produces the best results for LC prediction. ### _ACC and CACC Systems_ The number and location of vehicles with respect to the ego vehicle are critical factors in the design of our system. In this regard, we present a comprehensive analysis of the performance metrics associated with the four different scenarios shown in Table 5. Experiments demonstrate that incorporating information about the alongside vehicles fails to enhance the accuracy of the network, given that the presence of a vehicle in this region typically results in an LK action. Furthermore, a comparison between the first and the last row of Table 5 reveals a significant improvement in all the metrics when utilizing all surrounding vehicles, in comparison to the scenario of utilizing solely the three preceding vehicles. ## VI Conclusion This study explores the impact of vehicle-to-vehicle communication and the type of vehicle information on lane change prediction accuracy. Results demonstrate that the proposed model, which employs an LSTM trained on the HighD dataset, achieves a significant improvement in accuracy as the number of surrounding vehicles and the amount of information about them increase. By changing our scenario from ACC to CACC, a 33.28% increase in accuracy was observed. Increasing the number of LSTM cells to 128 and selecting a frame set size of 5 leads to maximum accuracy. Additionally, using more information about other vehicles increases lane change prediction accuracy at the cost of a higher computational burden. ## Acknowledgment We acknowledge the financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Catalyst Grant.
2310.05046
FakeGPT: Fake News Generation, Explanation and Detection of Large Language Models
The rampant spread of fake news has adversely affected society, resulting in extensive research on curbing its spread. As a notable milestone in large language models (LLMs), ChatGPT has gained significant attention due to its exceptional natural language processing capabilities. In this study, we present a thorough exploration of ChatGPT's proficiency in generating, explaining, and detecting fake news as follows. Generation -- We employ four prompt methods to generate fake news samples and prove the high quality of these samples through both self-assessment and human evaluation. Explanation -- We obtain nine features to characterize fake news based on ChatGPT's explanations and analyze the distribution of these factors across multiple public datasets. Detection -- We examine ChatGPT's capacity to identify fake news. We explore its detection consistency and then propose a reason-aware prompt method to improve its performance. Although our experiments demonstrate that ChatGPT shows commendable performance in detecting fake news, there is still room for its improvement. Consequently, we further probe into the potential extra information that could bolster its effectiveness in detecting fake news.
Yue Huang, Lichao Sun
2023-10-08T07:01:07Z
http://arxiv.org/abs/2310.05046v2
# Harnessing the Power of ChatGPT in Fake News: ###### Abstract The rampant spread of fake news has adversely affected society, resulting in extensive research on curbing its spread. As a notable milestone in large language models (LLMs), ChatGPT has gained significant attention due to its exceptional natural language processing capabilities. In this study, we present a thorough exploration of ChatGPT's proficiency in generating, explaining, and detecting fake news as follows. _Generation_ - We employ four prompt methods to generate fake news samples and prove the high quality of these samples through both self-assessment and human evaluation. _Explanation_ - We obtain nine features to characterize fake news based on ChatGPT's explanations and analyze the distribution of these factors across multiple public datasets. _Detection_ - We examine ChatGPT's capacity to identify fake news. We explore its detection consistency and then propose a reason-aware prompt method to improve its performance. Although our experiments demonstrate that ChatGPT shows commendable performance in detecting fake news, there is still room for its improvement. Consequently, we further probe into the potential extra information that could bolster its effectiveness in detecting fake news. ## 1 Introduction Fake news has raised significant concerns all over the world Zhou and Zafarani (2020). For example, malicious actors spread fake news to gain advertising revenue Rao (2022), influence people's opinions Faris et al. (2017), and even interfere with elections Allcott and Gentzkow (2017). Therefore, both industry and academia pay much attention to studying fake news nowadays. Most existing fake news consists of text-based messages spreading on social networks, so much related research utilizes language models (e.g., GPT-2 Zellers et al. (2019), BERT Singhal et al. (2020)) to generate and detect fake news. Recently, the most popular large language model, i.e., ChatGPT Zhou et al. (2023), has received widespread acclaim for its exceptional performance across various domains, including code bug fixing Xia and Zhang (2023), text translation Jiao et al. (2023); Gao et al. (2023), and text summarization Gao et al. (2023). However, there has been limited exploration of ChatGPT for studying fake news. Even though it has been released for nearly eight months, it is still among the top performers of all popular large language models (LLMs)1. Footnote 1: [https://huggingface.co/spaces/ludwigstump/llm-leaderboard](https://huggingface.co/spaces/ludwigstump/llm-leaderboard) Due to its popularity and strong capabilities, ChatGPT presents both opportunities and challenges within the domain of fake news research. Despite its potential, recent studies Deshpande et al. (2023); Li et al. (2023) have raised concerns about ChatGPT being exploited for malicious purposes, which makes it possible to generate fake news, as shown in Figure 1. Figure 1: Multiple prompts for fake news generation through ChatGPT. The words in red indicate details of the generated fake news. As a result, it is vital to explore ChatGPT's capacity for fake news generation in order to address this severe problem. Besides generating fake news via ChatGPT, we should also leverage its ability for fake news explanation and detection. For example, a significant advantage of ChatGPT lies in its exceptional understanding capability, which has been demonstrated in recent studies such as hate speech explanation Huang et al. (2023) and emoji understanding Das et al. (2023). 
This has motivated us to utilize ChatGPT for fake news understanding, by providing explanations that demonstrate a certain level of comprehension and reasoning. Moreover, it is crucial to investigate the performance of ChatGPT in fake news detection, identify its limitations, and devise strategies to enhance its detection capabilities. In this paper, we conduct an in-depth exploration of fake news generation, detection, and explanation via ChatGPT. In Section 3, we first investigate four possible prompting methods that enable ChatGPT to generate fake news. To evaluate the quality of the generated samples, we conduct both self-evaluation and human evaluation and find that the news generated by ChatGPT is highly deceptive. Then we use ChatGPT to explain fake news and identify nine features that define fake news in Section 4. Based on these features from fake news explanations, we propose an effective reason-aware prompting method to enhance ChatGPT's ability to detect fake news in Section 5. Our experiments demonstrate that the reason-aware prompt improves ChatGPT's fake news detection capabilities across most datasets. We discover that ChatGPT exhibits impressive performance in detecting fake news in some datasets, but there is still room for improvement. Therefore, we delve into additional information (e.g., context information of fake news) that could assist ChatGPT in further enhancing fake news detection. Our contributions in this paper can be summarized as follows: * We examine ChatGPT's capability to generate fake news using four prompting methods. The results from self-evaluation and human evaluation show that the generated samples are of high quality, comparable to real-world news. * We investigate ChatGPT's capacity to explain fake news and summarize nine features that define fake news across nine datasets, which offers some insights for future work. * We assess ChatGPT's effectiveness in detecting fake news. Based on the summarized features from the above explanations, we propose a reason-aware prompting method to enhance its detection capability. Experimental results indicate that while ChatGPT exhibits a strong ability to detect fake news, there is still room for improvement. Therefore, we explore additional information that can assist ChatGPT in detecting fake news more effectively. ## 2 Related Work **Fake News Detection and Generation.** In recent years, there has been a considerable body of research on the detection and generation of fake news. Much research has focused on using additional information about fake news. For example, EANN Wang et al. (2018) introduced an event discriminator to predict event-auxiliary labels, MVAE Khattar et al. (2019) used a variational autoencoder to discover correlations between modalities, and SpotFake+ Singhal et al. (2020) employed transfer learning to extract features from pre-trained models. Some researchers focused on consistency between modalities for fake news detection Xue et al. (2021); Sun et al. (2021). Graph networks were also utilized in several studies Ren et al. (2020); Xu et al. (2022); Wang et al. (2020); Mehta et al. (2022), with excellent results. Meanwhile, users' historical and social engagements are used in UFFD Dou et al. (2021). Some explainable models for fake news detection have been proposed, such as dEFEND Shu et al. (2019) and XFake Yang et al. (2019). In the field of fake news generation, Grover Zellers et al. 
(2019) introduced a controllable language generation model that can generate fake news and detect generated fake news. In addition, a method was proposed by Shu et al. (2021) to generate news by learning from external knowledge and using a claim reconstructor. **Evaluation of ChatGPT.** Several studies have focused on evaluating ChatGPT's performance across various tasks. For instance, ChatGPT was evaluated on common NLP tasks Bang et al. (2023); Qin et al. (2023), demonstrating superior zero-shot learning performance. The translation capabilities of ChatGPT were also explored in recent studies Jiao et al. (2023); Gao et al. (2023). Some research also studied its ability in explaining implicit hate speech Huang et al. (2023), personality assessment Rao et al. (2023), and human-like summarization Gao et al. (2023). Furthermore, ChatGPT also shows great potential in bug fixing (Xia and Zhang, 2023) and text data augmentation (Dai et al., 2023). ## 3 Fake News Generation via ChatGPT In this section, we first investigate how to use ChatGPT to generate fake news via prompts. Here, we explore four prompt methods for generation. In order to fairly evaluate the quality of the generated fake news, we conduct both self-evaluation and human evaluation on the generated samples. ### Prompt Methods As we know, in many instances, when we ask ChatGPT to generate potentially harmful content (e.g., fake news), ChatGPT will refuse to provide a response (e.g., say something like "As an AI language model, I cannot...") because of the utilization of its moderation mechanism (Markov et al., 2022) and the technique of reinforcement learning from human feedback (RLHF) (Bai et al., 2022). To avoid this, we employ the following four methods, as shown in Figure 2, to prompt ChatGPT into generating fake news. We also provide a comparison of these methods from two perspectives, generating targeted content and generating extreme content, in Appendix A.2. **(a) Altering text meaning.** This prompting method entails modifying the original meaning of a given text. To be specific, we prompt ChatGPT to change the meaning of the given text, resulting in a meaning different from the initial one. The generated text may conflict with the facts in the original text, which means it may be a piece of fake news. **(b) Inventing stories.** This method entails creating fictional stories by providing the outline of the target story and prompting ChatGPT to generate this story with details. Therefore, the generated story with unreal information may serve as fake news. **(c) Creating imaginary text.** This approach focuses on generating fictional content. We provide the original text and prompt ChatGPT to transform it into a fabricated piece. The method is different from prompt method (b) because the content generated by ChatGPT is arbitrary, while in (b), we can specify the generated content by providing an outline of the story. **(d) Multiple prompts.** The above three methods all use a single prompt to generate fake news. However, they are not direct (i.e., they do not generate news-like content directly) and often fail to generate the targeted text due to OpenAI's mechanisms against harmful content. Therefore, inspired by recent studies (Shaikh et al., 2022; Li et al., 2023), we devised a three-step prompt strategy (i.e., multiple prompts) to generate targeted fake news that can evade ChatGPT's filters. We show an example of this prompt method in Figure 1. 
First, we employ the "Topic Prompt" to guide the conversation toward a news-related subject, prompting ChatGPT to generate content indirectly associated with the desired news topic. Secondly, we utilize the "Deep Prompt" to generate a more specific news article. However, these initial news articles may still lack critical details, which is where the third step comes in. Thirdly, we use the "News Augmentation Prompt" to augment the news content generated by ChatGPT, adding specific details such as time, location, and media source to make the news article more realistic and believable. ### Quality of Generated Samples We use the above four methods to generate 40 pieces of fake news. To evaluate the generation quality of ChatGPT, we conduct both self-evaluation and human evaluation. **Self-evaluation.** For self-evaluation, we performed fake news detection using ChatGPT itself. To minimize the impact of contextual semantics during the conversation, we created a new conversation Figure 2: Four kinds of the prompt template. for each sample during evaluation. Additionally, to achieve more realistic and accurate results, we categorized ChatGPT's outputs into three distinct categories: fake news, real news, and uncertain. We utilized a prompt template such as "_Please evaluate the authenticity of the following news. You can respond with 'fake','real', or 'uncertain'"_. The experiment revealed that out of the 40 fake news samples, ChatGPT accurately identified 29 fake news instances (an accuracy of 72.5\(\%\)). However, it judged nine instances as real news and two instances as uncertain cases, suggesting a slight difficulty in detecting its own generated content. **Human evaluation.** To assess the real-world effectiveness of ChatGPT's generated samples, we conduct the human evaluation by handing out questionnaires. The details of human evaluation can be found in Appendix A.3. We totally collected 294 data items during human evaluation, consisting of 223 items about fake news and 71 items about real news. Overall, we observed that humans achieved an accuracy of only 54.8\(\%\) in identifying the generated fake news, highlighting the challenge of distinguishing these instances as fake. Notably, one sample exhibited the lowest accuracy, with only 10 out of 33 judgments being correct (a mere 33.3\(\%\) accuracy). This suggests that some generated samples effectively deceive human judgment. Furthermore, we investigated the reasons why humans think the given news is fake (as shown in Table 2). "Lack of evidence or credible source" is the primary reason, comprising 36\(\%\). This discovery aligns with the observations in Section 4, emphasizing the significance of incorporating additional details to improve the generation quality. The factor ranks second is "unauthoritative or informal expressions," indicating the need for ChatGPT to enhance its language style when generating news-like content. Furthermore, "fact conflict" constitutes 18\(\%\) of the cases, implying that generated news may include factual inconsistencies (e.g., hallucination [1]), highlighting the importance of fact-checking for its outputs. Overall, the above results indicate that leveraging certain prompt ways allows ChatGPT to produce high-quality fake news, closely resembling real-world news. \begin{table} \begin{tabular}{c l l} \hline \hline **Person** & **Per. 
(\%)** \\ \hline Fact Conflict & 18.4 \\ Unauthoritative or informal expressions & 23.9 \\ Oversimplification or emotional bias & 13.5 \\ Lack of evidence or credible source & 36.2 \\ Lack of context & 6.1 \\ Other & 1.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Percentage of reasons in human evaluation. \begin{table} \begin{tabular}{c l l} \hline \hline **Option** & **Reason** & **Description** \\ \hline **A** & Emotional bias or misleading intent & This explanation suggests that fake news is characterized by an emotional bias, which can include an excessively aggressive portrayal of a subject or an attempt to manipulate readers to achieve a hidden agenda. \\ \hline B & Lack of evidence or credible sources & This reason indicates that fake news lacks credible evidence to support its claims. \\ \hline C & Conflicting facts & This reason suggests that fake news conflicts with established facts, such as wrong information about people or events. \\ \hline D & Informal statements, expressions, or vague language & This reason highlights that the language used in fake news may not be formal, or may be vague or ambiguous. \\ \hline E & Insufficient supporting materials & This reason indicates that although the news may have mentioned the source of an event or provided relevant evidence, the evidence is not sufficient to support its claims. \\ \hline F & Lack of context or taken out of context & This reason indicates that fake news may lack relevant context, such as comments, retweets and user information that provide additional information. \\ \hline **G** & Misinterpretation or misquotation & This reason suggests that fake news may misinterpret or misquote facts, leading to inaccurate or false claims. \\ \hline H & Oversimplification or exaggeration & This reason highlights that fake news may oversimplify or exaggerate information, leading to false claims. \\ \hline I & Doctored images or videos & This reason indicates that the images or videos mentioned in the news text may be altered or misrepresented, making them untrustworthy. \\ \hline J & Other & ChatGPT must specify a reason if the above options don’t match its answer. \\ \hline \hline \end{tabular} \end{table} Table 1: Summary reason from fake news explanation. ## 4 Explanation of Fake News via ChatGPT In this section, we evaluate ChatGPT's capacity to provide explanations on given fake news. Our goal is to examine the factors that contribute to defining fake news. The explanation process comprises two stages: reason summary and reason selection, which are shown in Figure 3(a) and Figure 3(b) separately. By analyzing the distribution of these nine factors, we found that these reasons (factors), to different extents, characterize fake news and may provide insights for future work. ### Reason Summary Firstly, we select some fake news from nine public datasets and ask ChatGPT to explain why these pieces of news are fake. Then we select a subset from these explanations and manually summarize them, yielding elementary reasons. We consult ChatGPT to determine if any of these reasons overlap and to suggest additional reasons. After several iterations of this process, we finally identify nine reasons that ChatGPT offers for why a given piece of news is fake. The nine explainable reasons are summarized in Table 1. 
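As a rough illustration of this two-stage explanation pipeline (Figure 3(a)-(b)), the sketch below first asks ChatGPT to explain why a given piece of news is fake and then asks it to map that explanation onto the options of Table 1. The `ask_chatgpt` helper is a hypothetical placeholder for whatever ChatGPT interface is used, and the prompt wording is an assumption based on the description above rather than the authors' verbatim templates.

```python
# Sketch of the two-stage explanation process (reason summary + reason selection).
# `ask_chatgpt` is a hypothetical stand-in for an actual ChatGPT API call.
OPTIONS = {
    "A": "Emotional bias or misleading intent",
    "B": "Lack of evidence or credible sources",
    "C": "Conflicting facts",
    "D": "Informal statements, expressions, or vague language",
    "E": "Insufficient supporting materials",
    "F": "Lack of context or taken out of context",
    "G": "Misinterpretation or misquotation",
    "H": "Oversimplification or exaggeration",
    "I": "Doctored images or videos",
    "J": "Other (specify a reason)",
}

def ask_chatgpt(prompt: str) -> str:
    # Placeholder: replace with a real call to the ChatGPT API.
    return "B"

def explain_and_select(news_text: str) -> tuple[str, str]:
    """Stage 1: free-form explanation. Stage 2: selection among options A-J."""
    explanation = ask_chatgpt(
        f"The following news is fake. Explain why it is fake.\nNews: {news_text}"
    )
    option_list = "\n".join(f"{k}: {v}" for k, v in OPTIONS.items())
    selection = ask_chatgpt(
        "Given the fake news below, select one or more reasons (letters only) "
        f"that best explain why it is fake.\n{option_list}\nNews: {news_text}"
    )
    return explanation, selection

explanation, selection = explain_and_select("5G towers can spread Covid-19.")
```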
### Reason Selection After summarizing the explanations, we ask ChatGPT to select reasons from these nine options (potentially selecting more than one option) or provide its own reason if none of the listed options apply when presented with a fake news sample. The distribution of single options across different datasets is shown in Figure 4. Letters A to I represent the nine reasons, respectively, and J represents other reasons. We also list some explanations and their mapped options in Appendix H. ### Analysis In Figure 4, we noticed that the distribution of options across the nine datasets is generally similar, with slight variations in the distribution of specific options. Reason B (i.e., "not providing relevant evidence") is the most prevalent characteristic of fake news across almost all datasets. This observation aligns with the findings of some prior research (Xu et al., 2022; Popat et al., 2018) which focuses on using evidence information. In contrast, in the Covid-19 dataset, option A (i.e., "misleading intentions") ranks highest, implying that much fake news in this dataset may have intentions such as inciting panic or showcasing bravado. This insight highlights the significance of considering emotional information in news, as studied by previous research (Zhang et al., 2021; Zhu et al., 2022). Additionally, we discovered that reason D (i.e., "linguistic style") is the third most common reason across most datasets, especially in the FakeNewsNet dataset, where reasons D and B are nearly equally prevalent. This observation suggests that utilizing the linguistic style of news may improve fake news detection, as shown in previous research (Zhu et al., 2022; Przybyla, 2020). Moreover, we noticed that the proportion of reason C (i.e., "factual errors") is relatively higher in the Covid-19 and Liar datasets compared to other datasets. This trend may be due to the frequent presence of factual errors in these datasets. For instance, the Covid-19 dataset includes content with obvious factual conflicts, such as the news asserting that 5G can spread Covid-19; identifying such claims showcases a certain fact-checking ability of ChatGPT, which is also a popular recent research topic for LLMs Li et al. (2023c). In addition, we also observed that these reasons are interrelated, as seen in the multi-option distribution, and we analyze them in Appendix B. Figure 4: Distribution of reasons behind fake news (single option). Figure 3: Fake news summary (a), reason selection (b), original prompt (c) and reason-aware prompt (d). 
For the 3-class task, we use four metrics: Acc-1, Acc-2, Acc-3 and F1 score, which are introduced as follows: **Acc-1 and F1 Score.** We remove the samples with "unclear" predictions and analyze the prediction results of the remaining samples (i.e., treating it as a binary classification task), using two metrics: accuracy (i.e., Acc-1) and F1 Score. **Acc-2.** We retain the samples with "unclear" predictions and regard all of them as misclassified samples, which we measure using Accuracy-2 (Acc-2). This metric potentially indicates how frequently ChatGPT predicts the "unclear" label for a given sample. **Acc-3.** We remove the samples with "unclear" predictions and analyze the predictions of the remaining samples while maintaining a positive-to-negative sample ratio of 1:1. This metric, denoted as Accuracy-3 (Acc-3), aims to prevent any biases introduced by the uncertain samples. For instance, if the uncertain samples contain more real news samples, the model's high accuracy in predicting real news may lead to a bias in overall accuracy. In addition, to help readers understand these metrics better, we show their mathematical formulas in Appendix D.4. ### Consistency of ChatGPT It has been observed that ChatGPT exhibits inconsistency in various recent evaluations Jang and Lukasiewicz (2023); Manakul et al. (2023). Therefore, we first investigated the consistency of ChatGPT in detecting fake news. Here, we define consistency as ChatGPT producing the same answer for a given sample across \(n\) repeated tests (we show the details of the consistency metric in Appendix C). Specifically, we ask ChatGPT to judge whether the given news is fake or real (the prompt template is shown in Appendix D.3). The consistency results are presented in Figure 5, which suggest that _not all_ of ChatGPT's detection results are fully reliable. We observed that as the number of tests increased from \(n\)=2 to \(n\)=10, the consistency on most datasets decreased significantly. For instance, the consistency on the Liar dataset without context dropped to only 66.1\(\%\) when \(n\)=10. In contrast, the Twitter15&16 dataset maintained a high consistency of over 90\(\%\) from \(n=2\) to \(n=10\), suggesting that ChatGPT is highly consistent on this dataset. Additionally, we show the inconsistency distribution in real and fake news in Appendix E. Figure 5: Consistency results. We tested the consistency results for \(n\)=2, 5, 10. ### Reason-aware Prompt In this section, we propose a reason-aware prompt method to enhance ChatGPT's performance in detecting fake news. We observed that the recall rate of ChatGPT on fake news is significantly low when prompted with the normal template (as shown in Appendix D.3), indicating that ChatGPT tends to misclassify fake news as true news. We attribute this to two possible reasons: first, ChatGPT lacks a comprehensive understanding of the distinct characteristics of fake news; second, ChatGPT tends to be conservative when detecting fake news (the number of "real" predictions is larger than that of "fake" predictions). To address these limitations and improve ChatGPT's detection capability, we introduce a reason-aware prompt method, as illustrated in Figure 3. We have added a summary of Table 1 to our prompt template, which not only describes the features of fake news, but also serves as a cue that subtly increases ChatGPT's inclination to predict samples as fake news. 
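Before turning to the analysis, the following sketch gives one plausible implementation of the 3-class metrics defined in the Metrics subsection (Acc-1, Acc-2, Acc-3 and F1), whose exact formulas the paper defers to its Appendix D.4. It is an assumption rather than the authors' code; in particular, the 1:1 rebalancing for Acc-3 is realised here by randomly down-sampling the majority class among the non-"unclear" samples.

```python
import random
from sklearn.metrics import accuracy_score, f1_score

def three_class_metrics(y_true, y_pred, seed=0):
    """One reading of Acc-1/Acc-2/Acc-3/F1 for 3-class predictions.
    y_true: 'fake'/'real' gold labels; y_pred: 'fake'/'real'/'unclear' predictions."""
    # Acc-2: keep "unclear" predictions and count all of them as misclassified.
    acc2 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    # Acc-1 and F1: drop "unclear" predictions and score the rest as a binary task.
    kept = [(t, p) for t, p in zip(y_true, y_pred) if p != "unclear"]
    t_kept = [t for t, _ in kept]
    p_kept = [p for _, p in kept]
    acc1 = accuracy_score(t_kept, p_kept)
    f1 = f1_score(t_kept, p_kept, pos_label="fake")

    # Acc-3: drop "unclear", then down-sample to a 1:1 fake/real ratio
    # (assumption: balancing is done by randomly sub-sampling the majority class).
    rng = random.Random(seed)
    fakes = [(t, p) for t, p in kept if t == "fake"]
    reals = [(t, p) for t, p in kept if t == "real"]
    n = min(len(fakes), len(reals))
    balanced = rng.sample(fakes, n) + rng.sample(reals, n)
    acc3 = accuracy_score([t for t, _ in balanced], [p for _, p in balanced])

    return {"Acc-1": acc1, "Acc-2": acc2, "Acc-3": acc3, "F1": f1}
```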
### Analysis The results in nine different datasets are shown in Table 4 and Table 3, including the 2-class task (without the "unclear" prediction ) and 3-class task (with the "unclear" prediction). It is noticeable that ChatGPT demonstrates a relatively strong ability to detect fake news, though there remains room for improvement. Overall, ChatGPT achieved satisfactory results on some datasets, with Acc-1 surpassing 70\(\%\) for 8 out of 11 tested datasets in the 3-class scenario, and the highest accuracy reaching 82.6\(\%\). Nonetheless, there is still potential for improvement on certain datasets, such as the Liar dataset and the Chinese Rumor dataset. Also, we observed that the introduction of the "unclear" class improved ChatGPT's prediction performance when comparing Acc-1 with Acc. This suggests that ChatGPT's uncertainty for some samples can negatively impact prediction accuracy. Furthermore, reason-aware prompts enhance ChatGPT's fake news detection capabilities on most datasets. We observed significant improvements in predictions on all datasets with 2-class when using reason-aware prompts. Additionally, reason-aware prompts also yielded improved 3-class results on most datasets. Specifically, the maximum improvement was achieved on the Kaggle dataset, with increases of 19.7\(\%\) in Acc, 9.2\(\%\) in Acc-1, 14.5\(\%\) in Acc-2, and 14.6\(\%\) in Acc-3. In addition, extra information including context and comment generally enhance ChatGPT's fake news detection capabilities. Comparing the results between _(w/o)_ and _(w/)_, the Chinese Rumor dataset and Weibo21 dataset exhibit significant \begin{table} \begin{tabular}{c c|c c|c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c|}{**Original**} & \multicolumn{3}{c}{**RA.**} \\ \cline{3-6} & & **Acc. \(\uparrow\)****F1. \(\uparrow\)** & **Acc. \(\uparrow\)****F1. \(\uparrow\)** \\ \hline \multirow{2}{*}{Chinese Rumor} & _(w/o)_ & 0.600 & 0.574 & 0.677 & 0.677 \\ & _(w/)_ & 0.681 & 0.677 & 0.776 & 0.776 \\ \hline \multirow{2}{*}{Liar} & _(w/o)_ & 0.631 & 0.606 & 0.658 & 0.699 \\ & _(w/)_ & 0.644 & 0.615 & 0.630 & 0.624 \\ \hline \multirow{2}{*}{ Weibo21} & _(w/o)_ & 0.620 & 0.601 & 0.722 & 0.721 \\ & _(w/)_ & 0.743 & 0.711 & 0.780 & 0.779 \\ \hline \multirow{2}{*}{Covid-19} & 0.746 & 0.731 & 0.778 & 0.770 \\ & _(w/)_ & 0.610 & 0.571 & 0.646 & 0.620 \\ \hline \multirow{2}{*}{Kaggle} & 0.577 & 0.499 & 0.774 & 0.763 \\ & 0.756 & 0.750 & **0.844** & **0.842** \\ \hline \multirow{2}{*}{FakenewsAMT} & **0.795** & **0.787** & 0.823 & 0.817 \\ & 0.632 & 0.598 & 0.674 & 0.658 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison results without unclear prediction. RA means reason-aware prompt. The value in bold is the highest in each column. \begin{table} \begin{tabular}{c c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c|}{**Original**} & \multicolumn{3}{c}{**RA.**} \\ \cline{2-9} & & **Acc-1**\(\uparrow\)****Acc-2**\(\uparrow\)****Acc-3**\(\uparrow\)****F1. \(\uparrow\)****Acc-1**\(\uparrow\)****Acc-2**\(\uparrow\)****Acc-3**\(\uparrow\)****F1. \(\uparrow\)****F1. improvements in various metrics when utilizing additional information. This implies that additional information may augment the semantic understanding of news. However, for the three-class classification, employing post-context information in the Liar dataset led to a decrease in Acc-1 and Acc-3, but an increase in Acc-2. 
A possible explanation for this outcome is that context information decreases the probability of examples being predicted as "unclear," yet raises the probability of them being misclassified as "fake" or "real." ### More Information Behind the Unclear To explore how to reduce the "unclear" labels predicted by ChatGPT in the three-class task ("real", "fake" and "unclear"), we prompt ChatGPT with a question: _"What additional information do you need to make a more accurate judgment?"_. This prompt is presented to ChatGPT for the samples classified as "unclear". Similar to Section 4.2, we offer ChatGPT four pre-defined options to choose from, which are listed in Box 5.6. Then we measure their proportions on different datasets (as shown in Table 5). **A**: External knowledge refers to factual information, expert suggestions, or data reliability. **B**: Multimodal information includes images, videos, or audio. **C**: Context information encompasses comments, reposts, post time or post location. **D**: Speaker's information includes user actions, information from social media accounts, or the user's history of posts. We find that for most datasets, option A consistently ranks highest, implying that ChatGPT lacks some external knowledge to accurately assess news authenticity. This challenge can be tackled by incorporating extra knowledge such as a knowledge graph (Dun et al., 2021) or a knowledge base (Hu et al., 2021). Options A, C, and D tend to occupy the second rank across different datasets. For instance, when addressing fake news originating from social media, one might need to consider using information related to comments (Khoo et al., 2020; Yang et al., 2021), reposts, or posts (option C), or take into account the users' preferences (Dou et al., 2021) and information from users' profiles (Shu et al., 2019) (option D). Additionally, we found that these options are not mutually exclusive, and ChatGPT may yield results for multiple options (we only consider two-option combinations due to the low frequency of samples with three or more options). Consequently, it is crucial to merge various kinds of extra information for fake news detection. ## 6 Conclusion In this study, we conducted an exploration into the capabilities of ChatGPT in generating, explaining, and detecting fake news. We found that some prompts enable ChatGPT to generate deceptive fake news, underscoring its potential harm. Then we identified nine features of fake news via ChatGPT, which may serve as a foundation for future research. Additionally, we enhanced the effectiveness of ChatGPT in detecting fake news by introducing the reason-aware prompt. Despite ChatGPT's promising performance on some datasets, there is still room for improvement. Finally, we investigated the extra information that may help ChatGPT detect fake news better. Overall, this paper provides insights into intelligent information governance and emphasizes the need for further research to fully leverage the capabilities of LLMs. 
\begin{table} \begin{tabular}{c c|c c c c|c c c c c} \hline \hline \multicolumn{2}{c|}{**Dataset**} & \multicolumn{1}{c}{**A**} & \multicolumn{1}{c}{**B**} & \multicolumn{1}{c|}{**C**} & \multicolumn{1}{c|}{**D**} & \multicolumn{1}{c}{**AB**} & \multicolumn{1}{c}{**AC**} & \multicolumn{1}{c}{**AD**} & \multicolumn{1}{c}{**BC**} & \multicolumn{1}{c}{**BD**} & \multicolumn{1}{c}{**CD**} \\ \hline \multirow{2}{*}{Chinese Rumor} & _(w/o)_ & 27.27 & 17.11 & **16.22** & 18.36 & 3.92 & 4.99 & 4.99 & 2.50 & 2.67 & 1.97 \\ & _(w/o)_ & 35.03 & **12.69** & 20.30 & 18.78 & 1.52 & 3.55 & 5.08 & 0.51 & 1.52 & 1.02 \\ \hline \multirow{2}{*}{Liar} & _(w/o)_ & 31.76 & 7.03 & 18.46 & 21.32 & 1.98 & 6.37 & 7.36 & 1.65 & 0.99 & 3.08 \\ & _(w/o)_ & 31.76 & **12.83** & 17.35 & 19.43 & 2.80 & 4.87 & 6.24 & 1.50 & 1.28 & 1.94 \\ \hline \multirow{2}{*}{Weibo21} & _(w/o)_ & 30.10 & **14.26** & 14.85 & 21.78 & 2.38 & 4.16 & 7.32 & 1.98 & 1.78 & 1.39 \\ & _(w/o)_ & 34.21 & **12.39** & 19.20 & 17.63 & 2.79 & 4.71 & 5.41 & 1.22 & 0.87 & 1.57 \\ \hline \multirow{2}{*}{Covid-19} & 31.43 & **12.56** & 17.46 & 19.33 & 2.92 & 5.14 & 6.19 & 1.46 & 1.29 & 2.22 \\ & 29.97 & **11.36** & 17.98 & 18.93 & 3.47 & 6.31 & 5.99 & 1.26 & 1.26 & 3.47 \\ \multirow{2}{*}{Kage} & 22.22 & 22.59 & **14.81** & 21.85 & 2.96 & 2.96 & 4.44 & 2.59 & 3.35 & 2.23 \\ & Twitter15\&16 & 28.90 & **12.93** & 17.87 & 20.15 & 1.90 & 6.08 & 5.70 & 1.52 & 2.66 & 2.28 \\ \hline \hline \end{tabular} \end{table} Table 5: The percentage (\(\%\)) of different types of additional information., and represents rank 1, 2 and 3 percentage. We didn’t test Celebrity and FakeNewsAMT datasets due to their small size of “unclear” samples. ## Ethics Statement Our findings indicate that ChatGPT can generate extreme and targeted false news. Thus, we advise researchers to use caution when employing language models like ChatGPT and to effectively handle any harmful content that may arise. Simultaneously, we emphasize the potential of language models in combating disinformation and advocate for responsible utilization. Regarding the human evaluation section, we ensured that participants agreed to our data collection agreement before collecting any information, and we treated participant information with utmost care. We promise to be responsible for personal data and will not disclose any personal data ## Limitations In this paper, our primary focus has been on examining the performance of ChatGPT specifically in the domain of fake news generation, explanation, and detection, without evaluating other large language models. Moreover, our evaluation has been limited to a dataset consisting of only 5200 samples, and conducting a larger-scale evaluation would contribute to the overall reliability of the findings. Additionally, given the black-box nature of large language models (LLMs), it remains challenging to definitively ascertain why reason-aware prompts are effective in fake news detection.
2310.15255
An ALMA Survey of M-dwarfs in the Beta Pictoris Moving Group with Two New Debris Disc Detections
Previous surveys in the far-infrared have found very few, if any, M-dwarf debris discs among their samples. It has been questioned whether M-dwarf discs are simply less common than earlier types, or whether the low detection rate derives from the wavelengths and sensitivities available to those studies. The highly sensitive, long wavelength Atacama Large Millimetre/submillimetre Array can shed light on the problem. This paper presents a survey of M-dwarf stars in the young and nearby Beta Pictoris Moving Group with ALMA at Band 7 (880\,$\mu$m). From the observational sample we detect two new sub-mm excesses that likely constitute unresolved debris discs around GJ\,2006\,A and AT\,Mic\,A and model distributions of the disc fractional luminosities and temperatures. From the science sample of 36 M-dwarfs including AU\,Mic we find a disc detection rate of 4/36 or 11.1$^{+7.4}_{-3.3}$\% that rises to 23.1$^{+8.3}_{-5.5}$\% when adjusted for completeness. We conclude that this detection rate is consistent with the detection rate of discs around G and K type stars and that the disc properties are also likely consistent with earlier type stars. We additionally conclude that M-dwarf stars are not less likely to host debris discs, but instead their detection requires longer wavelength and higher sensitivity observations than have previously been employed.
Patrick F. Cronin-Coltsmann, Grant M. Kennedy, Quentin Kral, Jean-François Lestrade, Sebastian Marino, Luca Matrà, Mark C. Wyatt
2023-10-23T18:04:43Z
http://arxiv.org/abs/2310.15255v1
# An ALMA Survey of M-dwarfs in the Beta Pictoris Moving Group with Two New Debris Disc Detections ###### Abstract Previous surveys in the far-infrared have found very few, if any, M-dwarf debris discs among their samples. It has been questioned whether M-dwarf discs are simply less common than earlier types, or whether the low detection rate derives from the wavelengths and sensitivities available to those studies. The highly sensitive, long wavelength Atacama Large Millimetre/submillimetre Array can shed light on the problem. This paper presents a survey of M-dwarf stars in the young and nearby Beta Pictoris Moving Group with ALMA at Band 7 (880 \(\mu\)m). From the observational sample we detect two new sub-mm excesses that likely constitute unresolved debris discs around GJ 2006 A and AT Mic A and model distributions of the disc fractional luminosities and temperatures. From the science sample of 36 M-dwarfs including AU Mic we find a disc detection rate of 4/36 or \(11.1^{+7.4}_{-3.3}\) % that rises to \(23.1^{+8.3}_{-5.5}\)% when adjusted for completeness. We conclude that this detection rate is consistent with the detection rate of discs around GJ 4 and K type stars and that the disc properties are also likely consistent with earlier type stars. We additionally conclude that M-dwarf stars are not less likely to host debris discs, but instead their detection requires longer wavelength and higher sensitivity observations than have previously been employed. keywords: circumstellar matter - planetary systems - stars: individual: GJ 2006A - stars: individual: AT Mic - submillimetre: planetary systems ## 1 Introduction M-dwarfs are the most abundant type of star in the sky (Ledrew, 2001), and these stars have a multitude of detected planets (e.g. Bonfils et al., 2013; Dressing and Charbonneau, 2015; Mulders et al., 2015). However, when it comes to debris discs M-dwarfs are distinctly lacking. The far-IR Herschel DEBRIS survey detected infrared excesses around 17% of FGK type stars (Sibthorpe et al., 2018) and 24% of A-type stars (Thureau et al., 2014), but only detected two excesses around M-types (GJ 581; Fomalhaut C; Lestrade et al., 2012; Kennedy et al., 2013) from a sample of 89 stars for a detection rate of 2%. There are only eight nearby M-dwarf discs published in the literature. Of these 3 have yet to be fully resolved: GJ 581 (Lestrade et al., 2012), GJ 433 and GJ 649 (Kennedy et al., 2018). The remaining 5 have been fully resolved: AU Mic (MacGregor et al., 2013; Daley et al., 2019), Fomalhaut C (Cronin-Coltsmann et al., 2021) and GSC 07396-00759 (Cronin-Coltsmann et al., 2021) with ALMA, and AU Mic (Kalas et al., 2004), TWA 7 (Choquet et al., 2016), TWA 25 (Choquet et al., 2016) and GSC 07396-00759 (Sissa et al., 2018; Adam et al., 2021) in scattered light, confirming that the infrared excesses indeed originate from circumstellar discs. These discs are distinguished from so-called Peter Pan discs around some young M-types (e.g. Silverberg et al. (2020)) as they do not show the evidence of ongoing accretion that Peter Pan discs do. In the case of Peter Pan discs, this accretion is indicative of a long-lived gas component that may be a primordial remnant of the original protoplanetary disc. The low rate of disc detections could be because the discs simply are not there. It is possible that the high incidence of planets around M dwarfs marks a high efficiency of planet formation, limiting leftover material that would constitute a debris disc. 
Alternatively photoevaporation (Adams et al., 2004) and stellar encounters (Lestrade et al., 2011) could strip material from M star discs that are forming in cluster environments. If discs are present, their underlying physical processes are different to discs around earlier type stars. The low host luminosity is not significant enough for radiation pressure to overcome gravity and instead stellar wind becomes a significant force. It is possible that strong stellar wind drag could remove grains quickly enough that the discs dynamics are different, affecting observability (Plavchan et al., 2009). Alternatively, a population of discs similar to that around early type stars could exist around M-dwarfs but remain difficult to detect with far-IR methods. A lower host luminosity would illuminate the same disc less well and heat it to a lower temperature, requiring more sensitive, longer wavelength observations than those employed by previous surveys. The Atacama Large Millimetre Array is the best suited contemporary telescope to fulfill these requirements. Luppe et al. (2020) investigate the capability of ALMA to detect a population of M-dwarf discs around the DEBRIS sample of M-stars, assuming that those discs have the same properties as the DEBRIS FGK-type systems. They conclude that for 15 minutes of observation at Band 7 there would be a 4-16% detection rate if all the discs were unresolved and a detection rate of 1-6% if some discs are large or close enough to be resolved. If the discs are resolved, the signal per beam would be reduced and/or some flux would be unrecoverable if the angular scale of the disc is larger than the maximum recoverable scale of the observation's interferometry. Debris disc detection rate and fractional luminosity is known to decrease with age as material is lost from the system due to the blow out of dust and the collisional depletion of the reservoir of parent planetesimals (Decin et al., 2003; Rieke et al., 2005; Trilling et al., 2008; Kral et al., 2013; Montesinos et al., 2016). For this reason, if a survey were to be optimised to recover as many disc detections as possible, a sample of young stars should be selected. The \(\beta\) Pictoris Moving Group (BPMG) is both young (\(-20\) Myr, Bell et al., 2015; Mirci-Roig et al., 2020) and nearby (\(\lesssim\)100 pc, Shkolnik et al., 2017a), making it a valuable stellar sample. Pawellek et al. (2021) analyse the F-type population of the BPMG with far-IR photometry and ALMA and find a 75% detection rate, a significantly higher rate than for the old field stars of the DEBRIS F star sample (Sibthorpe et al., 2018), further solidifying the BPMG as a good candidate sample to search for new discs. Indeed, already two of the published M-dwarf discs, AU Mic and GSC 07396-00759, are members of the BPMG. In this paper we present observations of the BPMG M-dwarf sample with ALMA. The observational details are presented in SS2. The results of the survey for individual stars of interest is presented in SS3 and new disc detections and the context of the detection rate is discussed in SS4. ## 2 Observations ### Observation Sample The observation sample of 39 stars was selected in 2017 for ALMA Cycle 5 based on these criteria: the star is identified as a known member from the literature of the BPMG, the star is identified as an M-type, and the star is within ALMA's observable declination range - i.e. between \(\sim\) -65\({}^{\circ}\) and 40\({}^{\circ}\). 
These sources were used for the sample selection: Binks & Jeffries (2016); Malo et al. (2013); Shkolnik et al. (2012); Schlieder et al. (2010); Lepine & Simon (2009); Zuckerman et al. (2001). The sample selection was not informed by the previous detection of any infrared excesses and thus the sample is unbiased in this regard. The sample that satisfies these criteria is now significantly larger, e.g. approximately doubling later in 2017 with new members confirmed by Shkolnik et al. (2017b). While observing more targets always provides better statistics, our sample is sufficient for our purposes here. AU Mic is a member of the scientific sample used in the analysis but was not chosen to be observed in the survey as it has already been significantly observed with ALMA. Had it been observed, it would definitely have been re-detected and the new re-observation would not significantly build upon previous observations. The sample was observed under project 2017.1.01583.S, with further details to follow in SS2.2. There were 33 individual ALMA observations, of which two contained both stars of a well studied binary within the field of view (HD 139084 AB and AT Mic AB). A further three contained two Gaia DR3 sources with similar parallax measurements of that reside within the field of view (2MASS J05241914-1601153, LP 476-207, GSC 08350-01924), i.e. these stars are newly resolved by Gaia to have binary companions. These bring the total confirmed BPMG member stars observed by our survey to 38. Two more observations contained a second Gaia DR3 source without a parallax but with an appropriate G magnitude and sub-arcsecond separation from the primary (2MASS J19102820-2319486, UCAC3 124-580676), i.e. these are potential but unconfirmed binary companions; these are not included as separate stars in our analysis and so do not add to our total. TYC 7443-1102-1 is listed alternatively as K9IVe (Pecaut & Mamajek, 2013) and M0.0V (Lepine & Simon, 2009), and so was included in this sample and treated as an M-dwarf, it was later noted to have an infrared excess in Herschel PACS (Tanner et al., 2020). One of the observed stars, HD 139084 A is a K0V, and so is not part of the scientific sample; this means that only 37 of the 38 stars observed in this survey are included in the scientific sample. Adding AU Mic brings the scientific sample to a final total of 38 confirmed M-dwarfs to be analysed. UCAC4 345-006842 (AKA Karm J05084-210) was intended to be observed but the ALMA observation was mispointed, so it was not observed. GJ 3305 (AKA StKM 1-497), GJ 182 (AKA V1005 Ori) and TWA 22 (AKA ASAS J101727-5354.4) were intended to be observed with ALMA, but the scheduling blocks were timed out at the end of the observing period. These stars are for these reasons not part of our scientific sample. Table 1 displays details of our sample of stars. Spectral types for this table were taken from SIMBAD (Wenger et al., 2000) unless otherwise noted with an asterisk, luminosities are taken from stellar SED models using available photometry and parallaxes unless otherwise noted with an asterisk. For asterisk noted properties we make estimates using the online 'Modern Mean Dwarf Stellar Color and Effective Temperature Sequence' table1 of Pecaut & Mamajek (2013). The spectral type of TYC 7443-1102-1 marked with two asterisks is derived from Lepine & Simon (2009). 
Footnote 1: [http://www.pas.rochester.edu/~emamjek/EEM_dwarf_UBVIJHK_colors_Teff.txt](http://www.pas.rochester.edu/~emamjek/EEM_dwarf_UBVIJHK_colors_Teff.txt) ### Observation Details All new observations were performed by ALMA Band 7 (0.87 mm, 345 GHz) under project 2017.1.01583.S. We anticipated of order ten detections (i.e. many non-detections), so did not aim to also obtain spectral information by observing with more than one band. The observations were spread across configurations C43-1, C43-2, and C43-3 depending on stellar distance to retain sensitivity to a similar physical scale and avoid resolving out disc emission. Observation details for individual sources can be found in Table 2. The spectral setup for all observations comprised four windows centred on 347.937, 335.937, 334.042 and 346.042 GHz with bandwidth 2 GHz and 128 channels for all but the last with width 1.875 GHz and 3840 channels. The last window was used to search for CO gas via the J=3-2 emission line, which has also been detected in another young debris disc around the M-dwarf TWA 7 (Matra et al., 2019). The raw data were calibrated with the provided ALMA pipeline script in casa version 5.1.2-4 (McMullin et al., 2007). To reduce the data volume the visibilities were averaged in 30 second intervals and down to two channels per spectral window for the continuum imaging. All images were generated with the clean algorithm in casa. \begin{table} \begin{tabular}{l l l l l l l} \hline Name & Alternative name & Type & Luminosity [\(L_{\odot}\)] & Distance [pc] & Notes \\ \hline 2MASS J05195327+0671258 & GSC2.3.800003170 & M6.55*A & 0.0957 & 96.1 & - \\ 2MASS J052411+1601153 A8B & PM 082543-1601 A8 & M4.5.0 & 0.0433 & 31.1 & GD8 Binary \\ 2MASS J091920202391496 & 15N987919208-231948.0 & M4 & 0.11 & 59.0 & Possible GRB Binary \\ 2MASS J20033379-2556521 & SCR 2003-23566 & M4.5 & 0.0305 & 43.5 & - \\ ASAS J16101-17454.4 & UCAC 43.61-070984 & M0.5 & 0.141 & 71.1 & - \\ Barta 161 12 & UCAC 414-001790 & M4.3V & 0.05 & 37.3 & Spectroscopic Binary \\ BD-30 397 B & V-40 418 & M0 & 0.078 & 40.9 & Companion to BD+30.397 A \\ CD-57 165 & GSC 05813-005527 & MWC & 0.174 & 26.9 & - \\ EDFC 211046195 & JMASS W0353020+224325 & MS5.5V & 0.00402 & 51.2 & - \\ GD 2006 A & \(\sim\) LDS 188 & M3.5Vc & 0.053 & 35.0 & Companion to GJ 2006B \\ GI 2006B & \(\sim\) LDS 188 & M3.5Vc & 0.0429 & 35.0 & Companion to GJ 2006A \\ GI 3076 & \(\sim\) LDS 188 & M3.5Vc & 0.0429 & 35.0 & Companion to GJ 2006A \\ GSC 07396-00759 & ASAS J181422-32462 & M1Vc & 0.135 & 71.4 & Companion to V4064 Sgr \\ GSC 08390142 AB & IRSJ 17219-2101545 & AB & M3Vc & 0.163 & 62.6 & GRG Binary \\ HD 130984 B & CDS 57.6024 & K0V & 0.98 & 39.3 & Companion to HD 139084 B, Spectroscopic Binary \\ HD 130984 B & CD-57 602B & M5Vc & 0.0203 & 39.3 & Companion to HD 139084 \\ HD 135555 C & V82A Aac & M3Vc & 0.044 & 30.3 & Companion to HD 155555 AB \\ L 361-122 & GJ 3832 & M3.5V & 0.015 & 28.6 & - \\ LP 3935-51 & HIP 11512 & MIV & 0.0641 & 27.2 & - \\ LP 476-207 AB & GJ 3322 AB & M3.5V & 0.07 & 33.2 & GD83 Binary/Spectroscopic Binary \\ MCC 124 & HIP 5016 & M0.7V & 0.132 & 23.4 & - \\ AT Mic & GJ 799 A & M4.5Vc & 0.035 & 9.9 & Companion to AT Mic \\ AT Mic & GJ 799 B & M4.5Vc & 0.031 & 9.8 & Companion to AT Mic, companion to AU Mic \\ RXJ02179-1225 & PM 302179+1225 & M0 & 0.0593 & 63.1 & - \\ Sanetello 20 & TYC 9073-7621 & MIV & 0.134 & 50.6 & - \\ TYC 2211-1309-1 & RXJ22007-2714 & M0.0V & 0.0841 & 36.6 & - \\ TYC 6872-1011-1 & IRSJ 1858034-295318 & MWC & 0.275 & 74.2 & Spectroscopic Binary 
\\ TYC 7443-1102-1 & PM J19560-3207 & M0.0V** & 0.154 & 51.3 & Companion to UCAC3 116-474938 \\
UCAC2 19527490 & 2MASS J18580464-2953320 & M3V* & 0.12 & - & Likely companion to TYC 6872-1011-1 \\
UCAC2 20312880 & RX J0613.2-2742 & M1.5 & 0.089 & 32.7 & Double star \\
UCAC3 116-474938 & 2MASS J19560294-3207186 & M4 & 0.11 & 51.3 & Companion to TYC 7443-1102-1, Double star \\
UCAC3 124-580676 & SCR J2010-2801 & M3.0V & 0.11 & 48.0 & Possible Gaia DR3 Binary/Spectroscopic Binary \\
UCAC3 176-23654 & RXJ0534-04221 & M3 & 0.066 & 34.4 & - \\
V* TX PsA & LDS 793 B & M5Ve & 0.0203 & 20.8 & Companion to V* WW PsA \\
V* WW PsA & LDS 793 A & M4Ve & 0.0462 & 20.8 & Companion to V* TX PsA \\
AU Mic & HD 197481 & M1Ve & 0.0962 & 9.7 & Not observed in this work, companion to AT Mic AB \\
\hline \end{tabular} \end{table}
Table 1: Stars observed in our sample. Spectral types are derived from SIMBAD unless marked with asterisks; luminosities are taken from stellar SED models using available photometry and parallaxes unless otherwise noted with an asterisk. For asterisk-noted properties we made estimates using the online temperature sequence table of Pecaut & Mamajek (2013). The spectral type of TYC 7443-1102-1, marked with two asterisks, is derived from Lépine & Simon (2009).
\begin{table} \begin{tabular}{l c c c c c c c} \hline Name & Integration time [minutes] & No. Antennae & Min-Max baseline [m] & MRS [arcsec] & Date & PWV [mm] & Calibrators \\ \hline \end{tabular} \end{table}
Figure 1: Naturally weighted ALMA 880\(\mu\)m images of our BPMG M-dwarf sample. For all observations except for BD+30 397 B and HD 139084 AB, the star is within 2 arcseconds of the centre of the image. The ellipses in the lower left corners show the restoring beams.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Name & RMS [\(\mu\)Jy beam\({}^{-1}\)] & Stellar Flux [\(\mu\)Jy] & Signal [\(\mu\)Jy beam\({}^{-1}\)] & Beam Semi-Major Axis [arcsec] & Disc radius [au] \\ \hline
2MASS J05195327+0617258 & 40 & 0.1 & -32 & 0.573 & - \\
2MASS J05241914-1601153 AB & 43 & 8 & 28 & 0.939 & - \\
2MASS J19102820-2319486 & 50 & 5 & -9 & 0.607 & - \\
2MASS J20333759-2556521 & 23 & 3 & 41 & 0.640 & - \\
2MASS/S116041-17544.4 & 47 & 3 & 0.6 & 0.626 & - \\
Barta 161 12 & 40 & 6 & -0.5 & 0.985 & - \\
BD+30 397 B & 85 & 6 & 10 & 0.853 & - \\
CD-57 1054 & 40 & 2 & 26 & 0.954 & - \\
EPIC 21106195 & 46 & 0.5 & -87 & 0.515 & - \\
**GJ 2006 A** & 33 & 6 & 390 & 0.958 & <34 \\
GJ 2006 B & 33 & 6 & 2 & 0.957 & - \\
GJ 3076 & 36 & 6 & 38 & 1.110 & - \\
**GSC 07396-00759** & 40 & 2 & 1840 & 0.683 & 70 \\
GSC 08350-01924 AB & 25 & 6 & -12 & 0.840 & - \\
HD 139084 A & 60 & 20 & 133 & 0.960 & - \\
HD 139084 B & 60 & 10 & -22 & 0.960 & - \\
HD 155555 C & 40 & 9 & 93 & 0.792 & - \\
L 836-122 & 45 & 3 & -60 & 0.938 & - \\
LP 353-51 & 57 & 8 & 19 & 0.734 & - \\
LP 476-207 A & 45 & 20 & 28 & 0.772 & - \\
LP 476-207 B & 45 & - & 42 & 0.772 & - \\
MCC 124 & 45 & 20 & 6 & 0.785 & - \\
**AT Mic A** & 27 & 70 & 319 & 0.994 & - \\
AT Mic B & 27 & 60 & 120 & 0.994 & - \\
RX J0217.9+1225 & 37 & 2 & -12 & 0.485 & - \\
Smethells 20 & 47 & 5 & 75 & 0.582 & - \\
TYC 2211-1309-1 & 37 & 5 & -4 & 0.568 & - \\
TYC 6872-1011-1 & 47 & 4 & -35 & 0.606 & - \\
TYC 7443-1102-1 & 47 & 5 & - & 0.670 & - \\
UCAC2 19527490 & 50 & 3 & -28 & 0.606 & - \\
UCAC2 20312880 & 33 & 10 & 39 & 1.042 & - \\
UCAC3 116-474938 & 40 & 6 & 80 & 0.671 & - \\
UCAC3 124-580676 & 47 & 7 & 7 & 0.679 & - \\
UCAC3 176-23654 & 40 & 7 & 25 & 0.519 & - \\
V* TX PsA & 30 & 8 & 36 & 0.908 & - \\
V* WW PsA & 35 & 20 & 78 & 0.908 & - \\
AU Mic & - & 110 & 13000 & - & 40 \\
\hline \hline \end{tabular} \end{table}
Table 3: Sample observational results for 880\(\mu\)m ALMA observations. RMS is the ALMA image root mean square noise as taken from a region surrounding the Gaia DR3 expected stellar location. Beam size is the major axis of the observation. Systems where the on-sky separation is less than the beam size are listed as one, with the mean flux of the two components. Sources in bold have significant excess detections. Parameters for GSC 07396-00759 are taken from Cronin-Coltsmann et al. (2022) and the radius measurement for AU Mic is taken from MacGregor et al. (2013); the expected stellar emission and an 880\(\mu\)m flux for AU Mic are estimated from a combined dust and stellar SED model.

### Initial image analysis

Figure 1 shows naturally weighted images of the observational sample generated with the clean algorithm in casa. The sample was also visually inspected with 1 and 2 arcsec \(uv\) tapers to search for extended emission. To extract photometry, point source models were fit to the visibilities using the casa _uvmodelfit_ task at each Gaia DR3 stellar location. We do not allow the offset parameters to vary in these fits to avoid fitting to nearby non-stellar point sources, except in the cases of detections and near detections as discussed in §3. Fluxes derived from the _uvmodelfit_ task are consistent with fluxes measured directly from the images. The results of these fits and the image parameters can be found in Table 3. Stellar fluxes are estimated by fitting model atmospheres to photometry as outlined in Yelverton et al.
(2019); this method uses synthetic photometry of PHOENIX (Husser et al., 2013) and blackbody disc models, and multinest(Feroz et al., 2009), to derive best-fit star and disc parameters. In this table Gaia DR3 confirmed binaries have been split into their individual components with flux measurements taken at the expected location of each component; significant detections are highlighted in bold; parameters for GSC 07396-00759 are taken from Cronin-Coltsmann et al. (2022); the radius measurement for AU Mic is taken from MacGregor et al. (2013); the expected stellar emission and an 880\(\mu\)m flux for AU Mic are estimated from a combined dust and stellar SED model. Serendipitous sources within 10 arcsec of the phase centre whose flux reached at least 5\(\sigma\) were identified in the primary beam-corrected clean images and are presented in Table 4. Sources are identified in ten of the fields. Two sources are present in the TYC 7443-1102-1 field, one of which is resolved to be 2 arcsec along one axis. The sources are not associated with any stars and so are likely to be background galaxies. The galaxy number count model of Popping et al. (2020) can be used to estimate the expected number of galaxies with a flux of at least 0.5 mJy beam\({}^{-1}\) to be present within a 10 arcsec radius of the phase centre of 33 observations. The expected number of background sources is \(12^{+4}_{-10}\), consistent with our detections. Significant flux at the stellar location is measured for GJ 2006 A, GSC 07396-00759, AT Mic A and AT Mic B, and TYC 7443-1102-1. GSC 07396-00759 shows a clearly resolved edge-on disc. The flux from TYC 7443-1102-1 cannot be differentiated from the background confusion close to the stellar location and so this source is considered significantly confused with no local flux measurement able to be taken. These sources are discussed in more detail in SS3. Where significant flux is measured at the stellar location we check the observations for signs of mm stellar flares, as these can be mistaken for debris discs (e.g. Anglada et al., 2017; MacGregor et al., 2018). The observations were split into their individual scans and re-imaged to check for variance of the flux along the time baseline of the observations. No evidence for flaring was found. The \({}^{12}\)CO J=3-2 transition line was also checked in these observations by producing clean continuum-subtracted images with the _uvcontsub_ algorithm in casa and searching for significant emission at the stellar location and around the expected stellar radial velocity. No CO emission was found in any observation. A stacked image was also made from the non-detections in which the star is expected to lay within 0.5 arcsec of the phase centre. With this criterion 2MASS J05241914-1601153 AB, BD+30 397 B, GJ 2006 B, HD 139084 B, LP 476-207 AB, UCAC2 19527490, UCAC2 20312880 and UCAC3 124-580676 are excluded. We also exclude TYC 7443-1102-1 due to its confusion. The stacked image is thus constituted of the remaining 21 observations and has an RMS of 1\(\sigma\) = 10 \(\mu\)Jy / beam. The mean expected stellar emission is 6 \(\mu\)Jy beam\({}^{-1}\). No significant flux is found at the centre of the stacked image with a measurement of 12 \(\mu\)Jy / beam, the 3\(\sigma\) upper limit on the mean flux for these non-detections is thus 30 \(\mu\)Jy / beam, and the 3\(\sigma\) upper limit on mean flux _excess_ above the stellar flux is 24 \(\mu\)Jy / beam which at a mean distance of 44 pc corresponds to a disc 25 times less bright than AU Mic. 
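As a quick cross-check of the AU Mic comparison above, the quoted numbers combine through simple inverse-square flux scaling. The short Python sketch below is illustrative only and is not part of the reduction pipeline; it uses the stacked-image RMS and mean stellar flux quoted above, and the AU Mic excess flux and distance from Tables 3 and 1.

```python
# Sketch: 3-sigma upper limit from the stacked non-detections, compared with an
# AU Mic-like disc scaled to the mean sample distance (illustrative only).
rms_stack = 10.0        # uJy/beam, RMS of the stacked image
mean_star_flux = 6.0    # uJy/beam, mean expected photospheric flux of the stacked stars
measured = 12.0         # uJy/beam, flux measured at the centre of the stack

upper_limit_total = 3 * rms_stack                     # 30 uJy/beam on the mean flux
upper_limit_excess = 3 * rms_stack - mean_star_flux   # 24 uJy/beam on the mean excess

# Scale the AU Mic 880 um excess (~13 mJy at 9.7 pc) to the mean sample distance
# (~44 pc) with inverse-square dimming.
au_mic_flux, au_mic_dist, mean_dist = 13000.0, 9.7, 44.0   # uJy, pc, pc
au_mic_at_44pc = au_mic_flux * (au_mic_dist / mean_dist) ** 2

print(f"measured stack flux: {measured:.0f} uJy/beam (< {upper_limit_total:.0f}, not significant)")
print(f"3-sigma limit on mean excess: {upper_limit_excess:.0f} uJy/beam")
print(f"AU Mic excess scaled to 44 pc: {au_mic_at_44pc:.0f} uJy")
print(f"=> limit is ~{au_mic_at_44pc / upper_limit_excess:.0f}x fainter than AU Mic")
```

The resulting factor comes out at roughly 25-26, matching the value quoted above to within the rounding of the input fluxes.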
## 3 Results ### Gaia DR3 parallaxes and binary implications The third data release of the Gaia satellite (Gaia Collaboration et al., 2022) has improved our astrometric knowledge of our candidate sample since both the proposal submission and observations. Some stars now have accurate parallaxes where there was none before, and other stars have been resolved as binaries with new measurements of their separation. Multiplicity can cause errors in astrometric solutions (Lindegren et al., 2018) and this is possibly the root cause for previous difficulty in finding accurate parallaxes. A measure for non-standard uncertainty in Gaia observations is the astrometric excess noise, astrometric_excess_noise (epsi), representing modelling errors and measuring the disagreement between observations of the source and its best fitting model expressed as an angle in units of milli-arcseconds2. The epsi in an ideal case should be zero, but for reference the median excess noise for sources with six-parameter solutions is 0.1693. A related parameter is the significance of the astrometric excess noise, astrometric_excess_noise_sig (sepsi), for which a value greater than two indicates that the epsi is significant, i.e. the observations of the star significantly differ from its best fitting model. The epsi, when guided by the sepsis, can be used to infer the presence of companions (e.g. Groenewegen, 2018; Kervella et al., 2019). Footnote 2: [https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dh_main_source_catalogue/ssec_dm_gaia_source.html](https://gea.esac.esa.int/archive/documentation/GDR3/Gaia_archive/chap_datamodel/sec_dh_main_source_catalogue/ssec_dm_gaia_source.html) Multiplicity can also affect the likelihood a system contains a detectable debris disc; enhanced collisional evolution from gravitational perturbations can cause the disc flux to decrease more rapidly, so regardless of whether a disc is completely destroyed or not, the disc becomes harder to detect. Empirically, we are always limited by the sensitivity of our observations, so refer to "detection" rather than "existence". Yelverton et al. (2019) find that disc detection rate is more than halved in comparison to single stars when binary separation is less than 25 au, that the disc detection rate is zero when the separation is between 25 and 135 au, and that larger separations do not affect disc detection rates. However, the systems studied in that paper were for the majority sun-like, and while a small number of M-type systems were included, the conclusion for sun-like stars might not extend to M-types. All binaries in the sample are now discussed below. #### 3.1.1 2mass J05241914-1601153 Ab 2MASS J05241914-1601153 (AKA PM J05243-1601, UCAC4 370-008199) has previously been noted as a double star (Messina et al., 2017; Miret-Roig et al., 2020) and did not have an accurate parallax prior to Gaia DR3. A has Gaia G magnitude of 12.496\(\pm\)0.004 and B has a magnitude of 12.778\(\pm\)0.004, so the stars are of a similar brightness and type. A has a parallax of 32.06\(\pm\)0.80 mas and B has a parallax of 32.27\(\pm\)0.14 mas placing the stars at 31.1 pc and consistent with co-planarity in the plane of the sky, this would equate their separation of 0.37 arcsec at the time of observation to 11.5 au. This separation would reduce the likelihood of there being a detectable disc; if a disc is present there is the possibility that it would be circumbinary, which would be resolved by our observations. 
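The physical separations quoted throughout this section follow from the small-angle relation between on-sky separation and parallax; the sketch below illustrates it for two of the systems discussed (input values are taken from the text, and the results are strictly projected, i.e. minimum, separations).

```python
# Sketch: projected separation in au from an on-sky separation and a parallax,
# using the small-angle approximation (1 arcsec at 1 pc corresponds to 1 au).
def projected_separation_au(sep_arcsec: float, parallax_mas: float) -> float:
    distance_pc = 1000.0 / parallax_mas   # parallax in mas -> distance in pc
    return sep_arcsec * distance_pc

# 2MASS J05241914-1601153 AB: 0.37 arcsec at ~32.2 mas (~31 pc)
print(projected_separation_au(0.37, 32.2))   # ~11.5 au
# BD+30 397 B to BD+30 397 A: 22.2 arcsec at ~24.4 mas (~41 pc)
print(projected_separation_au(22.2, 24.4))   # ~910 au
```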
#### 3.1.2 2mass J19102820-2319486 2MASS J19102820-2319486 (AKA 1SWASP J191028.18-231948.0, EPIC 215900519) did not have a parallax measurement prior to Gaia DR3, but now has a measured parallax of 17.0\(\pm\)0.2 mas, putting it at 59 pc. Messina et al. (2017) label it as a single star, however Gaia DR3 also revealed a second source at a 0.3 arcsec separation without a parallax or proper motion but with a G magnitude of 12.882\(\pm\)0.006 compared to 2MASS J19102820-2319486's magnitude of 12.528\(\pm\)0.004. The excess astrometric noise for both sources is moderate. The excess astrometric noise is 1.394 mas and the significance of astrometric noise value is 1390 for the source with parallax and the epsi is 2.198 mas and the sepsi is 1900 for the source without parallax. This could explain the lack of a previous Gaia fit for 2MASS J19102820-2319486 and the lack of a Gaia fit for the second source. Multiplicity can be a cause of astrometric noise, and so it is possible the two sources indeed constitute a binary, if approximately in the plane of the sky the separation would be 18 au. This separation would reduce the likelihood of there being a detectable debris disc around either star and any disc could be circumbinary if present. #### 3.1.3 Barta 161 12 Barta 161 12 (AKA UCAC4 414-001790, ASAS J013514-0712.9, 2MASS J01351393-0712517) has parallax 26.82 \(\pm\) 0.05 mas and distance 37.3 pc. It is listed as a double-lined spectroscopic binary by (Malo et al., 2014) and Gaia DR3 detects only one star. Assuming a resolution limit of 0.5 arcsec the binary separation is likely less than 19 au, which would reduce the likelihood of there being a detectable disc and any disc present would likely be circumbinary. #### 3.1.4 Bd+30 397 B BD+30 397 B (AKA 2MASS J02272924+3058246, GSC 02323-00566, AG Tri B) is a companion to the disc hosting star BD+30 397 A (AG Tri, Rebull et al., 2008). The pair's parallax (24.42 \(\pm\) 0.02 and 24.43 \(\pm\) 0.03 mas for A and B respectively, at 40.9 pc) is consistent with them being co-planar in the plane of the sky and their separation of 22.2 arcsec equates to 910 au. Their separation is unlikely to affect the likelihood of there being a detectable disc around either star. BD+30 397 B has a high noise in Table 3 as the observation was pointed near the centre of the binary, placing BD+30 397 B at the edge of the primary beam, raising the local noise. Despite this pointing, BD+30 397 A is outside the 12 arcsec FWHM of the primary beam, and as such is unobserved. #### 3.1.5 Gj 2006 AB Gj 2006 AB (AKA LDS 18A, 2MASS J00275023-3233060, UCAC3 115-1206) have parallax (28.55 \(\pm\) 0.04 and 28.59 \(\pm\) 0.04 mas, 35 pc) consistent with being approximately co-planar in the plane of the sky and their separation of 17.9 arcsec equates to 625 au. Their separation is unlikely to affect the likelihood of there being a detectable disc around either star. #### 3.1.6 Gsc 07396-00759 GSC 07396-00759 (AKA ASAS J181422-3246.2, CAB 25B, UCAC4 287-163100) has parallax 13.92 \(\pm\) 0.02 mas and distance 71.8 pc. As noted in Cronin-Coltsmann et al. (2022), it is a wide separation companion of the well-studied close-binary V4046 Sgr at a distance of 12,300 au (Torres et al., 2006; Kastner et al., 2011). V4046 Sgr possesses both a gas-rich circumbinary disc and evidence of ongoing accretion (e.g. Stempels & Gahm, 2004; Oberg et al., 2011; Rosenfeld et al., 2013; Rapson et al., 2015; Kastner et al., 2018; D'Orazi et al., 2019; Martinez-Brunner et al., 2022). 
The 12,300 au separation is unlikely to affect the likelihood of there being a detectable disc around either system. #### 3.1.7 Gsc 08350-01924 AB GSC 08350-01924 (AKA 1RXS J172919.1-501454, UCAC2 10274954) has been listed as a binary in previous works (Alonso-Floriano et al., 2015; Messina et al., 2017) and Zuniga-Fernandez et al. (2021) conclude it not to be a spectroscopic binary. Gaia DR3 has resolved the binary and identified parallaxes for each star for the first time. A has a parallax of 16.15\(\pm\)0.06 mas and B has a parallax of 15.95\(\pm\)0.078 mas putting the binary at \begin{table} \begin{tabular}{l c c c c} \hline Observation & RMS [\(\mu\)Jy beam\({}^{-1}\)] & Source flux [\(\mu\)Jy beam\({}^{-1}\)] & Source Ra [hr:min:sec] & Source Dec [\({}^{\circ}\).\({}^{\prime\prime}\) \({}^{\prime\prime}\)] \\ \hline 2MASS J20333759-2556521 & 40 & 600 & 17:29:20:474 & -50.14,51.117 \\ GSC 08350-01924 & 25 & 1600 & 0.33:36:964 & 25.57,03.591 \\ Barta 161 12 & 90 & 1600 & 1:35:14.759 & -7.12,52.529 \\ LP 353-51 & 110 & 800 & 02:23:26.601 & 22.43.54.846 \\ TYC 2211-1309-1 & 80 & 650 & 22:00:41.823 & 27.15,20.179 \\ TYC 7443-1102-1 & 47 & 2200* & 19:56:04.396 & -32.07,37.640 \\ TYC 7443-1102-1 & 47 & 440* & 19:56:04.474 & -32.07,38.475 \\ UCAC2 19527490 & 65 & 3000 & 18:58:05.016 & -29.53,33.824 \\ UCAC2 20312880 & 55 & 760 & 06:13:13.748 & -27.41,59.131 \\ UCAC3 116-474938 & 85 & 800 & 9:56:03.108 & -32.07,29.08 \\ V* TX PsA & 60 & 1300 & 22:44:59.826 & -33.15,32.550 \\ \hline \end{tabular} \end{table} Table 4: Background sources. RMS is local to the background source. Fluxes for the TYC 7443-1102-1 sources noted with an * are integrated fluxes with units \(\mu\)Jy. 62.3 pc (Bailer-Jones et al., 2021). The difference in parallax of the pair, 0.2\(\pm\)0.098 mas, is within two sigma of zero, so if the two are approximately co-planar in the plane of the sky, the binary separation would be 44 au. A has a Gaia G magnitude of 12.295\(\pm\)0.003 and B has a magnitude of 12.573\(\pm\)0.003, so the stars are of a similar brightness and type. If they are widely separated, their separation would be unlikely to affect the likelihood of there being a detectable disc around either star. If they are separated by 44 au, their separation would make it unlikely that the system hosts a debris disc. #### 3.1.8 Hd 139084 Ab HD 139084 AB (AKA CD-57 6042 AB, 2MASS J15385757-5742273 AB) have parallax measurements of 25.8\(\pm\)0.2 mas and 25.55\(\pm\)0.02 mas respectively and are separated by 10.3 arcsec on the sky. The stars therefore constitute a wide binary with a likely separation of at least 50,000 au. Their separation is unlikely to affect the likelihood of there being a detectable disc around HD 139084 B, although HD 139084 A is known to be a single lined spectroscopic binary (Nielsen et al., 2016) which would reduce its likelihood of hosting a detectable disc. HD 139084 AB have a higher noise in Table 3 as the observation was pointed at the centre of the binary, placing both stars at the edge of the primary beam, raising the local noise. #### 3.1.9 Hd 155555 C HD 155555 C (AKA V824 Ara C, UCAC3 47-295205, 2MASS J17173128-6657055) is companion to the short period binary HD 155555 AB with a separation on the sky of 34 arcsec; at a distance of 30.3 pc (parallaxes of \(32.95\pm 0.02\) and \(32.88\pm 0.03\) mas for AB and C respectively) this equates to a separation on the sky of 1000 au. Their separation is unlikely to affect the likelihood of there being a detectable disc around either component. 
#### 3.1.10 LP 476-207 AB

LP 476-207 (AKA HIP 23418, GJ 3322, 2MASS J05015881+095857) is a literature double-lined spectroscopic binary (Delfosse et al., 1999) with an orbital period of 11.9 days (Messina et al., 2017). Gaia DR3 resolves two stars; we will label LP 476-207 AB as these two separated components, making the spectroscopic binary LP 476-207 AaAb (or possibly BaBb). A has a parallax of 42.04\(\pm\)0.03 mas and B has a parallax of 42.10\(\pm\)0.09 mas, thus the two are consistent with being approximately co-planar in the plane of the sky. A has a G magnitude of 10.568\(\pm\)0.003 and B has a magnitude of 11.420\(\pm\)0.004, thus A is likely the primary and dominates the flux from the system. Their separation of 1.4 arcsec on the sky at 33.2 pc equates to 46.5 au. This separation would make it unlikely that the system hosts a debris disc.

#### 3.1.11 AT Mic AB

AT Mic (AKA GJ 799, HD 196982, HIP 102141, CD-32 16135, 2MASS J20415111-3226073) is a literature close binary system and is highly likely to be a distant companion to AU Mic (Adams et al., 1972; Caballero, 2009; Shaya & Olling, 2011; Messina et al., 2016) with an on-sky separation of 0.23 pc, which equates to 47,000 au on the sky. The AT Mic AB binary have Gaia G magnitudes of 9.576\(\pm\)0.003 and 9.605\(\pm\)0.003 respectively, so the stars are of a similar brightness and type. The system has been observed to show significant evidence of proper motion (Messina et al., 2016, and references therein) and Malkov et al. (2012) provide an orbital period of 209 yr with a semi-major axis of 3.18 arcsec, corresponding to 31 au, and an eccentricity of \(e=0.26\) for the binary. Gaia DR3 measures parallaxes for the AT Mic binary of 100.79\(\pm\)0.07 mas and 101.97\(\pm\)0.08 mas, which would be inconsistent with the two being approximately co-planar in the plane of the sky, equating to a separation of 23,300 au. However, the Gaia DR3 observations for AT Mic A have an excess astrometric noise of 0.509 mas and a significance of astrometric noise value of 330, and AT Mic B has values of 0.502 mas and 311 respectively. For comparison, their wide separation companion AU Mic has values of 0.098 mas and 6.1 respectively. The level of astrometric noise is significant and could mean that the uncertainty of the Gaia parallaxes is underestimated. Given the extensive historic observation of the system, the observed apparent orbital motion and the high excess astrometric noise on the Gaia parallaxes, it is likely that the Gaia parallaxes for this system are untrustworthy. Thus, we will continue with the understanding that the stars are co-planar and so are separated primarily by the 2 arcsec on the sky. Using Malkov et al. (2012)'s orbital parameters the semi-major axis of the binary is 31 au. The separation with AU Mic would be unlikely to affect the likelihood of either system hosting a detectable disc, but the AT Mic binary separation would make it unlikely that the system hosts a debris disc.

#### 3.1.12 TYC 6872-1011-1 and UCAC2 19527490

TYC 6872-1011-1 (AKA 1RXS J185803.4-295318, UCAC4 301-253452, 2MASS J18580415-2953045) is reported as a double-lined spectroscopic binary in Zuniga-Fernandez et al. (2021). The parallax is \(13.45\pm 0.04\) mas, giving a distance of 74.3 pc. The binary separation is likely less than 25 au as the radial velocity observations were only a few nights apart; this would reduce the likelihood that the system hosts a detectable disc and any disc could be circumbinary.
UCAC2 19527490 (AKA 2MASS J18580464-2953320) does not have a reported parallax in either the literature or Gaia DR3. Gaia DR3 measures a very large excess astrometric noise: the epsi is 59 mas and the sepsi is 240,000, which could be indicative of a close binary companion. A close companion would reduce the likelihood that the system hosts a detectable disc and any disc could be circumbinary. UCAC2 19527490 is only separated from TYC 6872-1011-1 by 28.3" on the sky, and the two share very similar proper motions and radial velocities, and so it has been posited before that the two are companions (Movic et al., 2013). This would place UCAC2 19527490 at 74.2 pc alongside TYC 6872-1011-1 and their separation would equate to 2100 au. This separation would not reduce the likelihood of either star hosting a detectable disc.

#### 3.1.13 TYC 7443-1102-1 and UCAC3 116-474938

TYC 7443-1102-1 (AKA 2MASS J19560438-3207376, PM J19560-3207, UC 4054A) and UCAC3 116-474938 (AKA 2MASS J19560294-3207186, BWL 53) are known to be companions. The two have parallaxes of 19.49\(\pm\)0.02 mas and 19.5\(\pm\)0.7 mas respectively, consistent with being approximately co-planar in the plane of the sky. At a distance of 51.3 pc their separation of 26.3 arcsec equates to 1350 au. This separation would not reduce the likelihood of either star hosting a detectable disc. UCAC3 116-474938 is also listed as a literature double star (Messina et al., 2017). This binarity is not resolved by Gaia DR3 but the star does have a high excess astrometric noise. The epsi is 5.59 mas and the sepsi is 4000, indicating the possible presence of a close companion. A close companion would reduce the likelihood of the system hosting a detectable disc.

#### 3.1.14 UCAC2 20312880

UCAC2 20312880 (AKA RX J0613.2-2742, 2MASS J06131330-2742054) is a literature double star (Messina et al., 2017) with parallax \(29.6\pm 0.2\) mas and distance 33.8 pc. This is not resolved by Gaia DR3 but the star has a high excess astrometric noise: the epsi is 2.5 mas and the sepsi is 960, indicating the possible presence of a close companion. A close companion would reduce the likelihood of the system hosting a detectable disc.

#### 3.1.15 UCAC3 124-580676

UCAC3 124-580676 (AKA SCR J2010-2801, 2MASS J2010002-2801410) is a literature spectroscopic binary and is listed as types M2.5+M3.5 in Messina et al. (2017). Gaia DR3 resolves two stars at a 1 arcsec separation with primary parallax \(21.5\pm 0.3\) mas (46.5 pc) but without a parallax for the secondary. The two stars have Gaia magnitudes of 12.449\(\pm\)0.005 and 12.207\(\pm\)0.004, indicating that the two are of similar type. The excess astrometric noise for the sources is very high: the epsi is 2.02 mas and the sepsi is 490 for the source with parallax, and the epsi is 14.2 mas and the sepsi is 7360 for the source without parallax, explaining the lack of fit for the secondary. If approximately in the plane of the sky the separation would be 48 au. This separation would make it unlikely that the system hosts a debris disc.

#### 3.1.16 TX PsA and WW PsA

TX PsA (AKA GJ 871.1 B, UCAC2 17853886, 2MASS J22450004-3315258) and WW PsA (AKA CD-33 16206, GSC 07501-00987, HIP 112312, 2MASS J22445794-3315015) are known companions. Their Gaia DR3 parallaxes are 48.00\(\pm\)0.03 mas and 47.92\(\pm\)0.03 mas respectively. Bailer-Jones et al. (2021) measure distances of 20.826\(\pm\)0.013 pc and 20.843\(\pm\)0.012 pc respectively, so the stars could be but are not necessarily approximately co-planar in the plane of the sky.
The stars are separated in the plane of the sky by 36 arcsec; at a distance of 20.8 pc this equates to 750 au. This separation would not reduce the likelihood of either star hosting a detectable disc. #### 3.1.17 Binaries summary As it is not an M-star, HD 139084 A is excluded from the below summary. Where the parallax measurements of each star in a binary are consistent with each other, we assume that the two stars have equal parallaxes in our analysis. There of course remains the possibility that there is a non-zero separation along the line of sight and so the following separations are strictly speaking minimum possible separations. One system is a Gaia DR3 resolved binary with both parallaxes and a separation of less than 25 au (2MASS J05241914-1601153 AB, this separation is less than the observation beam size). One system is a Gaia DR3 resolved binary with one parallax and a potential separation of less than 25 au (2MASS J19102820-2319486, this separation is less than the observation beam size). Two stars are spectroscopic binaries with no resolved companions in Gaia DR3 (Barta 161 12, TYC 6872-1011-1). Two stars are literature double stars unresolved in Gaia DR3 but with high excess astrometric noises (UCAC2 20312880, UCAC3 116-474938). One star is not previously listed as a multiple star but has very high excess astrometric noise (UCAC2 19527490). In total there are six (seven if 2MASS J05241914-1601153 AB is counted) systems with a binary separation less than 25 au; these are half as likely to possess detectable debris discs than single stars, assuming that the results of Yelverton et al. (2019) extend to M type stars. One star is a spectroscopic binary and has two stars resolved in Gaia DR3 with one parallax and a potential separation between 25 and 135 au (UCAC3 124-580676). One system is a spectroscopic binary and has two stars resolved in Gaia DR3 with both parallaxes and a separation between 25 and 135 au (LP 476-207 AB). One system is a binary and has two stars resolved in Gaia DR3 with both parallaxes (that likely have underestimated uncertainties), has literature orbital parameters and a separation between 25 and 135 au (AT Mic AB, this separation is greater than the observation beam size). In total there are three systems with a binary separation between 25 and 135 au that are very unlikely to possess detectable debris discs, assuming that the results of Yelverton et al. (2019) extend to M type stars. Four of the above stars are also companions to other stars with a separation greater than 135 au (UCAC2 19527490, UCAC3 116-474938, AT Mic AB) A further 9 stars are Gaia DR3 resolved companions to other stars with all parallaxes and a separation greater than 135 au (BD+30 397 B, GJ 2006 A, GJ 2006 B, GSC 07396-00759, HD 139084B, HD 155555 C, TYC 7443-1102-1, TXPsA, WW PsA). The multiplicity of these stars is unlikely to affect the likelihood of the presence of a detectable debris disc. The uncertainty in the parallax measurements of GSC 08350-01924 A and GSC 08350-01924 B allows the possibility that they have a binary separation between 25 and 135 au, but the separation could also be more than 135 au. The multiplicity of these stars may or may not affect the likelihood of the presence of a detectable debris disc. The on-sky separation of GSC 08350-01924 AB is less than the observation beam size. 
### Non-significant ALMA excesses

We now turn to the observations, starting with a few systems that do not have a significant excess but were close enough to warrant further investigation. The list of non-detections can be obtained from Table 2, i.e. the sources that are not marked in boldface.

#### 3.2.1 TYC 7443-1102-1

This star has an unresolved Herschel PACS excess as reported in Tanner et al. (2020). Two distinct sub-mm sources are clearly detected in the ALMA observation displayed in Figure 2, neither of which is centred at the Gaia DR2 proper-motion adjusted location of the star. The two sources are 1.4" and 0.9" distant from the stellar location and have integrated flux densities of 2.20\(\pm\)0.05 mJy and 0.44\(\pm\)0.05 mJy respectively. The brighter of the two sources is resolved along one axis. The ALMA absolute pointing accuracy for this observation is \(\sim\)30 mas and the error on the Gaia stellar location is sub-milliarcsecond, and so the separation of the sources from the expected stellar location is most likely accurate. The fluxes of these sources are not inconsistent with the flux expected from a debris disc with a radius equal to their separation from the star. However, if these sources constitute a debris disc, such a disc would be more asymmetric than any other observed disc, with no other known discs showing similar features. Therefore, we conclude that these mm-wave sources are most likely not associated with the star and constitute background galaxies. For a putative debris disc to be detected with Herschel PACS but not with ALMA, the spectral slope of the dust emission would need to have \(\beta\gtrsim 1\), where the dust emission is described by a modified blackbody \(F_{\nu}\propto B_{\nu}(\nu,T)\,(\lambda_{0}/\lambda)^{\beta}\) beyond a turnover wavelength \(\lambda_{0}\). This would be steeper than is seen for well-characterised cases (e.g. Gaspar et al., 2012; MacGregor et al., 2016). Larger surveys (that are less precise) find \(\beta\) values in the range of 0.5 - 1 (Holland et al., 2017; Sibthorpe et al., 2018). Thus a scenario where the PACS detection is of a circumstellar disc that is then not detected by ALMA is improbable. Therefore the Herschel excess most likely also originated from these contaminating sources and the conclusion is drawn that a circumstellar disc around TYC 7443-1102-1 is not detected. As the observation is significantly contaminated at the stellar location we remove the observation and star from the scientific sample going forward.

#### 3.2.2 HD 155555 C

The 93\(\pm\)40 \(\mu\)Jy beam\({}^{-1}\) flux at the stellar location of this observation, as displayed in Figure 3, is between 2\(\sigma\) and 3\(\sigma\), and so it warranted further analysis. We apply the _uvmodelfit_ task again, now allowing the offset parameters to vary, and find a flux of 116\(\pm\)40 \(\mu\)Jy beam\({}^{-1}\) at a separation of 0.21\(\pm\)0.07 arcsec, which could be consistent with the stellar location. The stellar flux is only expected to be 9 \(\mu\)Jy beam\({}^{-1}\), and so if the flux is real it would constitute an excess. As there are multiple 2\(\sigma\) peaks within 2 arcsec of the stellar location, combined with the offset of the flux, we rule the flux measurement to likely be the result of noise. Given 33 observations there is approximately a 10% chance that at least one observation will have a 3\(\sigma\) peak at the stellar location.
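This ~10% figure can be reproduced approximately by assuming Gaussian image noise. The sketch below is illustrative only; the exact value depends on how many effectively independent beams around the stellar position are counted per observation, which is not specified here, so a small range is shown.

```python
# Sketch: chance of at least one >=3-sigma noise peak at the stellar position
# among N independent observations, assuming Gaussian noise (illustrative only).
from scipy.stats import norm

n_obs = 33
p_one_beam = norm.sf(3.0)          # P(S >= +3 sigma) for a single beam, ~1.35e-3
for n_beams in (1, 2, 3):          # effective independent beams searched per source
    p_per_obs = 1 - (1 - p_one_beam) ** n_beams
    p_any = 1 - (1 - p_per_obs) ** n_obs
    print(n_beams, f"{p_any:.1%}")  # ~4%, ~9%, ~13%
```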
Given that HD 155555 C is the only source in our sample with a near-detection, we consider it likely that the excess flux in this observation is simply noise and the result of observing a moderately large number of systems. However, this star is still worth re-observing in order to discover or rule out the presence of an infrared excess with more significant certainty.

#### 3.2.3 AT Mic B

A flux of 120\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) is measured at the stellar location of this observation, as displayed in Figure 4, reaching a significance of 4\(\sigma\). We apply the _uvmodelfit_ task again, now allowing the offset parameters to vary, and find a flux of 125\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) at a separation of 0.09\(\pm\)0.06 arcsec, consistent with the expected Gaia DR3 stellar location. However, the expected stellar flux is 60 \(\mu\)Jy beam\({}^{-1}\). The star is therefore confidently detected, but after subtracting the expected stellar flux the remaining mm-wave excess of 65\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) does not reach 3\(\sigma\) for this observation. We therefore conclude that an excess is not significantly measured for this star.

Figure 2: Naturally weighted ALMA 880\(\mu\)m image of TYC 7443-1102-1. The stellar location is marked with a +. The ellipse in the lower left corner shows the restoring beam. Contours are -3\(\sigma\), -2\(\sigma\), 2\(\sigma\), 3\(\sigma\), 4\(\sigma\), 5\(\sigma\).

Figure 3: Naturally weighted ALMA 880\(\mu\)m image of HD 155555 C. The stellar location is marked with a +. The ellipse in the lower left corner shows the restoring beam. Contours are -3\(\sigma\), -2\(\sigma\), 2\(\sigma\), 3\(\sigma\), 4\(\sigma\), 5\(\sigma\).

### Significant ALMA excesses

#### 3.3.1 GSC 07396-00759

This observation clearly resolves a bright, edge-on debris disc, as displayed in Figure 5, with position angle, inclination and approximate radius consistent with the previous scattered light observations of this disc (Sissa et al., 2018; Adam et al., 2021). An in-depth analysis of the ALMA data for this disc is presented in Cronin-Coltsmann et al. (2022). The disc has an integrated mm flux of 1.84\(\pm\)0.22 mJy and a radius of 70.2\(\pm\)4.4 au; an example SED is displayed in Figure 6 and a fractional luminosity-temperature plot with a distribution of dust models is displayed in Figure 7. The fractional luminosity-temperature plot shows different fitted models of the disc's fractional luminosity and the temperature of its mm-dust grains, which is related to the radial distance of those grains from the star. These models must be compatible with the SED of the disc but do not take into account resolution effects or radial information derived from the image of the disc. Also displayed for comparison are models of other well characterised M-dwarf discs and the detection limits of several relevant mid-to-far-infrared instruments. The plot also shows the radius of the disc as observed by ALMA. With a lack of far-IR photometry it is difficult to constrain an SED and model temperature, but with a resolved radius of 70.2 au the mm dust grains would have a temperature of 20 K and so we can limit the likely models to those close to 20 K, i.e. close to the dashed red line in Figure 7. Limited to these models, the fractional luminosity likely ranges from \(\sim 1\times 10^{-4}\) to \(5\times 10^{-3}\). More details on the SED fitting procedure can be found in Yelverton et al. (2019).
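The mapping between disc radius and mm-grain temperature used here (70.2 au corresponding to roughly 20 K, and the radius upper limits quoted for GJ 2006 A and AT Mic A in the following subsections) can be illustrated with the standard blackbody equilibrium relation \(T\simeq 278\,\mathrm{K}\,(L_{*}/L_{\odot})^{1/4}(r/\mathrm{au})^{-1/2}\). The stellar luminosities in the sketch below are assumed round values chosen for illustration only, not the values used in the SED models, so the temperatures are approximate.

```python
# Sketch: blackbody equilibrium temperature of dust at radius r around a star of
# luminosity L (solar units). The luminosities below are assumed, illustrative values.
def t_blackbody(lum_lsun: float, r_au: float) -> float:
    return 278.3 * lum_lsun ** 0.25 / r_au ** 0.5

print(t_blackbody(0.13, 70.2))    # GSC 07396-00759, resolved radius 70.2 au  -> ~20 K
print(t_blackbody(0.075, 34.0))   # GJ 2006 A, radius upper limit 34 au       -> ~25 K
print(t_blackbody(0.043, 10.0))   # AT Mic A, radius upper limit 10 au        -> ~40 K
```

Because temperature falls with radius, a radius upper limit translates into a lower limit on the grain temperature, which is how the limits are used above and below.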
#### 3.3.2 GJ 2006 A

A flux of 390\(\pm\)33 \(\mu\)Jy beam\({}^{-1}\) is measured at the stellar location of this observation, as displayed in Figure 5, reaching a significance of 11\(\sigma\). We apply the _uvmodelfit_ task again, now allowing the offset parameters to vary, and find a flux of 391\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) at a separation of 0.03\(\pm\)0.02 arcsec, consistent with the expected Gaia DR3 stellar location. Subtracting the expected stellar flux of 6 \(\mu\)Jy beam\({}^{-1}\) from the measured flux leaves a mm excess of 385\(\pm\)33 \(\mu\)Jy beam\({}^{-1}\), remaining at 11\(\sigma\). Having ruled out stellar flaring, this mm excess likely constitutes an unresolved debris disc. The beam size of the observation sets an upper limit on the radius of the disc: a beam semi-major axis of 0.96 arcsec sets a radius upper limit of 34 au. An example SED is presented in Figure 6 and a fractional luminosity-temperature plot with a distribution of dust models is displayed in Figure 7. The fractional luminosity-temperature plot shows the upper limit on the radius of the disc as observed by ALMA. With a lack of far-IR photometry it is difficult to constrain an SED and model temperature, but with an upper limit of 34 au on the disc radius we can place a lower limit on the mm grain temperature of 25 K, i.e. to the right of the dashed red line in Figure 7. Limited to these models the fractional luminosity likely ranges from \(\sim 2\times 10^{-5}\) to \(1\times 10^{-3}\).

Figure 4: Naturally weighted ALMA 880\(\mu\)m image of AT Mic AB. The stellar locations are marked with a + and an A/B. The ellipse in the lower left corner shows the restoring beam. Contours are -3\(\sigma\), -2\(\sigma\), 2\(\sigma\), 3\(\sigma\), 4\(\sigma\), 5\(\sigma\).

Figure 5: Naturally weighted ALMA 880\(\mu\)m images of GSC 07396-00759, GJ 2006 A and AT Mic AB. The stellar locations are marked with a +. The ellipses in the lower left corners show the restoring beams. Contours are -3\(\sigma\), -2\(\sigma\), 2\(\sigma\), 3\(\sigma\), 4\(\sigma\), 5\(\sigma\).

#### 3.3.3 AT Mic A

A flux of 319\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) is measured at the stellar location of this observation, as displayed in Figure 5, reaching a significance of 11\(\sigma\). We apply the _uvmodelfit_ task again, now allowing the offset parameters to vary, and find a flux of 335\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\) at a separation of 0.13\(\pm\)0.03 arcsec. Subtracting the expected stellar flux of 70 \(\mu\)Jy beam\({}^{-1}\) from the measured flux leaves a mm excess of 265\(\pm\)27 \(\mu\)Jy beam\({}^{-1}\), reaching 8\(\sigma\). We consider the apparent \(\sim\)0.13\(\pm\)0.03 arcsec separation, approximately one eighth of the beam size, between the expected stellar location of AT Mic A and the mm source. The uncertainty of the _uvmodelfit_ is not consistent with the stellar location; however, while Gaia positional astrometric uncertainties are reported as sub-milliarcsecond, the ALMA astrometric precision for this observation (calculated per §10.5.2 of the ALMA Cycle 6 Technical Handbook4) is 0.065 arcsec. Considering also the 0.09\(\pm\)0.06 arcsec offset for AT Mic B's flux, which is in a similar direction, it is likely that the offset for both stars is the result of either uncertain ALMA pointing or possibly the effect of orbital motion. Having also ruled out stellar flaring, we conclude that this excess flux is evidence of an unresolved debris disc around AT Mic A.
Footnote 4: [https://almascience.naro.edu/documents-and-tools/cycle6/alma-technical-handbook](https://almascience.naro.edu/documents-and-tools/cycle6/alma-technical-handbook) The beam size of the observation sets an upper limit on the radius of the disc: a beam semi-major axis of 1 arcsec sets a radius upper limit of 10 au. The semi-major axis of Malkov et al. (2012) of 31 au would make this disc the first binary system to have a detected debris disc where the binary separation is between 25 and 135 au, however it is uncertain if Yelverton et al. (2019)'s conclusions extend to M dwarfs and if not, this may not be unusual. An example SED is presented in Figure 6 and a fractional luminosity-temperature plot with a distribution of dust models is displayed in Figure 7. The fractional luminosity-temperature plot shows the upper limit on the radius of the disc as observed by ALMA. With a lack of far-IR photometry it is difficult to constrain an SED and model temperature, but with an upper limit of 10 au on the disc radius we can place a lower limit on the mm grain temperature of 40 K, i.e. to the right of the dashed red line in Figure 7. Limited to these models the fractional luminosity likely ranges from \(\sim\)5\(\times\)10\({}^{-6}\)-5 \(\times\) 10\({}^{-5}\). ## 4 Discussion ### Survey sensitivity and detection fraction To review our BPMG M-dwarf sample, excluding TYC 7443-1102-1 and including AU Mic, we have: 33 observations containing 34 well resolved and well separated literature M dwarfs; an additional three Gaia DR3 M dwarfs with parallaxes (2MASS J05241914-1601153B, LP476-207B, GSC 08350-01924B), although one of these three stars is close enough to the primary that a disc would likely be circumbinary (2MASS J05241914-1601153B); two of the total sample stars are also spectroscopic binaries without resolved companions (Barta 161 12, TYC 6872-1011-1); and there are an additional 2 Gaia DR3 M dwarf candidates without parallaxes (potential companions to 2MASS J19102820-2319486, UCAC 124-580676). We treat binaries where dust is likely circumbinary as one system for the sake of the sample, and we do not include stars without Gaia DR3 parallaxes as we cannot verify that they are local M-dwarfs and not more distant brighter stars. With these constraints our scientific sample is 36 M-dwarf hosts. Of these systems we have four significant detections, GSC 07396-00759, GJ 2006 A, AT Mic A and AU Mic. This makes our detection rate 4/36 or 11.1%. We derive an uncertainty on this using the uncertainty in small number binomial statistics method set out in the appendix of Burgasser et al. (2003), for a result with uncertainties of 11.1\({}^{+7.4}_{-3.3}\)%. We can also calculate a completeness adjusted detection rate, adjusting for the survey's differing sensitivity for different observations. This is calculated by measuring the completeness for each of our detections, i.e. if that disc flux were present for each observation, what fraction of the observations would have significantly detected it? This is exemplified in Figure 8, in which the shading indicates the local completeness. In the dark bottom of the plot no observation would have been able to detect a disc, and in the white top all observations would have been able to detect a disc. We have plotted our four detections with 1\(\sigma\) error bars from the fractional luminosity-temperature distributions seen in Figure 7, after constraining them with our disc radius information. 
For GSC 07396-00759 only the models with a disc radius within 4.3 au of 70.2 au are considered, in accordance with the radius fitting of Cronin-Coltsman et al. (2022); only the models with a disc radius smaller than 34 au and 10 au are considered for GJ 2006 A and AT Mic A respectively. The completeness fraction for our four sources are: GSC 07396-00759: 36/36, i.e. all our observations could have detected a GSC 07396-00759-like disc if one were present; GJ 2006 A: 33/36; and AU Mic: 33/36; AT Mic A: 7/36, i.e. only seven of our observations were sensitive enough to have detected an AT Mic A-like disc. Dividing through by these completion fractions and summing results in our completeness adjusted detection fraction: 8.3/36 or 23.1%. With the same method of uncertainties applied we get: 23.1\({}^{+8.3}_{-5.3}\)%. Given that much of the weight of this completeness adjusted result derives from AT Mic A alone, an effect that is exacerbated in the small number regime, and as the uncertainties in the disc parameters are not taken into account, the uncertainties on the completeness adjusted detection rate are likely underestimated. To investigate these effects we generated one million sets of four synthetic debris disc detections; we chose sets of four synthetic detections as there were four real detections within our sample. Within each set each disc had a radius selected randomly from between 10 and 100 au with linearly spaced probability and a fractional luminosity selected randomly from between 10\({}^{-3}\) and 10\({}^{-7}\) with logarithmically spaced probability. The host star luminosity was then selected randomly from the luminosities of the stars in our sample without replacement. The completeness adjusted detection rate was calculated for each set and over the one million sets a distribution of synthetic completeness adjusted fractions was formed. The median of this distribution with its distance to the 16th and 84th percentiles was 29.9\({}^{+12.3}_{-8.9}\)%. While the synthetic rate is not significantly larger than the observed fraction, its greater uncertainty does imply that the uncertainties on the observed completeness adjusted detection rate are indeed likely underestimated. This process has made large assumptions about the underlying M-dwarf disc population, however there are not yet enough well-observed M-dwarf debris discs to build a more informed model population. The completeness adjusted detection rate implies that there could be another four AT Mic A-like discs hiding amidst the rest of the sample but that the observations were not sensitive enough to detect them. ### Detection rate in context To begin with, we compare our 11.1\({}^{+7.4}_{-3.3}\) % detection rate and 23.1\({}^{+8.3}_{-5.3}\) % completeness adjusted rate to the DEBRIS M sample. The DEBRIS survey detected just 2/89 (2.2\({}^{+3.4}_{-2.0}\)) M-dwarf discs; immediately our detection rate is significantly higher. However, we cannot conclude that this is due to ALMA's capability to detect M-dwarf discs over Herschel's, as Pawellek et al. (2021) measure a 9/12 (75%) detection rate for F star discs in the BPMG, compared to the 22/92 (23.9\({}^{+5.3}_{-4.7}\)) rate for F stars of the DEBRIS survey presented in Sibutova et al. (2018). If whatever was the root cause of Pawellek et al. (2021)'s high detection rate for BPMG F stars holds for BPMG M stars, be it a matter of youth, formation environment or some other factor, it could raise the base detection rate. 
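The two headline rates in this subsection can be reproduced from the numbers quoted above. The sketch below uses the per-source completeness fractions listed in the previous paragraph, and a simple binomial-likelihood interval as a stand-in for the Burgasser et al. (2003) prescription (whose exact interval construction differs in detail), so the recovered uncertainties are only approximate.

```python
# Sketch: raw and completeness-adjusted detection rates with an approximate
# 68% binomial-likelihood interval (simplified stand-in for Burgasser et al. 2003).
import numpy as np
from scipy.stats import binom

n_det, n_tot = 4, 36
p = np.linspace(1e-4, 1.0, 10000)
like = binom.pmf(n_det, n_tot, p)          # binomial likelihood over a grid of rates
cdf = np.cumsum(like) / like.sum()
lo, hi = p[np.searchsorted(cdf, 0.16)], p[np.searchsorted(cdf, 0.84)]
print(f"raw rate: {n_det/n_tot:.1%} (68% interval ~{lo:.1%}-{hi:.1%})")  # ~8%-19%

# Completeness adjustment: weight each detection by the inverse of the fraction of
# observations that could have detected a disc like it.
completeness = {"GSC 07396-00759": 36/36, "GJ 2006 A": 33/36,
                "AU Mic": 33/36, "AT Mic A": 7/36}
adjusted = sum(1.0 / c for c in completeness.values())
print(f"completeness-adjusted: {adjusted:.1f}/36 = {adjusted/n_tot:.1%}")  # 8.3/36 ~ 23%
```

The completeness-adjusted sum reproduces the 8.3/36, or 23.1%, quoted above; as noted, the dominant weight comes from the AT Mic A-like disc, to which only seven of the 36 observations were sensitive.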
In a simple calculation, if the BPMG has an approximately three times higher detection base rate, the DEBRIS M-dwarf rate adjusted to the BPMG M-dwarf sample would only be 6%, still nearly half our non-adjusted rate, although within uncertainty due to the small number statistics. Comparing also to the 1/900 detection rate of Rhee et al. (2007)'s IRAS search for M-dwarf discs and Gautier et al. (2007)'s 0/62 Spitzer detection rate, we do conclude that ALMA has enabled us to probe M-dwarf discs in a way that previous telescopes were not able to due to their wavelength and sensitivity limitations. Comparing our M-dwarf BPMG sample to Pawellek et al. (2021)'s F-type BPMG sample, our detection rate is seven times lower than the F-type rate. However, the F-type sample are all within 25 pc, unlike our M-type sample that ranges up to 100 pc. To account for this we should compare our completeness-corrected rate, but this is still three times lower. F-types have been previously measured to possess greater detections than G and K types, but only by a factor of \(\sim\)1.7 as measured by Sibuthorpe et al. (2018) in the DEBRIS FGK sample. It is possible that the higher rate arises because brighter host stars illuminate the discs more, allowing them to be more easily detected. Pawellek et al. (2021)'s sample ranges from FOV to F9V (5.71 \(L_{\odot}\) to 1.69 \(L_{\odot}\)) while our M-type sample ranges from M0V - M8.5V (0.275 \(L_{\odot}\) to 0.004 \(L_{\odot}\)). M-dwarf samples span a large luminosity range and their luminosities can be several orders of magnitude lower than FGK type star luminosities. It is possible that the F-type BPMG sample and the M-type BPMG sample host similar discs but the host luminosities affect observability too significantly. That is, while ALMA provides an increase in sensitivity over previous far-IR observations for discs around M-type stars, it may still be that far-IR observations of earlier type stars yield a higher detection rate than ALMA observations of later type stars. It is also possible that whatever mechanism boosts the detectability of BPMG F-type discs does not apply to late type stars; this scenario would mean we can more directly compare our results to age-spread field star surveys like DEBRIS. Compared to the Herschel DEBRIS G and K samples' detection rates of 14.3\({}^{+4.7}_{-3.8}\) % and 13.0\({}^{+4.5}_{-3.6}\) % respectively and completeness adjusted rates of 24.6\({}^{+5.3}_{-4.9}\) % and 22.5\({}^{+5.6}_{-4.2}\) %, respectively, our 11.1\({}^{+7.4}_{-3.3}\) % detection rate and 23.1\({}^{+8.3}_{-5.3}\) % completeness adjusted rate are consistent, if not following the slight trend of decreasing detection rate with type. This similarity suggests that the difference between our sample and the BPMG F-types is more related to an unusual property of those F-type stars than a large difference in ALMA versus far-IR sensitivity as a function of spectral type. We now compare to the Luppe et al. (2020) predictions for an ALMA survey of DEBRIS-like M-dwarf discs. Our sample has been observed for approximately 15 minutes per star with ALMA Band 7, and the observations were designed to reduce the likelihood that discs would be resolved. It is unlikely that any discs would be larger than the maximum recoverable scales of our observations, but as evidenced by GSC 07396-00759 discs could still have been resolved, reducing the flux per beam. Without correcting for resolution Luppe et al. 
(2020) predict 15 minutes of observation at Band 7 of the Herschel DEBRIS sample of M-dwarfs scaled as DEBRIS-like discs to attain a detection rate of 4.3\(\pm\)0.9% to 15.8\(\pm\)0.5%, entirely consistent with our observations. If the DEBRIS sample and the BPMG stellar samples are broadly similar, this would imply that M-dwarf discs are overall similar to earlier type stars' discs in terms of radius, total surface area, temperature and fractional luminosity, when scaled by stellar mass and luminosity. The DEBRIS sample is selected from the closest stars, but over a range of ages. Pawellek et al. (2021) has shown based on their high detection rate for F type discs that the BPMG sample could be significantly different to the DEBRIS sample. Ultimately, to investigate whether M-dwarf discs differ from earlier type discs one would need to use the scaling relationships of Luppe et al. (2020) and apply their process to the known FGK-type BPMG discs to produce a theoretical FGK-like M-dwarf sample to compare our sample to. However, the small number statistics would likely inhibit differentiation of Luppe et al. (2020)'s different scaling relationships. Ultimately we conclude that our ALMA Band 7 detection rate is evidence that M-dwarf discs are not significantly less common than earlier type discs, but that the telescopes employed in previous surveys could not efficiently observe the low temperature and fluxes of M-dwarf discs due to their low host luminosities. ### Radii in context In Figure 9 we plot the mm-wave radii of all mm resolved debris discs against the host luminosity, as first presented in Matra et al. (2018); added to the original sample are the stars presented in Sepulveda et al. (2019), Fomalhaut C (Cronin-Coltsmann et al., 2021) and CPD-72 2713 (Moor et al., 2020). We plot the resolved radius of the GSC 07396-00759 disc and upper limits for GJ 2006 A and AT Mic A. We can see that GSC 07396-00759's radius is consistent with the trend of the earlier type sample, if the disc of GJ 2006 A is close to the upper limit it would also be consistent. Although there is a large scatter, the upper limit on the radius of AT Mic A's disc is very small. However, we note that this is specifically a plot of resolved radii and that many discs of radii less than ten au have been inferred from SEDs, and they could not be resolved due to instrumental constraints, as this disc is not resolved due to instrument constraints. The AT Mic A disc would still be small by mm-wave detection standards, however the sample of discs at this low luminosity is small and it remains unknown whether this radius limit would be unusual for its host luminosity and mass. As the AT Mic binary are only separated by 30 au, their orbits would prevent circumstellar discs larger than approximately 10 au from surviving. ## 5 Conclusion The Beta Pictoris Moving Group provides an excellent candidate sample of M-dwarfs to observe with ALMA to uncover new M-dwarf debris discs and resolve the question as to whether M-dwarf discs are rare or just difficult to detect. In this paper we have presented new ALMA Band 7 observations of 33 M dwarf systems comprising at least 37 M-dwarf stars. We identify one resolved disc, GSC 07296-00759 with an integrated flux of 1.84 mJy, and identify two unresolved mm-wave excess detections around GJ 2006 A with a flux of 385 \(\mu\)Jy beam\({}^{-1}\) and AT Mic A with a flux of 265 \(\mu\)Jy beam\({}^{-1}\). 
We confirm that none of these stars show evidence of stellar flaring and none of the discs show evidence of \({}^{12}\)CO J=3-2 emission. We explore the fractional luminosity-temperature parameter space for these discs and present fractional luminosity ranges. We note two of our observations come close to our 3\(\sigma\) criterion for detection. The flux at the stellar location of HD 155555 C could be noise or a dim excess, the star may be worth considering for future re-observation. AT Mic B has a 4\(\sigma\) flux at the stellar location, but only a 2\(\sigma\) excess above the expected stellar flux and so cannot be confirmed as a significant excess detection. This small excess, in addition to its proximity at 9.8 pc and its association with AT Mic A and AU Mic, makes this star worth re-observing in the future. If future observations of AT Mic A are made, AT Mic B will naturally be observed due to the small binary separation, and so it may be likely that this star's disc hosting candidacy will be determined in the future. We calculate a detection rate of 4/36, 11.1\({}^{+7.4}_{-3.3}\) %, for our M-dwarf sample including AU Mic. We also present a completeness fractional luminosity-temperature plot for our observations and calculate a completeness adjusted detection rate of 23.1\({}^{+8.3}_{-5.5}\) %, but we note that these errors are very likely to be underestimated. We place our detection rate in context and conclude that it is consistent with the Herschel DEBRIS GK detection rate and the ALMA survey predictions of Luppe et al. (2020). We therefore conclude that M-dwarf debris discs are not significantly less common than earlier type discs but instead require longer wavelength and more sensitive observations to account for the low host luminosity. We examine the disc radius upper limits of our new detections and conclude that GJ 2006 A is likely consistent with the wider luminosity-radius sample and trend. While the upper limit on the disc of AT Mic A is particularly small, it resides in too sparse a parameter space to be fully contextualised. We examine the consequences of new Gaia DR3 astrometric information for the multiplicity of our sample. Due to their binarity we estimate that three of our systems are very unlikely to possess detectable discs due to their separation and that six to seven of our systems have a reduced likelihood of possessing detectable discs assuming that the results of Yelverton et al. (2019) extend to M-dwarfs. Another 13 of our stars have binary companions that should not affect disc detection likelihood. We stack 21 of our non-detection observations with the stars within 0.5 arcsec of the observation phase centre and calculate a 3\(\sigma\) upper limit on the mean mm-wave excess of 24 \(\mu\)Jy beam\({}^{-1}\) for those stars. Finally, we identify 11 background sources, likely sub-mm galaxies, of which one is resolved. The occurrence of background sources is consistent with the predictions of galaxy number count models (Popping et al., 2020). We identify the observation of TYC 7443-1102-1 as severely contaminated by two of these background galaxies. ## Acknowledgements We thank the referee for a careful report and valuable comments. PFCC is supported by the University of Warwick. GMK and SM are supported by the Royal Society as a Royal Society University Research Fellows. LM acknowledges funding from the Irish Research Council under grant IRCLA/2022/3788. SM is supported by a Junior Research Fellowship from Jesus College, University of Cambridge. 
For the purpose of open access, the author has applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising from this submission. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2017.1.01583.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. ## Data Availability The data underlying this article are available at [http://almascience.nrao.edu/aq/](http://almascience.nrao.edu/aq/) and can be accessed with ALMA project ID: 2017.1.01583.S.
2301.03376
Occupant-Oriented Demand Response with Multi-Zone Thermal Building Control
In future energy systems with high shares of renewable energy sources, the electricity demand of buildings has to react to the fluctuating electricity generation in view of stability. As buildings consume one-third of global energy and almost half of this energy accounts for Heating, Ventilation, and Air Conditioning (HVAC) systems, HVAC are suitable for shifting their electricity consumption in time. To this end, intelligent control strategies are necessary as the conventional control of HVAC is not optimized for the actual demand of occupants and the current situation in the electricity grid. In this paper, we present the novel multi-zone controller Price Storage Control (PSC) that not only considers room-individual Occupants' Thermal Satisfaction (OTS), but also the available energy storage, and energy prices. The main feature of PSC is that it does not need a building model or forecasts of future demands to derive the control actions for multiple rooms in a building. For comparison, we use an ideal, error-free Model Predictive Control (MPC), a simplified variant without storage consideration (PC), and a conventional hysteresis-based two-point control. We evaluate the four controllers in a multi-zone environment for heating a building in winter and consider two different scenarios that differ in how much the permitted temperatures vary. In addition, we compare the impact of model parameters with high and low thermal capacitance. The results show that PSC strongly outperforms the conventional control approach in both scenarios and for both parameters. For high capacitance, it leads to 22 % costs reduction while the ideal MPC achieves cost reductions of more than 39 %. Considering that PSC does not need any building model or forecast, as opposed to MPC, the results support the suitability of our developed control strategy for controlling HVAC systems in future energy systems.
Moritz Frahm, Thomas Dengiz, Philipp Zwickel, Heiko Maaß, Jörg Matthes, Veit Hagenmeyer
2023-01-09T14:28:06Z
http://arxiv.org/abs/2301.03376v3
# Occupant-Oriented Demand Response ###### Abstract In future energy systems with high shares of renewable energy sources, the electricity demand of buildings has to react to the fluctuating electricity generation in view of stability. As buildings consume one-third of global energy and almost half of this energy accounts for Heating, Ventilation, and Air Conditioning (HVAC) systems, HVAC are suitable for shifting their electricity consumption in time. To this end, intelligent control strategies are necessary as the conventional control of HVAC is not optimized for the actual demand of occupants and the current situation in the electricity grid. In this paper, we present the novel multi-zone controller Price Storage Control (PSC) that not only considers room-individual Occupants' Thermal Satisfaction (OTS), but also the available energy storage, and energy prices. The main feature of PSC is that it does not need a building model or forecasts of future demands to derive the control actions for multiple rooms in a building. For comparison, we use an ideal, error-free Model Predictive Control (MPC) and a conventional hysteresis-based two-point control as upper and lower benchmarks, respectively. We evaluate the three controllers in a multi-zone environment for cooling a building in summer and consider two different scenarios that differ in how much the permitted temperatures vary. The results show that PSC strongly outperforms the conventional control approach in both scenarios with regard to the electricity costs and OTS. It leads to 50 % costs reduction and 15 % comfort improvements while the ideal MPC achieves costs reductions of 58 % and comfort improvements of 29 %. Considering that PSC does not need any building model or forecast, as opposed to MPC, the results support the suitability of our developed control strategy for controlling HVAC systems in future energy systems. keywords: multi-zone, thermal building model, RC model, model predictive control, price storage control, rule-based control, occupant behavior, demand response, smart grid + Footnote †: journal: Applied Energy ## 1 Introduction Buildings consume one-third of global final energy [1]. Almost half of this energy is used by Heating, Ventilation, and Air Conditioning (HVAC) systems to heat or cool buildings [2]. Especially the cooling demand is expected to increase significantly in many parts of the world, as the climate warms on average [3]. In buildings, the energy consumption results from Occupant Behavior (OB) and Occupants' Thermal Satisfaction (OTS) as they interact with the building's energy systems and require comfortable thermal conditions [4]. The energy demand of buildings can be covered with renewable energies in order to reduce greenhouse gas emissions [5]. Flexible electrical loads are pivotal for future energy systems in view of stability to cope with the increasing share of intermittent renewable energy sources like solar and wind energy. For exploiting flexible electric loads in buildings, the HVAC operation can be integrated into Demand Response (DR) programs. DR refers to the change of electricity demand in response to internal or external factors like the price of electricity [6]. In the building sector, electrical HVAC systems, like heat pumps or air conditioners, are suitable for DR. They can exploit existing infrastructure like the building mass or hot water tanks to shift their electricity demand in time [7]. 
Thus, they can significantly contribute to better utilization of renewable energy sources and simultaneously help to stabilize the electricity grid. In order to use HVAC systems for DR, optimized control strategies are necessary. In addition to DR, designing the HVAC operation tailored to the actual occupants' needs could significantly reduce energy use. For example in office spaces, often, not all rooms are occupied. The average occupancy rates of offices are rarely over 60 % [8]. However, the HVAC control in offices usually does not consider the actual occupancy of offices. This leads to unnecessary energy use in unoccupied periods. 56 % of the energy consumed by buildings is used during unoccupied hours and 44 % in occupied hours [9]. For the optimization of HVAC to consider DR and individual OTS, advanced control strategies are required instead of standard thermostats [10], for example Model Predictive Control (MPC) [11] or heuristic control strategies [12]. MPC finds the optimal input trajectory for the HVAC system's control outputs over a future time horizon by solving an optimization problem under consideration of future system dynamics, forecasts, and constraints. Therefore, it requires a dynamic thermal building model and forecasts of OB and weather [13]. The development of models and forecasts can make MPC less practicable and more expensive for real-world applications [11]. In contrast, heuristic control strategies are model- and forecast-free heuristic algorithms. They iteratively adjust the power consumption of HVAC systems in order to archive certain goals. In order to do this, they use rule-based control mechanisms and heuristic algorithms that can adapt the HVAC system's heat flows to internal and external signals. Their core advantage is that they do not require a building model to solve an optimization problem [12]. Thus, they are applicable to any building without significant adjustments. ### Related Work Different control approaches are available in the literature for controlling HVAC systems. Tab. 1 compares the most relevant studies for the present paper. The most significant difference between control strategies is whether they require a model to operate or not. Most studies in the literature use a model-based approach as they can find the optimal solution of an optimization problem [7]. Especially MPC is popular in the field of DR. Most authors use MPC for controlling HVAC systems, e.g. Maddalena et al. [14], Hu et al. [15], Pedersen et al. [16], Blum et al. [17], Mork et al. [20], and Zwickel et al. [22]. While model-based approaches generally yield adequate results, they suffer from execution times and require modeling the thermal behavior of a building which is a complex task. Fewer studies use model-free control strategies. Compared to model-based strategies, the controller design process is significantly simplified, as no building-specific model is required. Model-free control algorithms can be found in the studies of Dengiz et al. [12], Rodriguez et al. [18], Nolting et al. [19], and Michailidis et al. [21]. These approaches are rule-based control mechanisms that are in few cases also combined with a heuristic approach for optimizing an objective function. In all studies, the objective is to reduce the energy costs while satisfying OTS. Blum et al. [17] additionally consider the provision of ancillary services. Another essential requirement for most of the optimized control approaches is the availability of forecasts. 
However, most of the model-free approaches do not rely on any forecast. Our literature review emphasizes the use of control algorithms for multiple zones (see Tab. 1). There are also control approaches in the literature that consider only buildings with one thermal zone (one uniform temperature in the whole building). However, the consideration of multiple zones is closer to the real thermal behavior of buildings and it also increases the complexity of the optimization problem. Another essential feature of control algorithms for DR is their capability of coupling multiple buildings in a coordinated way. While most of the listed studies use a central controller for this, Dengiz et al. [12] define a hybrid control architecture. Zwickel et al. [22] compare central and decentral control approaches for multiple buildings. To evaluate the performance of the developed control approach, all studies, except for two, use a conventional control approach, like simple rule-based control, hysteresis-based two-point controller, or a Proportional Integral (PI) controller as a lower benchmark. The studies using MPC for controlling the heating or cooling device define their results also as an upper benchmark for the optimization problem, as usually a MPC approach is solved by finding the global optimal solution. Most studies use simulated synthetic data for defining the building model and setting up the simulation. Only Maddalena et al. [14] and Michailidis et al. [21] also use measured data for evaluating the OTS. ### Contribution of this Paper The main contribution of the present paper is the introduction of a novel heuristic multi-zone control approach, called Price Storage Control (PSC). It combines external factors (e.g. electricity price) and internal factors (temperatures of different zones in the building) to determine when and how much electricity should be consumed for the generation of heat flows. The approach is model-free and does not need any forecasts. To the best of our knowledge, our study is the only one that introduces a novel control approach for buildings with multiple zones that does not need any model or forecasts and that allows for a coordinated coupling of multiple buildings. This is because of its capability to use any external factor for deriving the HVAC control output. Our study is the first that evaluates an introduced model-free and forecast-free control algorithm by using a lower and upper benchmark that are derived from the use of measured data (see Tab. 1). To evaluate the PSC control performance in terms of OTS and energy costs, we compare three different control strategies in a multi-zone thermal building model. In the evaluation, we use two scenarios with different degrees of variable room usage. In the base scenario, the temperature range is scheduled between comfort and standby mode. The second scenario also allows room-individual temperature ranges, based on the use case for each room. For comparison, we use an ideal, error-free MPC and a hysteresis-based two-point controller as upper and lower benchmarks. ### Structure of this Paper We develop and implement three different control strategies and an evaluation environment in the present work. We present the models in Sec. 2, the controllers in Sec. 3, and evaluate them in Sec. 4. Finally, we conclude the evaluation results in Sec. 5. ## 2 Models In this section, we present the models that we apply for the evaluation (see Sec. 4) of different control strategies (see Sec. 3). 
The model-based control strategy, the MPC, also uses the models internally to predict future system dynamics (see Sec. 3.2). The modeling section Sec. 2 is separated into three parts: the model for thermal dynamics of building in Sec. 2.1, for the heat pump in Sec. 2.2, and for OTS in Sec. 2.3. ### Multi-Zone Thermal Building Model In this section, we develop a multi-zone thermal building model to evaluate room-individual control strategies in Sec. 4. The model applies the Resistor Capacitor (RC) analogy to describe the heat flows between temperature nodes by resistors \(R\) and thermal dynamics by capacitors \(C\), as exemplarily shown in Eq. (1). \begin{table} \begin{tabular}{p{71.1pt} p{71.1pt} p{71.1pt} p{71.1pt} p{71.1pt} p{71.1pt} p{71.1pt} p{71.1pt}} \hline \hline Literature & Model-free control & Forecast-free control & Multi-zone control & Coupling of multiple buildings & Comparison with lower buildings & Comparison with upper benchmark & Use of measured data \\ \hline Maddalena et al., 2022 [14] & ✗ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ \\ Hu et al., 2014 [15] & ✗ & ✗ & ✓ & ✗ & ✓ & (✓) & ✗ \\ Pedersen et al., 2018 [16] & ✗ & ✗ & ✗ & ✗ & (✓) & ✓ & ✓ & ✗ \\ Blum et al., 2016 [17] & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✗ \\ Dengiz et al., 2019 [12] & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ \\ Rodriguez et al., 2018 [18] & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ \\ Nolting et al., 2019 [19] & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ Mork et al., 2022 [20] & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ \\ Michailidis et al., 2018 [21] & ✓ & ✗ & ✓ & (✓) & ✓ & ✗ & ✓ \\ Zwickel et al., 2022 [22] & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & ✗ \\ **Present work** & ✓ & ✓ & ✓ & (✓) & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of relevant papers studying approaches for demand response of HVAC systems thermally defined by the two differential equations Eq. (2) and (3). \[C_{\mathrm{i}_{j}}\frac{\mathrm{d}T_{\mathrm{i}_{j}}}{\mathrm{d}t} =\frac{T_{\mathrm{m}_{j}}-T_{\mathrm{i}_{j}}}{R_{\mathrm{i}_{j}}}+ \frac{T_{\mathrm{a}}-T_{\mathrm{i}_{j}}}{R_{\mathrm{a}_{j}}}+g_{\mathrm{s}_{j}} \dot{q}_{\mathrm{s}}+\dot{Q}_{\mathrm{h}_{j}}, \tag{2}\] \[C_{\mathrm{m}_{j}}\frac{\mathrm{d}T_{\mathrm{m}_{j}}}{\mathrm{d}t} =\frac{T_{\mathrm{i}_{j}}-T_{\mathrm{m}_{j}}}{R_{\mathrm{i}_{j}}}. \tag{3}\] ### Heat Pump Model The modeled air-source heat pump has a maximum electrical power \(P_{\mathrm{max}}\) and an energy efficiency ratio \(\varepsilon_{\mathrm{h}}\) which are both dependent on the ambient temperature. We use the model _AERO SLM 3-11 HGL_ from the Austrian heat pump manufacturer _iDM Energiesysteme GmbH_[25] with a supply temperature of the cooling system of \(18^{\circ}C\). To calculate the efficiency and the maximum cooling power at every time slot, we use the data from the manufacturer's technical fact sheet and linear interpolation. The heat pump can modulate its power consumption \(P_{\mathrm{el}}\) and thus the heat flow \(\dot{Q}_{\mathrm{h}}\) with \(\chi_{\mathrm{mod}}\) between 20 % and 100 %. 
This leads to the following relation between the heat pump's electrical power \(P_{\mathrm{el}}\) and the thermal building model's heat pump heat flows: \[\sum_{j=1}^{n}\left|\dot{Q}_{\mathrm{h}_{j}}\right|=\left|\dot{Q}_{ \mathrm{h}}\right|=\varepsilon_{\mathrm{h}}\cdot P_{\mathrm{el}}=\varepsilon_ {\mathrm{h}}\cdot(\chi_{\mathrm{mod}}\cdot P_{\mathrm{max}}), \tag{4}\] \[\chi_{\mathrm{mod}} \in[0,[0.2,1]], \tag{5}\] \[P_{\mathrm{el}} =\chi_{\mathrm{mod}}\cdot P_{\mathrm{max}} \tag{6}\] ### Occupants' Thermal Satisfaction (OTS) Model In this section, we define the temperature ranges \([y_{\mathrm{min}},y_{\mathrm{max}}]\) based on international standards for Occupants' Thermal Satisfaction (OTS) modeling. The three most frequently cited OTS standards are _ASHRAE Standard 55_[26], _ISO 7730:2005_[27], and _EN 16798-1:2019_[28]. These standards are fundamentally based on the Predicted Mean Vote (PMV) standard scale, which was first introduced by Fanger's model [29]. The PMV is a static model evaluated from a large group of people with a given combination of thermal environmental and personal parameters. These parameters include metabolic activity, clothing, air temperature, radiant temperature, air velocity, and relative humidity. In a survey, occupants express their thermal sensations on a scale from -3 (too cold) to +3 (too warm), where 0 is optimum. Fanger also developed an equation that relates the PMV to the Predicted Percentage of Dissatisfied (PPD). The standard OTS guidelines aim for a PMV from -0.5 to +0.5 (OTS level II, see Tab. 2). The OTS level can also be within closer or wider PMV boundaries, e.g. \(\pm\)0.2 for level I or \(\pm\)0.7 for level III. Wider temperature limits result in lower energy consumption of HVAC systems. Based on these OTS levels in Tab. 2, we calculate the corresponding lower \(y_{\mathrm{min}}\) and upper \(y_{\mathrm{max}}\) temperature limits that are required for Eq. (21). For the calculation of the temperature limits, we use the CBE Thermal Comfort Tool [30] with EN-16798 standard and summer clothing. In this tool, we set the mean radiant temperature equal to the air temperature \(T_{\mathrm{i}}\). This implies the assumption that the operative temperature is close to the air temperature. For more information about the operative temperature, we refer to our previous work [31]. The resulting temperature limits for different levels of OTS are presented in Tab. 2. Based on the temperature limits, we calculate the reference comfort temperature \(y_{\mathrm{r}_{j}}\) in Eq. (7). This reference temperature is required for the controller design of the PSC in Sec. 3.1. \[y_{\mathrm{r}_{j}}=\frac{y_{\mathrm{min}_{j}}+y_{\mathrm{max}_{j}}}{2} \tag{7}\] ## 3 Control Strategies This section describes the development of three different control strategies: PSC in Sec. 3.1, MPC in Sec. 3.2, and hysteresis-based two point control in Sec. 3.3. The objective is to minimize the electricity costs given by a time-variable electricity price and to maximize the OTS. While we develop the PSC as a novel control methodology for occupant-oriented demand-response with room-individual building control, the MPC and hysteresis-based two-point controller are used as upper and lower benchmarks, respectively. The MPC was implemented in Python with a prediction horizon of 16 hours and solved using Gurobi. Also, PSC and the two-point controller are implemented in Python. 
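Before turning to the individual controllers, a minimal Python sketch may help to see how the model equations above fit together in simulation: one explicit-Euler step of the room dynamics in Eqs. (2)-(3), the heat pump relation of Eqs. (4)-(6) with fact-sheet interpolation, and the reference temperature of Eq. (7). The parameter dictionary, the explicit-Euler discretization, and the fact-sheet data structure are illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

def room_step(T_i, T_m, T_a, q_s, Q_h, p):
    """One explicit-Euler step of the 2R2C room model, Eqs. (2)-(3).

    p is a dictionary of illustrative parameters: R_i, R_a, C_i, C_m, g_s, dt.
    Q_h is the heat flow delivered to the room (negative when cooling).
    """
    dT_i = ((T_m - T_i) / p["R_i"] + (T_a - T_i) / p["R_a"]
            + p["g_s"] * q_s + Q_h) / p["C_i"]
    dT_m = (T_i - T_m) / (p["R_i"] * p["C_m"])
    return T_i + p["dt"] * dT_i, T_m + p["dt"] * dT_m

def heat_pump(chi_mod, T_a, fact_sheet):
    """Heat pump relation of Eqs. (4)-(6).

    Efficiency eps_h and maximum power P_max are linearly interpolated over
    the ambient temperature from (assumed) fact-sheet data points.
    """
    eps_h = np.interp(T_a, fact_sheet["T_a"], fact_sheet["eps_h"])
    P_max = np.interp(T_a, fact_sheet["T_a"], fact_sheet["P_max"])
    # modulation degree is either 0 (off) or between 20 % and 100 %, Eq. (5)
    chi_mod = 0.0 if chi_mod < 0.2 else min(chi_mod, 1.0)
    P_el = chi_mod * P_max              # Eq. (6)
    Q_h_total = eps_h * P_el            # Eq. (4), magnitude of the total heat flow
    return P_el, Q_h_total

def reference_temperature(y_min, y_max):
    """Reference comfort temperature of Eq. (7)."""
    return 0.5 * (y_min + y_max)
```

A full simulation would simply loop `room_step` over all rooms and time slots, feeding in the heat flows chosen by the respective controller.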
\begin{table} \begin{tabular}{c c c c c} \hline \hline OTS level & PMV & PPD & \(y_{\mathrm{min}}\) in \({}^{\circ}\)C & \(y_{\mathrm{max}}\) in \({}^{\circ}\)C \\ \hline I & \(\pm\)0.2 & \(<6\) \% & 25.6 & 26.6 \\ II & \(\pm\)0.5 & \(<10\) \% & 24.8 & 27.4 \\ III & \(\pm\)0.7 & \(<15\) \% & 24.2 & 27.9 \\ \hline \hline \end{tabular} \end{table} Table 2: OTS categories, obtained from CBE Thermal Comfort Tool [30] with EN-16798 and summer clothings Figure 1: Thermal building model for each room \(j\) (\(j=1\dots n\)), obtained from [23] (modified from [24]) In general, the three control strategies are applicable to cooling or heating. For both cases, we use the generic term _heat flows_. A heat flow is the rate of net heat energy transfer between hot and cold sides and can be positive or negative for heating or cooling, respectively. ### Price Storage Control (PSC) The PSC is a heuristic control algorithm for modulating HVAC or heat pump heat flows \(\dot{Q}_{h_{j}}\) in a multi-zone building. It essentially consists of 4 steps which it executes in every time slot. 1. Determine the price factor \(\chi_{\mathrm{p}}(t)\) based on [12]. 2. Determine the storage factor \(\chi_{\mathrm{s}}(t)\). 3. Calculate the modulation degree \(\chi_{\mathrm{mod}}\) using the price factor \(\chi_{\mathrm{p}}(t)\) and the storage factor \(\chi_{\mathrm{s}}(t)\). 4. Distribute the generated heat flow to the different rooms of the multi-zone building. ### Price Factor To obtain the price factor \(\chi_{\mathrm{p}}\), the algorithm calculates the empirical distribution function \(\widetilde{F}(p)\) for the future electricity prices \(p(t)\) of the next 24 hours at the beginning of each day. We assume that we have an electricity tariff with predetermined prices for the next 24 hours (for more information see Section 4.1.1). At every time slot of the day, the value of the \(\widetilde{F}(p)\) is calculated for the current price \(p(t)\). The calculation of the empirical distribution function \(\widetilde{F}(p)\) is illustrated in Fig. 2, exemplarily for one day. \(\widetilde{F}(p)\) quantifies the share of electricity prices for the current day that have a lower or equal value compared to the price \(p\) of the current time slot. PSC sets the price factor at time slot \(t\) as in Eq. (8). A low price results in a high price factor (due to a high value of \(\widetilde{F}(p)\)) and vice versa. \[\chi_{\mathrm{p}}(t)=1-\widetilde{F}(p(t)) \tag{8}\] ### Storage Factor For the calculation of the storage factor \(\chi_{\mathrm{s}}(t)\), the state of thermal charge \(S_{j}(t)\) from Eq. (9) is needed for each room. The state of thermal charge \(S_{j}(t)\) quantifies the "stored" temperature room individually and results in values between 0 and 1. Although the PSC method is applicable for heating or cooling heat flows, we explain this method exemplarily for the cooling case in the following. \[S_{j}(t)=\frac{y_{t_{j}}+\xi_{j}-T_{i_{j}}(t-\Delta t)}{\xi_{j}} \tag{9}\] If the temperature of the room \(j\) from the last time slot \(T_{i_{j}}(t-\Delta t)\) is lower than the reference temperature \(y_{t_{j}}\) the state of thermal charge \(S_{j}(t)\) is set to 1. This means that the thermal storage of this room is full and there is no necessity for applying heat flows to the room 1. Footnote 1: As we are considering cooling in the present work it has to be noted that full thermal storage, in this case, means, that the temperature in the room is low and thus the room already has enough “cooling energy”. 
If the temperature of the room is higher than the reference comfort temperature \(y_{t_{j}}\) plus an allowed deviation buffer \(\xi_{j}\), for sufficiently high OTS, the state of thermal charge \(S_{j}(t)\) is set to 0. In the cooling case, this results in empty thermal storage as the temperature in the room is too high. For every room temperature that is between the reference temperature and the upper OTS limit (\(y_{t_{j}}+\xi_{j}\)), the algorithm uses Eq. (9) to calculate the state of thermal charge \(S_{j}(t)\) of room \(j\) (\(j=1\dots n\)). The reference temperature for every room \(y_{t_{j}}\) is calculated as in Eq. (7). This value depends on the investigated scenarios (see Sec. 4) 2. Footnote 2: For this internal parameter of the algorithm, a buffer value of \(\xi_{j}=2\,\mathrm{K}\) yields adequate results in the present work. After having determined the state of thermal charge \(S_{j}(t)\) for every room \(n\), the algorithm calculates the storage factor \(\chi_{\mathrm{s}}(t)\) by using Eq. (10). If the temperatures in the different rooms are close to the lower limit, their corresponding state of thermal charge will be high resulting in a low storage factor \(\chi_{\mathrm{s}}(t)\) and vice versa. \[\chi_{\mathrm{s}}(t)=1-\frac{\sum_{j=1}^{n}S_{j}(t)}{n} \tag{10}\] ### Modulation Degree of the HVAC system The third step of the algorithm is the calculation of the heat pump's modulation degree and thus the heat flow and the electrical power using Eq. (11). The modulation degree \(\chi_{\mathrm{mod}}(t)\) results from the multiplication of the price factor \(\chi_{\mathrm{p}}\) and storage factor \(\chi_{\mathrm{s}}\). Because both factors can have values between 0 and 1, the modulation degree \(\chi_{\mathrm{mod}}(t)\) likewise varies between 0 and 1. We choose a multiplication of the two factors instead of a weighted sum as this leads to better results in our case studies. Based on the modulation degree, Eq. (4)) and Eq. (5) calculates the generated heat flows and electrical power. \[\chi_{\mathrm{mod}}(t)=\chi_{\mathrm{p}}(t)\cdot\chi_{\mathrm{s}}(t) \tag{11}\] Two factors influence the heat pump power output. A high electricity price leads to a low price factor which leads to low values of the modulation degree. This results in low electricity consumption at that time. On the contrary, a low price leads to a high price factor which incentives the heat pump to cool down the room. This is desired as we want to generate heat flows when the electricity prices are low. Next to the price factor, the storage factor impacts the generated heat flows and thus consumed electricity. If the temperatures in the rooms are generally low, the storage factor has low values due to the high values of the state of thermal charge \(S_{j}(t)\). A low storage factor leads to low power consumption and vice versa. This is also a desired property of the control algorithm. If the room temperatures are already low, there is no urgent need for cooling whereas high room temperatures tend to lead to higher generation of heat flow using the PSC algorithm. ### Distribution of Heat Flows In the final step, the algorithm distributes the generated heat flows to the different rooms \(j\) (\(j=1\dots n\)). To do this, the caused thermal discomfort of each room \(d_{c_{j}}(t)\) due to possibly too high temperatures is determined. If the temperature of a room from the previous time slot \(T_{\mathrm{i}}(t-\Delta t)\) is higher than the upper temperature limit \(y_{\mathrm{max}_{j}}\), Eq. (12) and Eq. 
(14) quantify the caused discomfort of the room \(j\) and the total caused discomfort \(d_{\mathrm{c,total}}(t)\) from Eq. (13). \[d_{c_{j}}(t) =T_{\mathrm{i}_{j}}(t-\Delta t)-y_{\mathrm{max}_{j}} \tag{12}\] \[d_{\mathrm{c,total}}(t) =\sum_{j=1}^{n}d_{c_{j}}(t) \tag{13}\] Based on the total caused discomfort \(d_{\mathrm{c,total}}(t)\) the PSC algorithm distributes the generated heat flows \(\dot{Q}_{h}\) of time \(t\) to each room \(j\) with \(\dot{Q}_{h_{j}}\) using Eq. (14). This mechanism assures that especially rooms that have high temperatures, get more heat flow (cooling) than rooms with less need for cooling. If the heat pump generates heat flows although no room has violated its temperature boundaries in the last time slot, it equally distributes the generated heat flows to every room. \[\dot{Q}_{h_{j}}(t)=\frac{d_{c_{j}}(t)}{\sum_{j=1}^{n}d_{c_{j}}(t)}\cdot\dot{Q} _{h}(t) \tag{14}\] Overall, PSC executes the four mentioned steps for every time slot of the day while updating the empirical distribution function of the prices at the beginning of each day. ### Model Predictive Control (MPC) In contrast to PSC, MPC requires a model which is obtained by restructuring the thermal building model from Sec.2 into state-space notation. The resulting equations for \(n\) rooms are \[\begin{split}&\dot{x}(t)=f\left(x(t),u(t),z(t)\right)=Ax(t)+Bu(t)+ Ez(t),\\ & y(t)=g(x(t))=Cx(t)\end{split} \tag{15}\] where \(x\) describes the temperature states of the buildings' air temperatures (\(T_{\mathrm{i}_{j}}\)) and thermal masses (\(T_{\mathrm{m}_{j}}\)), \(u\) the control inputs (\(\dot{Q}_{h_{j}}\)), \(z\) the measurable disturbances (\(\dot{q}_{\mathrm{s}}\), \(T_{\mathrm{a}}\)), and \(y\) the outputs (\(T_{\mathrm{i}_{j}}\)) with \[\begin{split}& x=\left(T_{\mathrm{i}_{1}}\ T_{\mathrm{m}_{1}}\ T_{\mathrm{i}_{1}}\ T_{\mathrm{m}_{2}}\ \dots\ T_{\mathrm{i}_{k}}\ T_{\mathrm{m}_{n}}\right)^{\mathsf{T}},\\ & u=\left(\dot{Q}_{h_{1}}\ \dot{Q}_{h_{2}}\ \dots\ \dot{Q}_{h_{n}} \right)^{\mathsf{T}},\\ & z=\left(\dot{q}_{\mathrm{s}}\ T_{\mathrm{a}}\right)^{\mathsf{T }},\\ & y=\left(T_{\mathrm{i}_{1}}\ T_{\mathrm{i}_{2}}\ \dots\ T_{\mathrm{i}_{k}}\right)^{\mathsf{T}}.\end{split} \tag{16}\] Given the heat pump model (see Eq. (4) and Eq. (5)) and the control model (Eq. (15)), we formulate the optimization problem with a prediction horizon of N time steps, which must be solved at each sampling instant \(t\), based on [22] as \[\min\sum_{k=t}^{t+N-1}l\left(k,\chi_{\mathrm{dis}}(k|t),P_{\mathrm{ el}}(k|t)\right)\] (17a) subject to \[\forall k\in[0,N-1]:\] \[x(k+1|t) =A_{\mathrm{d}}x(k|t)+B_{\mathrm{d}}u(k|t)+E_{\mathrm{d}}z(k|t) \tag{17b}\] \[y(k|t) =C_{\mathrm{d}}x(k|t)\] (17c) \[x(0|t) =x(t),\] (17d) \[P_{\mathrm{el}}(k|t) =\chi_{\mathrm{mod}}(k|t)\cdot P_{\mathrm{max}}(k),\] (17e) \[\chi_{\mathrm{mod}}(k|t) \in\{0,[0.2,1]\},\] (17f) \[u(k|t) \in\mathcal{U},\ y(k|t)\in\mathcal{Y} \tag{17g}\] where \(l(k,\cdot,\cdot)\) is the stage-cost, (17b) and (17c) are the discrete-time control model, (17d) is the initial condition, and (17e) and (17f) are the heat pump model. \(x(t)\) is typically measured at the time \(t\), and \(\mathcal{U}\) as well as \(\mathcal{Y}\) are input and output constraint sets (see (17g)). 
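A practical step not spelled out above is obtaining the discrete-time matrices \(A_{\mathrm{d}}\), \(B_{\mathrm{d}}\), \(E_{\mathrm{d}}\), and \(C_{\mathrm{d}}\) of the prediction model (17b)-(17c) from the continuous model (15). The following is a minimal sketch assuming a zero-order hold over the sampling interval and treating the disturbances \(z\) like additional inputs; the helper name and the use of SciPy are illustrative choices, not necessarily the implementation used in this work.

```python
import numpy as np
from scipy.signal import cont2discrete

def discretize(A, B, E, C, dt):
    """Zero-order-hold discretization of dx/dt = A x + B u + E z, y = C x.

    Returns (A_d, B_d, E_d, C_d) as used in the prediction model (17b)-(17c).
    The disturbance matrix E is handled by stacking it next to B.
    """
    n_u = B.shape[1]
    BE = np.hstack([B, E])                      # treat z like extra inputs
    D = np.zeros((C.shape[0], BE.shape[1]))     # no direct feed-through
    A_d, BE_d, C_d, _, _ = cont2discrete((A, BE, C, D), dt, method="zoh")
    return A_d, BE_d[:, :n_u], BE_d[:, n_u:], C_d
```

With the weekly horizon of \(M=672\) steps used later, the sampling interval would be 15 minutes (\(dt=900\,\mathrm{s}\)), so the 16-hour prediction horizon corresponds to \(N=64\) steps; both values are inferred here rather than stated explicitly.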
The k-step ahead prediction for the states, inputs, disturbances, discomfort factor, heat pump modulation degree, and electric power, based on the current initial condition are denoted by \(x(k|t)\), \(u(k|t)\), \(z(k|t)\), \(y(k|t)\), \(\chi_{\mathrm{dis}}(k|t)\), \(\chi_{\mathrm{mod}}(k|t)\), and \(P_{\mathrm{el}}(k|t)\), respectively. We consider the following stage cost \[l\left(k,\chi_{\mathrm{dis}}(k),P_{\mathrm{el}}(k)\right)=\lambda\left(\sum_{j= 1}^{n}\chi_{\mathrm{dis}}(k)\right)+\left(1\text{-}\lambda\right)\left(P_{ \mathrm{el}}^{\prime}(k)p^{\prime}(k)\right) \tag{18}\] where \(\lambda\in[0,1]\) is a user-defined weighting coefficient and \(p(k),k\in t:t+N-1\) is a time-dependent price signal (future Figure 2: Empirical distribution function of the electricity prices electricity prices). \(P_{\mathrm{el}}^{\prime}(k|t)\) and \(p^{\prime}(k)\) are the min-max normalization of \(P_{\mathrm{el}}(k|t)\) and \(p(k)\)3, respectively, calculated as Footnote 3: In the present study, we apply \(\min\{P_{\mathrm{el}}\}=0\) W, \(\max\{P_{\mathrm{el}}\}=3500\) W, \(\min\{p\}=6.9\) Cent/kW h, and \(\max\{p\}=58.1\) Cent/kW h. \[\begin{split} P_{\mathrm{el}}^{\prime}(k)&=\frac{P _{\mathrm{el}}(k)-\min\{P_{\mathrm{el}}\}}{\max\{P_{\mathrm{el}}\}-\min\{P_{ \mathrm{el}}\}}\,,\\ p^{\prime}(k)&=\frac{p(k)-\min\{p\}}{\max\{p\}-\min\{p \}}\,.\end{split} \tag{19}\] Furthermore, the control inputs \(u\) are limited to cooling with the maximum total power constraint by the heat pump model, as formulated in Eq. (4), which leads to \[\mathcal{U}=\Big{\{}u\in\mathbb{R}^{n}\,|\,(\forall j\in[1,n]:u_{j}\leq 0) \wedge\sum_{j=1}^{n}|u|=\varepsilon_{\mathrm{h}}\cdot P_{\mathrm{el}}\Big{\}}. \tag{20}\] In addition, the control outputs should meet predefined time-variant temperature ranges \([y_{\min},y_{\max}]\) for each room \(j\). This leads to the following soft constraints: \[\begin{split}\mathcal{Y}=\Big{\{}y\in\mathbb{R}^{n}\,|& \,(\forall j\in[1,n]:y_{\min},\chi_{\mathrm{dis}_{j}}\leq y_{j}\leq y_{ \max}+\chi_{\mathrm{dis}_{j}})\\ &\wedge\chi_{\mathrm{dis}_{j}}\geq 0\Big{\}}.\end{split} \tag{21}\] ### Hysteresis-based Two-point Controller The hysteresis-based two-point control serves as the lower benchmark for the evaluation. This is a conventional control strategy for cooling (or heating) devices that cools down a room until a lower temperature limit. Afterward, the device switches off and waits until the temperature in the room has reached an upper limit. This triggers the control system to start cooling down again. We use an adaptive hysteresis that uses the upper and lower temperature limits \([y_{\min}(t),y_{\max}(t)]\) depending on the scenarios. These predefined temperature limits for OTS are described in the evaluation scenarios in Sec. 4.1.2. ## 4 Evaluation In this section, we compare the control algorithms and describe the used evaluation environment. First, we introduce the used data, scenarios, and metrics in Sec. 4.1. Then, we present the results in Sec. 4.2, discuss them in Sec. 4.3, and show limitations in Sec. 4.4 ### Data, Scenarios, and Metrics #### 4.1.1 Data We evaluate the control strategies by using weather data during the summer of 2022, obtained from a weather station on an experimental building [32]. This building is located in the _KIT EnergyLab 2.0_ (Karlsruhe, Germany), and is presented in Fig. 2(a). It has a design similar to a single-family home and is used as an office space. 
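Returning briefly to the lower benchmark, the adaptive two-point control of Sec. 3.3 can be sketched in a few lines. Running the heat pump at full modulation whenever it is switched on is an assumption made for illustration; the controller state and variable names are likewise illustrative.

```python
def hysteresis_step(T_i, y_min, y_max, cooling_on):
    """Adaptive two-point control for one room in the cooling case (Sec. 3.3).

    Cools until the time-variant lower limit y_min is reached, then stays off
    until the room temperature exceeds the upper limit y_max again.
    """
    if cooling_on and T_i <= y_min:
        cooling_on = False
    elif not cooling_on and T_i >= y_max:
        cooling_on = True
    chi_mod = 1.0 if cooling_on else 0.0   # assumed: full power while on
    return chi_mod, cooling_on
```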
For the evaluation, we use the measurements of the weather station (the solar radiation \(\dot{q}_{\mathrm{s}}\) and the ambient temperature \(T_{\mathrm{a}}\)) over a period of 13 weeks (05/30/2022 - 08/22/2022). Due to measurement gaps, we had to sort out three weeks, leaving ten weeks for the evaluation. #### 4.1.2 Scenarios We evaluate the controllers in two scenarios that differ in how much the permitted temperature ranges vary over time and between rooms. _(a) Base scenario_: The temperature ranges \([y_{\min_{j}}(t),y_{\max_{j}}(t)]\) are identical for all rooms and are scheduled between comfort and standby mode (see Tab. 3). _(b) Multi-zone adaptive scenario_: The temperature ranges \([y_{\min_{j}}(t),y_{\max_{j}}(t)]\) in all rooms \(j\) (\(j=1\ldots 5\)) can be different (see Tab. 4). In addition to the comfort and standby mode, we also use an eco mode that schedules the reference temperature by 2 K (+2 K/-2 K for cooling/heating) difference compared to the comfort mode [34]. This eco mode saves energy compared to the comfort mode and also enables fast re-cooling / re-heating compared to the standby mode. In addition, the eco mode can save energy in rooms that are less frequently used than office rooms, e.g. bathrooms or kitchens. In the multi-zone adaptive scenario (see Tab. 4), we let the control operate with a high focus on OTS in occupied rooms and on energy saving in unoccupied rooms. Therefore, we use the comfort mode in the offices (rooms 1 and 2) during working hours and in the kitchen (room 4) during lunch breaks from 12am to 1pm. In this scenario, the first office (room 1) is used over the entire working day, except lunch break, and the second office (room 2) only from 8am to 12am (part-time job). The bathroom (room 5) and storage (room 3) should be operated in eco mode during working hours (8am to 5pm). #### 4.1.3 Metrics We use two Key Performance Indicators (KPIs) to evaluate (i) how accurately a controller meets the desired OTS and (ii) how much energy the control strategy consumes to do so. Mathematically, we define the KPIs as the weekly costs \(c_{\text{m,week}}\) in Eq. (22) and the mean weekly discomfort \(d_{\text{m,week}}\) in Eq. (23), \[c_{\text{m,week}} =\sum_{k=1}^{M}\left(p(k)\int_{k}P_{\text{el}}(k)\,dt_{k}\right) \tag{22}\] \[d_{\text{m,week}} =\frac{1}{M}\left(\sum_{k=1}^{M}\sum_{j=1}^{n}d_{c_{j}}(k)\right). \tag{23}\] The KPIs consider energy costs and OTS during each time-step \(k\) for all time steps \(M=672\) of each week. The energy costs \(c_{\text{m,week}}\) depend on a dynamic energy tariff \(p(k)\) and the consumed electric power \(P_{\text{el}}(k)\). The discomfort \(d_{\text{m,week}}\) evaluates the deviation \(d_{c_{j}}(k)\) of the actual room temperature from the allowed OTS range. This permitted temperature range is time-variant, depending on room-individual usage/attendance profiles, as introduced in the scenarios in Sec. 4.1.2. Both KPIs are competing: when one is improved, the other usually deteriorates. The objective is to minimize both KPIs simultaneously, i.e. to have low costs and low discomfort. ### Results We present the results of the three different control algorithms, MPC (ideal and error-free), PSC, and hysteresis-based two-point controller (see Sec. 3), in Fig. 4 and A.5. The overall results for both scenarios over the entire evaluation period of ten weeks can be obtained from Fig. 4. Fig. A.5 illustrates the dynamic response of the thermal building model to the three applied control strategies, exemplarily for the base scenario during one week. The top y-axes present the room air temperatures and the permitted temperature ranges for the air temperatures \([y_{\rm min_{j}}(t),y_{\rm max_{j}}(t)]\). The bottom y-axes present the controlled variable \(P_{\rm el}\), the disturbance variables \(T_{\rm a}\) and \(\dot{q}_{\rm s}\), and the dynamic electric price function \(p\). 
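As a side note on the metrics in Eqs. (22) and (23), the following minimal sketch shows how both weekly KPIs could be computed from logged trajectories such as those shown in Fig. A.5. The 15-minute step length (derived from \(M=672\) steps per week) and the unit choices are assumptions for illustration.

```python
import numpy as np

def weekly_kpis(P_el, price, T_i, y_max, dt_hours=0.25):
    """Weekly costs (Eq. 22) and mean weekly discomfort (Eq. 23).

    P_el  : array (M,)   consumed electric power in kW per time step
    price : array (M,)   electricity price in cent/kWh per time step
    T_i   : array (M, n) room air temperatures
    y_max : array (M, n) upper temperature limits per room and time step
    """
    M = len(P_el)
    cost = np.sum(price * P_el * dt_hours)        # Eq. (22), price times energy
    excess = np.clip(T_i - y_max, 0.0, None)      # d_cj(k) as in Eq. (12)
    discomfort = np.sum(excess) / M               # Eq. (23)
    return cost, discomfort
```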
The control characteristics of the three controllers are distinguishable, although, for all controllers, the controlled variable \(P_{\rm el}\) of the heat pump operates only in three of seven days noticeably (06-29, 06-30, and 07-03). In general, the temperature trajectories controlled by MPC and PSC are more similar than those controlled by the hysteresis-based two-point controller. The MPC cools most frequently but at a lower power \(P_{\rm el}\). In a few cases, the MPC exceeds the upper temperature limits, e.g. on the 06-30. Therefore, the MPC meets temperature ranges more adequately on the next day (07-01), where the temperatures of the PSC and hysteresis-based two-point controller are too low. ### Overall Results for Ten Weeks We perform evaluations for the three controllers in two scenarios over ten different weeks and summarize the results in Fig. 4. On the y-axis in Fig. 4, we visualize the two KPIs, the _mean weekly costs_ ("costs") and the _mean discomfort_ ("discomfort") from Eq. (22) and (23). The results are shown for the two scenarios, the _(a) base scenario_ and the _(b) multi-zone adaptive scenario_, where _(a)_ and _(b)_ are based on the temperature ranges in Tab. 3 and 4, respectively. When evaluating the three control strategies in Fig. 4, the MPC and PSC show superior results in terms of costs and discomfort, compared to the hysteresis-based two-point controller. In both scenarios, the MPC and PSC have lower discomfort and approximately half the costs of the hysteresis-based two-point controller (e.g. in _(a)_ from 2.18 to 1.08 or 1.18). The performance of the MPC depends more on the evaluated scenario _(a)_ vs. _(b)_ than for the other two controllers. In the base scenario, The MPC and PSC have a similar overall performance (1.18 vs. 1.08 costs and 0.52 vs. 0.59 discomfort). In contrast, in the multi-zone adaptive scenario, the MPC outperforms the PSC with 38.5 % lower costs (from 1.09 to 0.67) and also lower discomfort (from 0.19 to 0.13). In summary, we obtain the highest overall performance regarding costs and discomfort with the MPC and PSC, while the hysteresis-based two-point controller shows the lowest performance. The performance difference between MPC and PSC varies depending on the evaluation scenario. In the _(a) base scenario_, the MPC and PSC have similar control results, while in the _(b) multi-zone adaptive scenario_, the MPC outperforms the PSC with 38.5 % lower costs and also lower discomfort. ### Discussion The results in Sec. 4.2 evaluate the performance of the PSC by comparison with the upper and lower benchmarks using ideal, error-free MPC and hysteresis-based two-point controller, respectively. Overall, the control performance of the PSC is significantly superior to the hysteresis-based two-point controller and close to the ideal MPC. In the following, we discuss the differences in control performance. The three controllers differ in their complexity and how much knowledge about future system behavior they require. The hysteresis-based two-point controller uses only minimal and maximal temperatures \([y_{\rm min_{j}}(t),y_{\rm max_{j}}(t)]\) without any forecasts or models. When a maximal temperature is reached, it cools over a defined period. The PSC, on the other hand, tries to meet a reference temperature that is in the middle of the minimal and maximal ranges. The PSC requires knowledge about the temperature ranges, but also about the energy tariff and the heat pump modulation. 
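A minimal sketch of one PSC time slot makes this limited information requirement explicit. It follows Eqs. (8)-(14) for the cooling case; the buffer \(\xi_{j}=2\,\mathrm{K}\) is taken from Sec. 3.1, while the variable names and the handling of modulation degrees below the 20 % minimum are illustrative assumptions.

```python
import numpy as np

def psc_step(prices_today, price_now, T_prev, y_ref, y_max, xi=2.0):
    """One PSC step in the cooling case, following Eqs. (8)-(14).

    prices_today : array of all electricity prices of the current day
    price_now    : price of the current time slot
    T_prev       : array (n,) room temperatures of the previous time slot
    y_ref, y_max : arrays (n,) reference temperatures and upper limits
    Returns the modulation degree and the per-room share of the heat flow.
    """
    # Step 1: price factor from the empirical distribution function, Eq. (8)
    F_p = np.mean(prices_today <= price_now)
    chi_p = 1.0 - F_p

    # Step 2: storage factor from the per-room state of charge, Eqs. (9)-(10)
    S = np.clip((y_ref + xi - T_prev) / xi, 0.0, 1.0)
    chi_s = 1.0 - np.mean(S)

    # Step 3: modulation degree, Eq. (11); values below the 20 % minimum are
    # rounded down to "off" (assumption on how sub-minimal requests are handled)
    chi_mod = chi_p * chi_s
    if chi_mod < 0.2:
        chi_mod = 0.0

    # Step 4: distribute heat flow proportionally to discomfort, Eqs. (12)-(14),
    # or equally if no room violated its upper limit in the last time slot
    d_c = np.clip(T_prev - y_max, 0.0, None)
    n = len(T_prev)
    share = d_c / d_c.sum() if d_c.sum() > 0 else np.full(n, 1.0 / n)
    return chi_mod, share
```

The returned modulation degree would then be converted into \(P_{\mathrm{el}}\) and \(\dot{Q}_{\mathrm{h}}\) via Eqs. (4)-(6) and split across the rooms according to the returned shares, cf. Eq. (14).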
Exploiting this knowledge reduces the energy costs of the PSC compared to the hysteresis-based two-point controller because the PSC can apply cooling during periods of low energy prices. The MPC uses the largest amount of available information, which increases its performance accordingly. It does not only use temperature ranges, energy tariffs, and heat pump modulation. In addition, the MPC needs a thermal building model and weather forecasts. With that internal control model and the forecasts, the MPC can predict future system behavior in advance and schedule the cooling load optimally. As a result, MPC outperforms the PSC when high variations in the temperature ranges \([y_{\rm min_{j}}(t),y_{\rm max_{j}}(t)]\) occur, as in the _(b) multi-zone adaptive scenario_. The MPC exploits the knowledge about thermal storage inside the building, which enables finding an optimal cooling trajectory. The PSC is suitable for tracking a reference temperature when fewer variations in the temperature ranges occur. In the _(a) base scenario_, the PSC and MPC show similarly high performance. It should be noted that in the base scenario, a major part of the discomfort results from a too-cold temperature, instead of a too-warm one. None of the three controllers was allowed to heat the building. Especially in the morning periods, the low ambient temperatures make it challenging for the controllers to meet warm enough thermal conditions in the building. The MPC can only mitigate temperatures that are too low by allowing temperatures that are too warm at other times (see Fig. A.5, 06-30 and 07-01). Overall, the MPC cannot show its advantage as an optimum controller to its best advantage in the base scenario. In summary, the controllers perform as expected where a higher complexity and use of more information improve the control quality. While the PSC outperforms the hysteresis-based two-point controller, the differences between PSC and ideal MPC are much smaller. On the one hand, the MPC has a superior performance in one of the two scenarios. On the other hand, the MPC is significantly more complex to design, requiring a thermal model for each room and a forecast, which we both assumed to be error-free for our case study. Compared to the conventional hysteresis-based two-point controller, PSC leads to a reduction of the two combined criteria costs and discomfort of 44 % (50 % costs reduction and 15 % comfort improvements) while the MPC achieves combined improvements of 53 % (58 % cost reduction and 29 % comfort improvements). ### Limitations The evaluation of control strategies in this work is based on simulation results, which can neglect several effects from the real application. The control strategies are performed on a multi-zone thermal building model instead of a real building. The model parameters are based on literature values instead of identification from parameter identification. The model and weather forecasts of the MPC are assumed as error-free. The evaluation is limited to a cooling scenario of a single building. Weather data is used for ten weeks during summer in Karlsruhe, Germany. The cooling demand in Germany is lower than in other regions of the world. A heating scenario is not investigated. The evaluated scenarios consider no Photovoltaic (PV), battery, Battery Electric Vehicle (BEV), or thermal water storage in the optimization. 
## 5 Conclusion In this study, we investigate how a novel multi-zone Price Storage Control (PSC) can provide Demand Response (DR) while considering room-individual Occupants' Thermal Satisfaction (OTS) without using a thermal building model and weather forecasts. Therefore, we develop three different control strategies, a multi-zone evaluation environment, and two different scenarios to compare the controllers. We compare the PSC with an ideal, error-free Model Predictive Control (MPC) and hysteresis-based two-point controller as upper and lower benchmarks, respectively. The ideal MPC and PSC achieve higher control performance than the hysteresis-based two-point controller in terms of energy costs and mean discomfort. The PSC leads to a reduction of both criteria combined of 44 % while the MPC achieves improvements of 53 %. Under consideration that the PSC requires no models and no forecasts, this control strategy seems especially beneficial for real-world control applications. Our developed control approach is easy to implement and can be used for every building without large-scale adjustments. Further, it can include other external signals in its decision-making like the load of the electricity grid or a generation signal of renewable energy sources. Thus, it can contribute to balancing electricity demand and supply and lead to better utilization of renewable energy sources in future energy systems. In future work, we want to apply the developed control strategy to a real-world application. For the MPC real-world application, we need to perform parameter identification and design a state estimator. For a more realistic scenario, we plan to include more relevant components into the optimization, e.g. thermal water storage, Photovoltaic (PV) self-production and -consumption, and batteries. Finally, we plan to evaluate the controller in a heating scenario. ## Data Availability We added the following supplementary materials to an open-source online repository on GitHub: * results of the three control strategies for all individual weeks in both scenarios, * used input data for the electricity price and weather data, * commented Python code of the three control strategies. [https://github.com/Occupant-Oriented-Demand-Response/Control-Results](https://github.com/Occupant-Oriented-Demand-Response/Control-Results). ## Acknowledgment This work was conducted within the project FlexKalte, funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK). The authors would like to thank their colleagues from the Energy Lab 2.0 and the Institute for Automation and Applied Informatics (IAI) for all the fruitful discussions and collaborations. ## Appendix A Control Results for One Week The control results for one week, as discussed in Sec. 4.2, are presented in Fig. 14. Figure 13: Control results of three controllers, evaluated in two different scenarios (a) and (b)
2303.02611
A Solution to the 1-2-3 Conjecture
We show that for every graph without isolated edge, the edges can be assigned weights from {1,2,3} so that no two neighbors receive the same sum of incident edge weights. This solves a conjecture of Karo\'{n}ski, Luczak, and Thomason from 2004.
Ralph Keusch
2023-03-05T08:46:38Z
http://arxiv.org/abs/2303.02611v4
# A Solution to the 1-2-3 Conjecture ###### Abstract We show that for every graph without isolated edge, the edges can be assigned weights from \(\{1,2,3\}\) so that no two neighbors receive the same sum of incident edge weights. This solves a conjecture of Karonski, Luczak, and Thomason from 2004. ## 1 Introduction Let \(G=(V,E)\) be a simple graph. A \(k\)-edge-weighting is a function \(\omega:E\to\{1,\ldots,k\}\). Given an edge-weighting \(\omega\), for each vertex \(v\in V\) we denote by \(s_{\omega}(v):=\sum_{w\in N(v)}\omega(\{v,w\})\) its _weighted degree_. We say that two vertices \(v,w\in V\) have a coloring conflict if \(s_{\omega}(v)=s_{\omega}(w)\) and \(\{v,w\}\in E\). If there is no coloring conflict in the graph, \(\omega\) is called _vertex-coloring_. We are interested to find the smallest integer \(k\) that admits a vertex-coloring \(k\)-edge-weighting for the graph \(G\). This question arised as the local variant of the graph irregularity strength problem, where one seeks to find a \(k\)-edge-weighting so that _all_ nodes receive different weighted degrees [10]. In 2004, Karonski, Luczak, and Thomason conjectured that for each connected graph with at least two edges, a vertex-coloring \(3\)-edge-weighting exists [17]. Soon after, the problem was referred to as 1-2-3 Conjecture and gained a lot of attention due to its elegant statement. Karonski et al. verified the conjecture for 3-colorable graphs [17]. Afterwards, Addario-Berry, Dalal, McDiarmid, Reed, and Thomason provided the first finite, general upper bound of \(k=30\)[2]. The general result was improved to \(k=16\) by Addario-Berry et al. [3] and further to \(k=13\) by Wang and Yu [28]. In 2010, Kalkowski, Karonski, and Pfender made a big step and proved upper bounds of \(k=6\) and \(k=5\), using a simple algorithmic argument [14, 15]. More results have been dedicated to specific graph classes. For \(d\)-regular graphs, a bound of \(k=4\) has been proven for \(d\leq 3\)[17], for \(d=5\)[7], and then in general [20]. Furthermore, Przybylo gave an affirmative answer to the conjecture for \(d\)-regular graphs, given that \(d\geq 10^{8}\)[20]. In addition, the conjecture was confirmed by Zhong for ultra-dense graphs, i.e., for all graphs \(G=(V,E)\) where the minimum degree is at least \(0.99985|V|\)[31]. Recently, Przybylo asserted the statement as well for all graphs where the minimum degree is sufficiently large [21]. Concretely, by applying the Lovasz Local Lemma, he proved that there exists a constant \(C>0\) such that the conjecture holds for all graphs with \(\delta(G)\geq C\log\Delta(G)\). However, not always 3 weights are necessary. For instance, a random graph \(G(n,p)\) asymptotically almost surely admits a 2-edge-weighting without coloring conflicts [3]. For all \(d\)-regular bipartite graphs [9], Chang et al. have shown that \(k=2\) is possible as well, given that \(d\geq 3\). Regarding the computational complexity, Dudek and Wajc proved that it is NP-complete to determine whether a given graph \(G\) supports a vertex-coloring 2-edge-weighting [11], whereas the same decision problem is in \(P\) for bipartite graphs [26]. Many closely related problems have been analyzed. A natural variant are total weightings as introduced by Przybylo and Wozniak [22], where the edges receive weights from \(\{1,\ldots,k\}\) as before, but additionally each vertex gets a weight from \(\{1,\ldots,\ell\}\). 
The weighted degree of a vertex is then defined as the sum of all incident edge weights, plus the weight that it received itself. Przybylo and Wozniak conjectured that for each graph there exists a vertex-coloring total weighting with vertices and edges weighted by the set \(\{1,2\}\). While this question is still open, a simple argument from Kalkowski shows that each graph admits a vertex-coloring total weighting with vertex weights from \(\{1,2\}\) and edge weights from \(\{1,2,3\}\)[13]. A weaker version of vertex-coloring edge-weightings can be obtained by defining the vertex colors as multisets of incident edge-weights, instead of sums [17, 1]. Recently, Vuckovic reached the optimal bound \(k=3\) for this variant [27]. Vice versa, a harder variant are list colorings, where each edge \(e\in E\) has its own list \(L(e)\) of allowed edge-weights [5]. In fact, the application of Alon's Nullstellensatz [4] led to significant results on this intriguing problem [25, 8, 32] and on its variant for total weightings [23, 29, 30]. Many more variations of vertex-coloring edge-weightings have been studied, e.g., variations for hypergraphs [16, 6] or directed graphs [5, 19]. For a general overview of the progress on the 1-2-3 Conjecture and on related problems, we refer to the early survey of Seamone [24] and to the recent survey of Grytczuk [12]. Turning back to the original question, the general upper bound was recently shrinked to \(k=4\)[18]. With the present paper, we close the final gap and confirm the conjecture. **Theorem 1**.: _Let \(G=(V,E)\) be a graph without connected component isomorphic to \(K_{2}\). Then there exists an edge-weighting \(\omega:E\to\{1,2,3\}\) such that for each edge \(\{v,w\}\in E\),_ \[\sum_{u\in N(v)}\omega(\{u,v\})\neq\sum_{u\in N(w)}\omega(\{u,w\}).\] In Section 2, we give an overview of the proof strategy, describe how the proof is elaborated upon the ideas from [18], and collect several auxiliary results. Afterwards, in Section 3 we formally prove the theorem. Finally, we conclude with a few remarks in Section 4. ## 2 Main ideas and proof preparations We use the following notation. Let \(G=(V,E)\) be a graph, let \(W\subseteq V\), and let \(C=(S,T)\) be a cut. Then we denote by \(E(W)\) the edge set of the induced subgraph \(G[W]\) and by \(E(S,T)\) the subset of edges having an endpoint in both \(S\) and \(T\) (the cut edges of \(C\)). For a vertex \(v\in V\), \(N(v)\) stands for its neighborhood and \(\deg_{W}(v):=|N(v)\cap W|\) is the number of neighbors in \(W\). Finally, for two disjoint subsets \(S,T\subseteq V\), denote by \(G(S,T)\) be the bipartite subgraph with vertex set \(S\cup T\) and edge set \(E(S,T)\). As a starting point, let us summarize the strategy that was introduced in [18] to construct a conflict-free edge-weighting with weights \(\{1,2,3,4\}\). There, we started with a maximum cut \(C=(S,T)\) and initial weights from \(\{2,3\}\), making the weighted degrees of nodes in \(S\) even and those of nodes in \(T\) odd. Depending on the remaining coloring conflicts, an auxiliary flow problem on \(G(S,T)\) was carefully designed. Then, the resulting maximum flow yielded a collection of edge-disjoint paths, along which the edge-weights could be changed in order to make the edge-weighting vertex-coloring. We are going to extend that approach as follows. We partition the vertex set into two sets \(R\) and \(B\) of _red_ and _blue_ nodes, where the red vertices form an independent set. 
We start by giving each edge weight \(2\) and apply the strategy from [18] only to the subgraph \(G[B]\), consequently only taking a maximum cut \(C=(S,T)\) of \(G[B]\). However, when putting the weights onto \(E(B)\), we do not yet finalize the vertex-weights of the blue nodes. Instead, we only ensure that there are no coloring conflicts inside \(S\) and inside \(T\), which is possible with the weight set \(\{1,2,3\}\). Afterwards, we cautiously construct a weighting for \(E(R,B)\) such that the weighted degrees of vertices in \(R\) remain even, but the weighted degrees of the blue nodes become odd. More precisely, the weighted degree of nodes in \(S\) should obtain values \(1\pmod{4}\) and those of \(T\) obtain values \(3\pmod{4}\). Consequently, all coloring conflicts will be resolved and the edge-weighting becomes vertex-coloring. Unfortunately, our construction requires that the set \(B\) is of even cardinality, enforcing us to handle several different situations when proving Theorem 1 in Section 3. In some cases, the described strategy only works when one or even two vertices are removed from the graph. Afterwards, when re-inserting the nodes, augmenting the edge-weighting to the full graph sometimes requires an additional round of weight modifications, for instance along a path \(p\). We now start with the formal preparations for the proof. To find a suitable independent set \(R\) of red nodes, we will apply the following simple result. **Lemma 2**.: _Let \(G=(V,E)\) be a connected graph, let \(v,w\in V\), and let \(p\) be a shortest \(v\)-\(w\)-path. Then there exists an independent set \(R\subseteq V\) such that_ 1. _the graph_ \(G(R,V\setminus R)\) _is connected, and_ 2. _the path_ \(p\) _is alternating between_ \(R\) _and_ \(V\setminus R\)_, where we can choose whether_ \(v\in R\) _or_ \(v\in V\setminus R\)_._ Proof.: Let \(p=\{v_{1}:=v,v_{2},\ldots,v_{k}:=w\}\) be a shortest \(v\)-\(w\)-path in \(G\). If \(v=v_{1}\) is required to be in \(R\), we start with \(R:=\{v_{1}\}\), otherwise we start with \(R:=\emptyset\). Next, we put every second vertex of \(p\) into \(R\). Because \(p\) is a shortest \(v\)-\(w\)-path, \(R\) remains an independent set and the required properties hold at least for the induced subgraph \(G[\{v_{1},\ldots,v_{k}\}]\). Let \(v_{k+1},\ldots,v_{n}\) be an ordering of the remaining vertices of \(V\) (if there are any), such that for each \(i>k\), \(v_{i}\) has at least one neighbor \(v_{j}\) with \(j<i\). We are going to proceed the remaining vertices one after another, thereby extending \(R\), and prove by induction that for each \(i>k\), the graph \(G[\{v_{1},\ldots,v_{i}\}]\) achieves property (i). Consider a vertex \(v_{i}\) and assume that \(G[\{v_{1},\ldots,v_{i-1}\}]\) satisfies the precondition. If \(v_{i}\) already has a neighbor \(v_{j}\in R\), we do not extend \(R\), otherwise we extend the set \(R\) by adding the node \(v_{i}\). In both cases, \(R\) obviously remains an independent set and \(G(R,\{v_{1},\ldots,v_{i}\}\setminus R)\) is connected, thus the statement follows by induction. Once having partitioned the vertex set into the independent set \(R\) of red nodes and the set \(B:=V\setminus R\) of blue nodes, we will start assigning weights to the edges \(E(B)\). At the same time, we introduce an odd-valued _designated color_\(f(v)\) for each blue vertex \(v\in B\), so that the function \(f\) is a proper vertex-coloring. We thereby take into account that the blue-red-edges contribute to the weighted degrees as well. 
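To make the construction in the proof of Lemma 2 concrete, the following sketch builds such a set \(R\) for a connected graph given as an adjacency list. It merely mirrors the argument (a breadth-first shortest path, alternation along the path, then a greedy pass over the remaining vertices in an order where each vertex has an earlier neighbour) and is an illustration rather than part of the proof; the data structures are assumptions.

```python
from collections import deque

def red_set(adj, v, w, v_red=True):
    """Greedy construction of the independent set R from Lemma 2.

    adj   : dict mapping each vertex to the set of its neighbours (connected graph)
    v, w  : endpoints of the shortest path that must alternate w.r.t. R
    v_red : whether v itself is placed into R
    """
    # shortest v-w path via breadth-first search
    parent = {v: None}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if u == w:
            break
        for x in adj[u]:
            if x not in parent:
                parent[x] = u
                queue.append(x)
    path = []
    u = w
    while u is not None:
        path.append(u)
        u = parent[u]
    path.reverse()                      # path = v_1, ..., v_k with v_1 = v

    # every second path vertex becomes red, starting with v if requested
    start = 0 if v_red else 1
    R = set(path[start::2])

    # order the remaining vertices so that each has an earlier neighbour,
    # then add a vertex to R only if none of its neighbours is already red
    order = list(path)
    discovered = set(path)
    frontier = deque(path)
    while frontier:
        u = frontier.popleft()
        for x in adj[u]:
            if x not in discovered:
                discovered.add(x)
                order.append(x)
                frontier.append(x)
    for x in order[len(path):]:
        if not (adj[x] & R):
            R.add(x)
    return R
```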
Because we only have edge-weights \(\{1,2,3\}\) available, our capabilities are limited and we have to keep the weighted degrees in \(B\) even for the moment. But we can construct the edge-weighting of \(E(B)\) so that the current weights and the designated colors almost coincide and only differ by \(1\). Later, we will overcome the remaining differences when carefully assigning edge-weights to \(E(R,B)\). With the following lemma, we adapt the key ideas from [18] to our setting. We will typically apply it to the subgraph \(G[B]\) and to the function \(h(v):=2\deg_{R}(v)\), to obtain an edge-weighting \(\omega\) of \(E(B)\) and a function of designated colors \(f:B\to\{1,3,5,\ldots\}\). Recall that we denote by \(s_{\omega}(v)\) the weighted degree of a vertex \(v\) under the weighting \(\omega\). **Lemma 3**.: _Let \(G=(V,E)\) be a not necessarily connected graph and let \(h:V\to\{0,2,4,\ldots\}\) be a function attaining only even values. Then there exists an edge-weighting \(\omega:E\to\{1,2,3\}\) and a function \(f:V\to\{1,3,5,\ldots\}\) attaining only odd values such that_ 1. \(f(v)\neq f(w)\) _for each edge_ \(\{v,w\}\in E\)_, and_ 2. \(|s_{\omega}(v)+h(v)-f(v)|=1\) _for all_ \(v\in V\)_._ We prove the lemma with the flow-based strategy that was introduced in [18]. A key step towards the statement is therefore the following auxiliary result. **Lemma 4** (Lemma 2 in [18]).: _Let \(G=(V,E)\) be a graph, let \(C=(S,T)\) be a maximum cut of \(G\), let \(F\subseteq E(S)\cup E(T)\), and let \(\sigma\) be an orientation of the edge set \(F\). Furthermore, let \(G_{C,F,\sigma}\) be the auxiliary directed multigraph network constructed as follows._ 1. _As vertex set, take_ \(V\)_, and add a source node_ \(s\) _and a sink node_ \(t\)_._ 2. _For each edge_ \(\{u,v\}\in E(S,T)\)_, insert the two arcs_ \((u,v)\) _and_ \((v,u)\)_, both with capacity_ \(1\)_._ 3. _For each edge_ \(\{u,v\}\in F\) _with corresponding orientation_ \((u,v)\in\sigma\)_, insert arcs_ \((s,u)\) _and_ \((v,t)\)_, both with capacity_ \(1\)_, potentially creating multi-arcs. Do not insert_ \((u,v)\)_._ _Then in the network \(G_{C,F,\sigma}\), there exists an \(s\)-\(t\)-flow of value \(|F|\)._ Proof of Lemma 3.: Let \(G=(V,E)\) be a graph and let \(h:V\to\{0,2,4,\ldots\}\). In a first step, we define designated colors \(f(v)\) of odd parity such that two neighbors always receive distinct colors. Afterwards, we construct the edge-weighting \(\omega\) that satisfies (ii). Let \(\{v_{1},\ldots,v_{n}\}\) be an arbitrary ordering of \(V\) and let \(C=(S,T)\) be a maximum cut of \(G\). We assign the designated colors \(f(v_{i})\) to all vertices one after another. We aim to define designated colors such that all \(v_{i}\in S\) receive a color \(f(v_{i})\equiv 1\pmod{4}\) and each \(v_{i}\in T\) receives a color \(f(v_{i})\equiv 3\pmod{4}\). Consider a vertex \(v_{i}\in V\), assume that \(v_{1},\ldots,v_{i-1}\) already got a designated color, and let \(s(v_{i}):=h(v_{i})+2\deg(v_{i})\). Denote by \(k_{i}\geq 0\) the number of neighbors \(v_{j}\) with \(j<i\) that are on the same side of the cut as \(v_{i}\). To define a suitable value \(f(v_{i})\), we consider the following set of \(2k_{i}+2\) odd values: \[S(v_{i}):=\{s(v_{i})-2k_{i}-1,\ldots,s(v_{i})-1,s(v_{i})+1,\ldots,s(v_{i})+2k_ {i}+1\}.\] We shall choose a value for \(f(v_{i})\) from \(S(v_{i})\). Because the value of \(f(v_{i})\pmod{4}\) is already determined, half of the elements from \(S(v_{i})\) are not allowed, so \(k_{i}+1\) potential choices are remaining. 
At most \(k_{i}\) of them are blocked by values \(f(v_{j})\) from the neighbors \(v_{j}\) of \(v_{i}\) with a smaller index. Hence, there exists at least one suitable value \(f(v_{i})\in S(v_{i})\) with the following properties: * \(f(v_{i})\neq f(v_{j})\) for all neighbors \(v_{j}\) of \(v_{i}\) with \(j<i\), * \(f(v_{i})\equiv 1\pmod{4}\) if \(v_{i}\in S\) and \(f(v_{i})\equiv 3\pmod{4}\) if \(v_{i}\in T\), and * \(|f(v_{i})-s(v_{i})|\leq 2k_{i}+1\). Fix a value for \(f(v_{i})\) from \(S(v_{i})\) that satisfies these three properties simultaneously. We repeat this procedure for all vertices one after another to achieve property (i) of the statement. It remains to define the edge-weighting \(\omega\) that fulfills (ii). Let \(V^{+}:=\{v_{i}\in V:f(v_{i})>s(v_{i})\}\) and \(V^{-}:=\{v_{i}\in V:f(v_{i})<s(v_{i})\}\). Moreover, for all \(v_{i}\in V\) let \[g(v_{i}):=\begin{cases}\frac{1}{2}(f(v_{i})-s(v_{i})-1),&\text{if $v_{i}\in V^{+}$},\\ \frac{1}{2}(f(v_{i})-s(v_{i})+1),&\text{if $v_{i}\in V^{-}$}.\end{cases}\] Observe that for all \(1\leq i\leq n\), \(|g(v_{i})|\leq k_{i}\) by construction. In order to apply Lemma 4, we construct a subset \(F\subseteq E[S]\cup E[T]\) and an orientation \(\sigma\) of \(F\) as follows. For each vertex \(v_{i}\in S\), choose \(|g(v_{i})|\) neighbors \(v_{j}\in S\) with smaller index (i.e., \(j<i\)) and add the \(|g(v_{i})|\) edges \(\{v_{i},v_{j}\}\) to \(F\). If \(v_{i}\in V^{+}\), increase the weight of the \(|g(v_{i})|\) edges \(\{v_{i},v_{j}\}\) to \(3\) and add the orientations \((v_{i},v_{j})\) to \(\sigma\). Vice versa, if \(v_{i}\in V^{-}\), decrease the weight of the edges \(\{v_{i},v_{j}\}\) to \(1\) and add the orientation \((v_{j},v_{i})\) to \(\sigma\). After having executed the described modifications on the edge weights for all vertices in \(S\), the weighted degree of each \(v_{i}\in S\) potentially received changes when considering itself and when considering vertices with higher index, resulting in a current value of \[t(v_{i}):=2\deg(v_{i})+2g(v_{i})+|\{w:(w,v_{i})\in\sigma\}|-|\{w:(v_{i},w)\in \sigma\}|,\] no matter whether \(v_{i}\in V^{+}\) or \(v_{i}\in V^{-}\). For each vertex \(v_{i}\in T\), choose as well \(|g(v_{i})|\) neighbors \(v_{j}\in T\) with \(j<i\) and add the \(|g(v_{i})|\) edges \(\{v_{i},v_{j}\}\) to \(F\). If \(v_{i}\in V^{+}\), change the weight of these edges to 3 and always add the orientation \((v_{j},v_{i})\) to \(\sigma\). If \(v_{i}\in V^{-}\), always add the orientation \((v_{i},v_{j})\) to \(\sigma\) and change the edge-weights to 1. Mind the differences compared to \(S\) regarding the orientations. Under the modified weighting, the current weighted degree of \(v_{i}\in T\) is \[t(v_{i}):=2\deg(v_{i})+2g(v_{i})+|\{w:(v_{i},w)\in\sigma\}|-|\{w:(w,v_{i})\in \sigma\}|.\] Having defined \(F\) and \(\sigma\), we proceed by constructing the auxiliary multigraph \(G_{C,F,\sigma}\) as specified in the statement of Lemma 4. Thereby, each edge of \(F\) leads to exactly one arc incident to \(s\) and one arc incident to \(t\), where \(s\) and \(t\) are the two additional nodes inserted into the graph. For each node \(v_{i}\), the construction is such that the number of arcs from \(s\) to \(v_{i}\) is \(|\{w:(v_{i},w)\in\sigma\}|\) and the number of arcs from \(v_{i}\) to \(t\) is \(|\{w:(w,v_{i})\in\sigma\}|\). We now apply Lemma 4 to \(G\) and obtain an \(s\)-\(t\)-flow of size \(|F|\) in the auxiliary multigraph \(G_{C,F,\sigma}\). 
As all edges have capacity 1, there are \(|F|\) edge-disjoint \(s\)-\(t\)-paths in \(G_{C,F,\sigma}\). Consider such a directed path \(p=(s,u_{1},\ldots,u_{m},t)\), and let \(p^{\prime}=\{u_{1},\ldots,u_{m}\}\) be its induced, undirected subpath in the bipartite graph \(G(S,T)\). Unless \(u_{1}=u_{m}\) (which happens when \(p^{\prime}\) is an empty path), we modify the weighting \(\omega\) of each edge \(\{u_{i},u_{i+1}\}\in p^{\prime}\) as follows: increase the weight to 3 if \(u_{i}\in S\), and decrease the weight to 1 if \(u_{i}\in T\). In other words, we alternately increase or decrease the edge weights along the path. The weighted degrees of the internal nodes \(u_{2},\ldots,u_{m-1}\) thereby do not change, in contrast to those of \(u_{1}\) and \(u_{m}\). The weighted degree of \(u_{1}\) increases by 1, if \(u_{1}\in S\), and decreases by 1, if \(u_{1}\in T\). Regarding \(u_{m}\), its weighted degree increases by 1, if \(u_{m}\in T\), and decreases by 1, if \(u_{m}\in S\). When \(u_{1}=u_{m}\), there is no change on the weighted degree of this node. We repeat the described modification on \(\omega\) for all \(|F|\) paths provided by Lemma 4. Denote by \(\omega\) the resulting edge-weighting. As we found \(|F|\) edge-disjoint \(s\)-\(t\)-paths in the auxiliary network \(G_{C,F,\sigma}\), each arc starting at \(s\) and each arc arriving at \(t\) is included in exactly one of the \(s\)-\(t\)-paths on which we modified edge-weights. Looking at a vertex \(v_{i}\in S\), each of the arcs \((s,v_{i})\) is contained in an \(s\)-\(t\)-path \(p\), so for each arc \((s,v_{i})\) we increased the weighted degree of \(v_{i}\) by 1 when handling \(p\). Similarly, for each arc \((v_{i},t)\) we decreased the weighted degree of \(v_{i}\) by 1 when handling the \(s\)-\(t\)-path which contained that arc. The only exceptions are \(s\)-\(t\)-paths of the form \(\{s,v_{i},t\}\) or \(\{s,v_{i},\ldots,v_{i},t\}\) where we didn't make any weight modifications. But there, the arcs \((s,v_{i})\) and \((v_{i},t)\) cancel each other out. Summing up the changes on the weighted degree of \(v_{i}\), we deduce that it holds \[s_{\omega}(v_{i})=t(v_{i})+|\{w:(v_{i},w)\in\sigma\}|-|\{w:(w,v_{i})\in\sigma \}|=2\deg(v_{i})+2g(v_{i}).\] Vice versa, consider a vertex in \(v_{i}\in T\). By the same argument, for each arc \((s,v_{i})\) we decreased the weighted degree of \(v_{i}\) by 1 and for each arc \((v_{i},t)\) we increased the weighted degree of \(v_{i}\) by 1. Adding up the changes, we again have \[s_{\omega}(v_{i})=t(v_{i})+|\{w:(w,v_{i})\in\sigma\}|-|\{w:(v_{i},w)\in\sigma \}|=2\deg(v_{i})+2g(v_{i}).\] Putting everything together and plugging in the definition of \(s(v_{i})\) and then the definition of \(g(v_{i})\), we conclude that for each \(v_{i}\in V\) it holds \[|s_{\omega}(v_{i})+h(v_{i})-f(v_{i})|=|2\deg(v_{i})+2g(v_{i})+h(v_{i})-f(v_{i}) |=|2g(v_{i})+s(v_{i})-f(v_{i})|=1.\] With Lemma 3, we get an edge-weighting for \(E(B)\) and designated weights \(f(v)\) for the vertices \(v\in B\). For each blue node \(v\), some amount \(\alpha(v)\) of incident edge weights is missing to actually achieve its designated color with its weighted degree. This additional weight \(\alpha(v)\) will be gained via the edges between \(B\) and \(R\). With Lemma 5 below, we indeed find an edge-weighting for the subgraph \(G(R,B)\) where the weighted degree of each \(v\in B\) is \(\alpha(v)\). 
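Before stating Lemma 5, it may help to make the first half of the Lemma 3 construction concrete: the greedy choice of the designated colors \(f(v_{i})\). The following Python sketch mirrors that step; the graph representation, the function name, and the rule of taking the smallest admissible candidate are our own illustrative choices, and the flow-based weight adjustment from the second half of the proof is omitted.

```python
def designated_colors(order, adj, side, h):
    """Greedily choose designated colors f(v_i), following the proof of Lemma 3.

    order : list of vertices v_1, ..., v_n (an arbitrary ordering)
    adj   : dict mapping each vertex to the set of its neighbors in G
    side  : dict mapping each vertex to 'S' or 'T' (the two sides of a maximum cut)
    h     : dict of even offsets h(v)
    """
    index = {v: i for i, v in enumerate(order)}
    f = {}
    for i, v in enumerate(order):
        s_val = h[v] + 2 * len(adj[v])     # s(v_i) = h(v_i) + 2*deg(v_i), an even number
        k = sum(1 for u in adj[v] if index[u] < i and side[u] == side[v])
        # the 2k+2 odd values s(v_i)-2k-1, ..., s(v_i)-1, s(v_i)+1, ..., s(v_i)+2k+1
        candidates = range(s_val - 2 * k - 1, s_val + 2 * k + 2, 2)
        # keep only the residue class prescribed by the cut side (k+1 values remain)
        wanted = 1 if side[v] == 'S' else 3
        candidates = [c for c in candidates if c % 4 == wanted]
        # earlier neighbors block at most k candidates; earlier neighbors on the
        # other cut side lie in a different residue class and cannot block at all
        blocked = {f[u] for u in adj[v] if index[u] < i}
        f[v] = min(c for c in candidates if c not in blocked)
    return f
```

By construction, the returned values satisfy the three properties listed above, in particular \(|f(v_{i})-s(v_{i})|\leq 2k_{i}+1\).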
As discussed above, we also have to cover some uncomfortable cases where some vertices will be removed from the graph, whereupon the situation becomes more complex. In some of these situations, there will be a set \(R^{\prime}\subseteq R\) of vertices that should attain a weighted degree of odd parity (instead of even parity as expected). Moreover, for a few blue vertices \(v\in B\), \(\alpha(v)\) may be even. In some other cases, the edge-weighting is forced to satisfy some requirements on a fixed path \(p\), to guarantee that the weights can be further changed along the edges of \(p\) without destroying the entire structure. Therefore, Lemma 5 contains several additional properties that will help us to handle the exceptional cases. **Lemma 5**.: _Let \(G=(V,E)\) be a connected bipartite graph with parts \(B\) and \(R\). Let \(\alpha:B\to\mathbb{N}\) be a function such that \(\alpha(v)\in\{2\deg(v)-1,2\deg(v),2\deg(v)+1\}\) for all \(v\in B\). Moreover, let \(R^{\prime}\subseteq R\) such that \(|R^{\prime}|+\sum_{v\in B}\alpha(v)\) is even. Then there exists an edge-weighting \(\omega:E\to\{1,2,3\}\) such that_ 1. \(s_{\omega}(v)\) _is even for all_ \(v\in R\setminus R^{\prime}\)_,_ 2. \(s_{\omega}(v)\) _is odd for all_ \(v\in R^{\prime}\)_, and_ 3. \(s_{\omega}(v)=\alpha(v)\) _for all_ \(v\in B\)_._ _Moreover, let \(p=\{v_{1},\ldots,v_{k}\}\) be a fixed path with \(k\geq 3\) and \(v_{1},v_{k}\in B\). Then \(\omega\) satisfies in addition_ 4. \(\omega(\{v_{1},v_{2}\})\neq 1\) _if_ \(\alpha(v_{1})=2\deg(v_{1})+1\) _and_ \(\omega(\{v_{1},v_{2}\})\neq 3\) _otherwise,_ 5. \(\omega(\{v_{i-1},v_{i}\})+\omega(\{v_{i},v_{i+1}\})\in\{3,4,5\}\) _for each_ \(1<i<k\) _with_ \(v_{i}\in B\)_, and_ 6. \(\omega(\{v_{k-1},v_{k}\})\neq 1\) _if_ \(\alpha(v_{k})=2\deg(v_{k})+1\) _and_ \(\omega(\{v_{k-1},v_{k}\})\neq 3\) _otherwise._ Proof.: We start by setting the initial weight of all edges to \(2\). Let \(T\) be a spanning tree of \(G\) that includes the fixed path \(p\). We construct \(\omega\) by only changing the weights of a subset \(E_{o}\) of \(T\)-edges. Consider \(T\) as a rooted tree with arbitrary root \(r\) and denote for each \(v\neq r\) by \(par(v)\) its parent in the rooted tree. Let \(V_{o}:=R^{\prime}\cup\{v\in B:\alpha(v)\neq 2\deg(v)\}\) be the set of all vertices that shall obtain an odd weighted degree. Note that the assumption on \(R^{\prime}\) guarantees that \(|V_{o}|\) is even. While constructing the set \(E_{o}\), denote for each \(v\in V\) by \(E_{o}(v)\) the subset of edges from \(E_{o}\) that are incident to \(v\). We want to arrange the set \(E_{o}\) so that for all nodes \(v\in V\), \(|E_{o}(v)|\) is odd if and only if \(v\in V_{o}\). We start with the leaves of \(T\). For each leaf \(\ell\), put \(\{\ell,par(\ell)\}\) into \(E_{o}\) if and only if \(\ell\in V_{o}\). We then iterate to the internal nodes of \(T\) and repeat the idea: we consider each node \(v\) only after having handled all its children, and then decide whether we put \(\{v,par(v)\}\) into \(E_{o}\) or not, thereby always ensuring that \(|E_{o}(v)|\equiv 1\pmod{2}\) if and only if \(v\in V_{o}\). For the root \(r\), the argument does not work, since \(r\) has no parent node. However, because each edge from \(E_{o}\) contributes to two sets \(E_{o}(v)\), the sum \(\sum_{v\in V}|E_{o}(v)|\) must be even. 
Thus, \[0\pmod{2}\equiv\sum_{v\in V}|E_{o}(v)|\pmod{2}\equiv\big{(}|E_{o}(r)|+|V_{o }\setminus\{r\}|\big{)}\pmod{2},\] and since \(|V_{o}|\) is even, we see that also for the root \(r\), the value \(|E_{o}(r)|\) is odd if and only if \(r\in V_{o}\). We are now going to modify the weighting of \(E_{o}\). As the graph \(G\) is bipartite with parts \(R\) and \(B\), it is sufficient to only consider sets \(E_{o}(v)\) where \(v\in B\). For each \(v\in B\), we change the weights of the edges in \(E_{o}(v)\) according to the following rules. If \(\alpha(v)=2\deg(v)-1\), decrease the weights of \(\frac{1}{2}(|E_{o}(v)|+1)\) edges in \(E_{o}(v)\) to \(1\), and increase the weights of all other edges in \(E_{o}(v)\) to \(3\). Then indeed the weighted degree of \(v\) becomes \[2|N(v)-E_{o}(v)|+\tfrac{1}{2}(|E_{o}(v)|+1)+\tfrac{3}{2}(|E_{o}(v)|-1)=2|N(v)-E _{o}(v)|+2|E_{o}(v)|-1=\alpha(v).\] If \(v\) is not a vertex of the fixed path \(p\), we can distribute these weight modifications arbitrarily among \(E_{o}(v)\), otherwise there are some restrictions. In situations where \(v\) is an internal vertex of \(p\) and both incident edges are contained in \(E_{o}(v)\), we ensure that one of the two edges gets weight \(1\) and the other \(3\). If \(v=v_{1}\) is the starting vertex of \(p\) and \(\{v_{1},v_{2}\}\in E_{o}(v_{1})\), set \(\omega(\{v_{1},v_{2}\})=1\) and distribute the other weights (if there are any) arbitrarily among \(E_{o}(v)\). Similarly, if \(v=v_{k}\) is the ending vertex of \(p\) and \(\{v_{k-1},v_{k}\}\in E_{o}(v_{k})\), our only restriction is to put weight \(1\) on the edge \(\{v_{k-1},v_{k}\}\). If \(\alpha(v)=2\deg(v)+1\), decrease the weights of \(\frac{1}{2}(|E_{o}(v)|-1)\) edges of \(E_{o}(v)\) to \(1\), and increase all other weights of edges of \(E_{o}(v)\) to \(3\). Then, again it holds \(s_{\omega}(v)=\alpha(v)\). Similarly as above, ensure that if \(v\) is an internal node of \(p\), not both edges on \(p\) incident to \(v\) get the same odd weight. Moreover, if \(v\) is the starting or ending vertex of \(p\), ensure that the edge on \(p\) incident to \(v\) does not receive weight \(1\) if it is contained in \(E_{o}(v)\). Finally, if \(\alpha(v)=2\deg(v)\), \(|E_{o}(v)|\) is even and we assign the weight \(1\) to one half of the edges in \(E_{o}(v)\) and weight \(3\) to the other half, to assure \(s_{\omega}(v)=\alpha(v)\). Once more, if \(v\) is an internal vertex of \(p\), ensure that not both incident edges on \(p\) get the same odd weight. Furthermore, if \(v\) is the starting or ending vertex of \(p\), again take care of that the edge on \(p\) incident to \(v\) does not receive weight \(3\). With the described weight modifications, the edge-weighting \(\omega\) is defined and (iii)-(vi) are fulfilled. Regarding the red vertices, for each node \(v\in R\), \(|E_{o}(v)|\) is odd if and only if \(v\in R^{\prime}\), and exactly the incident edges that are contained in \(E_{o}(v)\) have been weighted with an odd value. Thus, \(\omega\) achieves properties (i) and (ii) as well. ## 3 Proof of Theorem 1 In Section 2, we prepared the proof with several auxiliary results. The plan is now to define an independent set \(R\) with Lemma 2, to define \(B:=V\setminus R\), to use Lemma 3 for finding an edge-weighting for \(G[B]\), and finally to apply Lemma 5 when extending the edge-weighting to the remaining edges. The crucial point behind this strategy is that the set \(B\) is required to have even cardinality, making the proof significantly more technical. 
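Before turning to the case distinction, it may help to isolate the parity-selection step at the heart of the proof of Lemma 5: given the spanning tree \(T\) and the set \(V_{o}\) of vertices that shall obtain odd parity, a single bottom-up pass over \(T\) selects the edge set \(E_{o}\). The following Python sketch mirrors that step; the data representation and function name are our own illustrative choices, and the subsequent assignment of weights \(1\) and \(3\) (including the special handling of the path \(p\)) is omitted.

```python
def parity_edge_selection(tree_adj, root, odd_targets):
    """Select tree edges E_o so that a vertex v is incident to an odd number of
    selected edges exactly when v lies in odd_targets (the set V_o).

    tree_adj    : dict mapping each vertex to its neighbors in the spanning tree T
    root        : the chosen root r of T
    odd_targets : the set V_o of vertices that shall obtain odd parity
    """
    assert len(odd_targets) % 2 == 0, "the counting argument needs |V_o| to be even"
    # iterative depth-first search: record parents and a preorder of the vertices
    parent, order, stack = {root: None}, [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in tree_adj[v]:
            if u not in parent:               # not yet visited
                parent[u] = v
                stack.append(u)
    selected = set()                          # the edge set E_o
    parity = {v: 0 for v in tree_adj}         # current value of |E_o(v)|
    for v in reversed(order):                 # children are handled before their parents
        if v == root:
            continue
        if (parity[v] % 2 == 1) != (v in odd_targets):
            selected.add(frozenset((v, parent[v])))
            parity[v] += 1
            parity[parent[v]] += 1
    return selected
```

The evenness of \(|V_{o}|\) — and hence, ultimately, the even cardinality of \(B\) — is exactly what makes the parity of the root work out in this routine.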
We therefore describe three basic situations regarding the sets \(R\) and \(B\), and, in some cases, one additional vertex \(v_{0}\notin R\cup B\). We demonstrate for each situation that a vertex-coloring edge-weighting with weights \(\{1,2,3\}\) can be constructed, always using Lemma 3 and Lemma 5 in combination. Afterwards in the actual proof of Theorem 1, we will show by a case distinction that for each graph, the problem can actually be reduced to one of the three basic situations. We use the following definition to annotate a partition \(V=R\cup B\) that achieves all required properties. **Definition 6**.: _Let \(G=(V,E)\) be a connected graph and let \(V=R\cup B\) be a partition of the vertex set into two disjoint subsets of red and blue nodes. We say that \((R,B)\) is a good \(R\)-\(B\)-partition of \(G\) if \(R\) is an independent set, the bipartite subgraph \(G(R,B)\) is connected, and \(|B|\equiv 0\pmod{2}\)._ Obviously, the ideal situation occurs when there is a good \(R\)-\(B\)-partition known for the entire graph. Lemma 7 shows how we then find a suitable edge-weighting. **Lemma 7**.: _Let \(G=(V,E)\) be a connected graph and let \(R\cup B\) be a good \(R\)-\(B\)-partition of \(G\). Then there exists an edge weighting \(\omega:E\to\{1,2,3\}\) such that the weighted degrees \(s_{\omega}\) yield a proper vertex coloring of \(G\) and such that \(s_{\omega}(v)\) is even if and only if \(v\in R\)._ Proof.: For all \(v\in B\), let \(h(v):=2\deg_{R}(v)\). We apply Lemma 3 to \(G[B]\) and to \(h\), and obtain a weighting \(\omega_{1}\) of \(E(B)\) together with a function \(f\) on \(B\), standing for the designated final weighted degrees of the nodes. Next, for each \(v\in B\) let \(\alpha(v):=f(v)-s_{\omega_{1}}(v)\) be the difference between the designated weighted degree and the already received incident edge weights. By Lemma 3 (ii), we know that \(\alpha(v)\in\{2\deg_{R}(v)-1,2\deg_{R}(v)+1\}\). Moreover, by putting \(R^{\prime}:=\emptyset\) and using the assumption that \(|B|\) is even, it follows that the value of \(|R^{\prime}|+\sum_{v\in B}\alpha(v)\) is even. We apply Lemma 5 to the bipartite subgraph \(G(R,B)\) and to \(\alpha\), without considering any path \(p\), to obtain a weighting \(\omega_{2}\) for \(G(R,B)\) where each vertex \(v\in R\) receives an even-valued weighted degree \(s_{\omega}(v):=s_{\omega_{2}}(v)\). Each vertex \(v\in B\) gets an additional weight of \(\alpha(v)\), hence combining the two weightings \(\omega_{1}\) and \(\omega_{2}\), for \(v\in B\) we have \(s_{\omega}(v):=s_{\omega_{1}}(v)+s_{\omega_{2}}(v)=f(v)\). By Lemma 3, \(f\) only attains odd values and for any two neighbors \(v,w\in B\) we have \(f(v)\neq f(w)\). Because \(R\) is an independent set, the weighted degrees \(s_{\omega}\) indeed properly color the vertices of the graph \(G\). In the next situation, there exists an additional vertex \(v_{0}\) which is not included in \(R\) or \(B\), i.e., we only have a good \(R\)-\(B\)-partition of \(G[V\setminus\{v_{0}\}]\). Consequently, the weighted degree of \(v_{0}\) must be even and coloring conflicts between \(v_{0}\) and its neighbors in \(R\) can arise. To solve these conflicts, we put odd weights on the edges between \(v_{0}\) and \(R\). Carefully choosing weights \(1\) or \(3\), we can ensure that the weighted degree of \(v_{0}\) is different from all of its neighbors. However, the argument only works when \(v_{0}\) has at least \(2\) neighbors in \(R\). 
Furthermore, if \(\deg_{R}(v_{0})\) is odd, \(v_{0}\) is required to have at least one neighbor in \(B\), as the respective edge is needed for making \(s_{\omega}(v_{0})\) even-valued. **Lemma 8**.: _Let \(G=(V,E)\) be a graph and let \(v_{0}\in V\) be a vertex such that \(G[V\setminus\{v_{0}\}]\) is connected. Moreover, let \((R,B)\) be a good \(R\)-\(B\)-partition of \(G[V\setminus\{v_{0}\}]\) such that \(\deg_{R}(v_{0})\geq 2\) and such that either \(\deg_{R}(v_{0})\) is even or \(\deg_{B}(v_{0})\geq 1\). Then there exists a vertex-coloring edge weighting \(\omega:E\to\{1,2,3\}\) of \(G\)._ Proof.: Let \(h(v):=2|N(v)\setminus B|\) for all \(v\in B\). We will construct the weighting \(\omega\) in three steps: a weighting \(\omega_{1}\) for \(E(B)\), a weighting \(\omega_{2}\) for the edges between \(R\) and \(B\), and a weighting \(\omega_{3}\) for the edges incident to \(v_{0}\). The final weighting \(\omega\) is then the combination of \(\omega_{1}\), \(\omega_{2}\), and \(\omega_{3}\). We first apply Lemma 3 to \(G[B]\) and \(h\) to obtain a weighting \(\omega_{1}\) of the edges set \(E(B)\), together with designated final weighted degrees \(f(v)\) for the blue nodes. By Lemma 3 (ii) and our choice of \(h\), for all \(v\in B\setminus N(v_{0})\) it holds \[f(v)-s_{\omega_{1}}(v)=h(v)\pm 1\in\{2\deg_{R}(v)+1,2\deg_{R}(v)-1\},\] whereas for all \(v\in N(v_{0})\cap B\) we have \[f(v)-s_{\omega_{1}}(v)=h(v)\pm 1\in\{2\deg_{R}(v)+3,2\deg_{R}(v)+1\}.\] Since the edges between \(v_{0}\) and \(R\) will receive an odd weight, we put \(R^{\prime}:=N(v_{0})\cap R\), taking care of that the weighted degrees of these nodes will be even-valued at the end. Next, for all \(v\in B\setminus N(v_{0})\) we let \(\alpha(v):=f(v)-s_{\omega_{1}}(v)\). Regarding \(N(v_{0})\cap B\), we distinguish two cases. * If \(|R^{\prime}|\) is even, we put \(\alpha(v):=f(v)-s_{\omega_{1}}(v)-2\) for all \(v\in N(v_{0})\cap B\). Note that by construction, it holds \(\alpha(v)\in\{2\deg_{R}(v)+1,2\deg_{R}(v)-1\}\). * If \(|R^{\prime}|\) is odd, then by assumption \(\deg_{B}(v_{0})>0\). Fix a vertex \(u_{0}\in N(v_{0})\cap B\) and set \(\alpha(u_{0}):=2\deg_{R}(u_{0})\). For all \(v\in B\cap N(v_{0})\setminus\{u_{0}\}\), let again \(\alpha(v):=f(v)-s_{\omega_{1}}(v)-2\). Because \(|B|\) is even, we ensured in both cases that \(|R^{\prime}|+\sum_{v\in B}\alpha(v)\) is even. We apply Lemma 5 to the graph \(G(R,B)\) and to \(\alpha\) (but without any path \(p\)) and obtain a weighting \(\omega_{2}\) for the bipartite graph such that for a vertex \(v\in R\), \(s_{\omega_{2}}(v)\) is odd if and only if \(v\in R^{\prime}\). Moreover, for each vertex \(v\in B\), it holds \(s_{\omega_{2}}(v)=\alpha(v)\). We now introduce the third weighting \(\omega_{3}\) for the edges that are incident to \(v_{0}\). For all \(v\in N(v_{0})\cap R\) put \(\omega_{3}(\{v,v_{0}\}):=1\). If \(\deg_{R}(v_{0})\) is odd, we have specified a distinct vertex \(u_{0}\in N(v_{0})\cap B\). Set \(\omega_{3}(\{v_{0},u_{0}\}):=f(u_{0})-s_{\omega_{1}}(u_{0})-s_{\omega_{2}}(u_{0})\) and observe that this value is indeed either \(1\) or \(3\). For all remaining edges \(e\) between \(v_{0}\) and \(B\), set \(\omega_{3}(e):=2\). In both of our cases, the value \(s_{\omega_{3}}(v_{0})\) thereby becomes even. We combine \(\omega:=\omega_{1}+\omega_{2}+\omega_{3}\) to a full edge-weighting of \(G\). For \(v\in B\) we then have \(s_{\omega}(v)=f(v)\), no matter whether \(v\) is connected to \(v_{0}\) or not. 
By Lemma 3, \(f\) attains only odd values on \(B\) and for any two neighbors \(v,w\in B\) we have \(f(v)\neq f(w)\). For \(v\in R\), \(s_{\omega_{2}}(v)\) is odd if and only if \(v\in R^{\prime}\), by Lemma 5. However, for all \(v\in R^{\prime}\) we set \(s_{\omega_{3}}(v)=1\), so \(s_{\omega}(v)=s_{\omega_{2}}(v)+s_{\omega_{3}}(v)\) is even again. Because \(s_{\omega}(v_{0})=s_{\omega_{3}}(v_{0})\) is even and \(R\) is an independent set, it remains to guarantee that there are no coloring conflicts between \(v_{0}\) and its neighbors in the set \(R\). Let \(N_{R}(v_{0}):=N(v_{0})\cap R=\{v_{1},\ldots,v_{k}\}\), where \(k\geq 2\) by assumption, and assume w.l.o.g. that \(s_{\omega}(v_{1})\leq s_{\omega}(v_{2})\leq\ldots\leq s_{\omega}(v_{k})\). If \(s_{\omega}(v_{i})\neq s_{\omega}(v_{0})\) for all \(1\leq i\leq k\), there are no coloring conflicts and we are done. Otherwise, we increase some edge-weights. Let \(x>0\) be the smallest integer such that \(s_{\omega}(v_{0})+2x\) is different from all values \(s_{\omega}(v_{1}),\ldots,s_{\omega}(v_{k})\), and let \(i^{\prime}\leq k\) be maximal such that \(s_{\omega}(v_{i^{\prime}})<s_{\omega}(v_{0})+2x\). Because at least one \(s_{\omega}(v_{i})\) is equal to \(s_{\omega}(v_{0})\), the index \(i^{\prime}\) is well-defined. First consider the case \(i^{\prime}\leq k-x\). For each \(v_{i}\in N_{R}(v_{0})\) with \(i>k-x\), increase the weight of \(\{v_{i},v_{0}\}\) from \(1\) to \(3\) and denote by \(\omega^{\prime}\) the resulting edge-weighting. Then, for \(i>k-x\geq i^{\prime}\), it holds \[s_{\omega^{\prime}}(v_{i})>s_{\omega}(v_{i})>s_{\omega}(v_{0})+2x,\] whereas for \(i\leq k-x\), we have \[s_{\omega^{\prime}}(v_{i})=s_{\omega}(v_{i})\neq s_{\omega}(v_{0})+2x.\] Hence, \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})+2x\) is different from \(s_{\omega^{\prime}}(v_{i})\) for all \(1\leq i\leq k\). Next, consider the case \(i^{\prime}>k-x\) and \(x<k\). We change the weight from \(1\) to \(3\) for all edges \(\{v_{0},v_{i}\}\) where \(v_{i}\in N_{R}(v_{0})\) and \(i\geq k-x\). Again, denote by \(\omega^{\prime}\) the resulting edge weighting. For \(i\geq k-x\) we then have \[s_{\omega^{\prime}}(v_{i})=s_{\omega}(v_{i})+2\neq s_{\omega}(v_{0})+2x+2,\] whereas for all \(i<k-x\) it holds \[s_{\omega^{\prime}}(v_{i})=s_{\omega}(v_{i})\leq s_{\omega}(v_{i^{\prime}})<s _{\omega}(v_{0})+2x.\] Thus, for all \(1\leq i\leq k\) we achieved \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})+2x+2\neq s_{\omega^{\prime}}(v_{ i})\). It remains the case \(x=k\). Here, for each \(0\leq y<k\), the value \(s_{\omega}(v_{0})+2y\) is attained by one \(s_{\omega}(v_{i})\). So we have \(s_{\omega}(v_{i})=s_{\omega}(v_{0})+2i-2\) for all \(1\leq i\leq k\). We only increase the weight of \(\{v_{0},v_{2}\}\) to \(3\). For the new edge-weighting \(\omega^{\prime}\), it holds \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})+2\), but \(s_{\omega^{\prime}}(v_{1})=s_{\omega}(v_{0})\), \(s_{\omega^{\prime}}(v_{2})=s_{\omega}(v_{2})+2=s_{\omega}(v_{0})+4\), and, for all \(i\geq 3\), \[s_{\omega^{\prime}}(v_{i})=s_{\omega}(v_{i})=s_{\omega}(v_{0})+2i-2\geq s_{ \omega}(v_{0})+4.\] The last of our three base situations is again given by a vertex \(v_{0}\) which is not contained in \(B\cup R\). In contrast to the setting of Lemma 8, this time we have \(\deg_{R}(v_{0})=1\). Here, a coloring-conflict can only appear between \(v_{0}\) and its single neighbor \(u_{0}\in R\). 
If this conflict occurs, we repair it by changing the weights along a cycle that includes \(v_{0}\), so that the weighted degree of \(v_{0}\) changes but that of \(u_{0}\) remains the same. **Lemma 9**.: _Let \(G=(V,E)\) be a graph and let \(v_{0}\in V\) be a vertex such that the induced subgraph \(G[V\setminus\{v_{0}\}]\) is connected. Let \((R,B)\) be a good \(R\)-\(B\)-partition of \(G[V\setminus\{v_{0}\}]\) and let \(u_{0}\in R\) such that \(N(v_{0})\cap R=\{u_{0}\}\). Suppose that there exists a non-trivial path in \(G(R,B)\) that starts and ends in \(N(v_{0})\cap B\) and does not include \(u_{0}\). Then there exists an edge weighting \(\omega:E\to\{1,2,3\}\) such that the weighted degrees \(s_{\omega}\) yield a proper vertex coloring of \(G\)._ Proof.: For all \(v\in B\), put \(h(v):=2|N(v)\setminus B|\). We apply Lemma 3 to \(G[B]\) and to \(h\), and receive a weighting \(\omega_{1}\) of \(E(B)\), together with a function \(f\) on \(B\) where \(f(v)-s_{\omega_{1}}(v)=h(v)\pm 1\) holds for all \(v\in B\). At the end, for each \(v\in B\) its weighted degree \(s_{\omega}(v)\) shall coincide with \(f(v)\). Let \(k\geq 3\) and let \(p=\{v_{1},v_{2},\ldots,v_{k}\}\) be a path in \(G(R,B)\) which does not have \(u_{0}\) as internal node and whose starting and endpoint vertices \(v_{1},v_{k}\) are in \(N(v_{0})\cap B\). Next, for all \(v\in N(v_{0})\cap B\) define \(\beta(v):=f(v)-s_{\omega_{1}}(v)-2\deg_{R}(v)\). Since \(v_{1}\) and \(v_{k}\) are connected to \(v_{0}\), we have \(\beta(v_{1}),\beta(v_{k})\in\{1,3\}\). In the next step, we define a subset \(R^{\prime}\subseteq R\) and a weighting \(\omega_{2}\) for the edges incident to \(v_{0}\), thereby considering two cases. 1. If \(\beta(v_{1})=\beta(v_{k})\), let \(R^{\prime}:=\emptyset\) and set \(\omega_{2}(e)=2\) for each edge \(e\) incident to \(v_{0}\). 2. If \(\beta(v_{1})\neq\beta(v_{k})\), assume w.l.o.g. that \(\beta(v_{1})=3\) and \(\beta(v_{k})=1\). Set \(\omega_{2}(\{v_{0},u_{0}\}):=3\) and \(\omega_{2}(\{v_{0},v_{1}\}):=3\). For all other edges \(e\) incident to \(v_{0}\) (including in particular \(\{v_{0},v_{k}\}\)), put \(\omega_{2}(e):=2\). Finally, let \(R^{\prime}:=\{u_{0}\}\). Moreover, for all \(v\in B\cap N(v_{0})\) we define \(\alpha(v):=f(v)-s_{\omega_{1}}(v)-s_{\omega_{2}}(v)\), whereas for all \(v\in B\setminus N(v_{0})\) we set \(\alpha(v):=f(v)-s_{\omega_{1}}(v)\). Note that for all blue vertices we have \(\alpha(v)\in\{2\deg_{R}(v)+1,2\deg_{R}(v)-1\}\), except for \(v_{1}\) in case (b) where \(\alpha(v_{1})=2\deg_{R}(v_{1})\). We claim that \(|R^{\prime}|+\sum_{v\in B}\alpha(v)\) is even in both cases (a) and (b). Indeed, in case (a), this follows because \(\alpha(v)\) is odd for all \(v\in B\), \(|B|\) is even, and \(R^{\prime}\) is empty. In case (b), it is true because \(|B|\) is still even, \(|R^{\prime}|\) is odd, but \(s_{\omega_{2}}(v_{1})\) is odd and thus \(\alpha(v_{1})\) is even. We apply Lemma 5 to \(G(R,B)\), \(\alpha\), and \(p\), and obtain a weighting \(\omega_{3}\) for the bipartite graph with various properties. Having \(\omega_{3}\) on hand, let \(\omega\) be the edge-weighting that combines \(\omega_{1}\), \(\omega_{2}\), and \(\omega_{3}\). For each vertex \(v\in R\), \(s_{\omega_{3}}(v)\) is odd if and only if \(v\in R^{\prime}\), implying that for all \(v\in R\setminus N(v_{0})\), \(s_{\omega}(v)=s_{\omega_{3}}(v)\) is even. 
Moreover, for the only vertex \(u_{0}\) in \(R\cap N(v_{0})\), \(s_{\omega}(u_{0})=s_{\omega_{2}}(u_{0})+s_{\omega_{3}}(u_{0})\) is even as well, in both cases (a) and (b). Therefore, \(s_{\omega}\) indeed attains even values on the red vertices. Regarding the blue vertices, we have \(s_{\omega_{3}}(v)=\alpha(v)\) for all \(v\in B\) and thus \(s_{\omega}(v)=s_{\omega_{1}}(v)+s_{\omega_{2}}(v)+s_{\omega_{3}}(v)=f(v)\). By Lemma 3, \(f\) only attains odd values on \(B\) and for any two neighbors \(v,w\in B\) we have \(f(v)\neq f(w)\). Altogether, under \(\omega\) a coloring conflict can only arise between \(v_{0}\) and \(u_{0}\). If \(s_{\omega}(v_{0})\neq s_{\omega}(u_{0})\), there is nothing more to do and \(s_{\omega}\) properly colors the vertices of \(G\). So assume that the two values are equal. Our goal is to create a modified edge-weighting \(\omega^{\prime}\) without coloring conflicts, by only changing the weights on \(p\), on \(\{v_{0},v_{1}\}\), and on \(\{v_{0},v_{k}\}\). We start with the edges that are incident to \(v_{1}\) or \(v_{k}\). There are three sub-cases. * If \(\beta(v_{1})=\beta(v_{k})=1\), we decrease the weights of \(\{v_{0},v_{1}\}\) and \(\{v_{0},v_{k}\}\) by \(1\), so we put \(\omega^{\prime}(\{v_{0},v_{1}\}):=\omega^{\prime}(\{v_{0},v_{k}\}):=1\). However, we also have \(\alpha(v_{1})=2\deg_{R}(v_{1})-1\) and \(\alpha(v_{k})=2\deg_{R}(v_{k})-1\), implying \(\omega(\{v_{1},v_{2}\})\neq 3\) and \(\omega(\{v_{k-1},v_{k}\})\neq 3\) by properties (iv) and (vi) of Lemma 5. Hence we can _increase_ the weights of those two edges by \(1\) in order to achieve that \(s_{\omega^{\prime}}(v_{1})=s_{\omega}(v_{1})\) and \(s_{\omega^{\prime}}(v_{k})=s_{\omega}(v_{k})\). At the same time, we have \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})-2\). * If \(\beta(v_{1})=\beta(v_{k})=3\), we put \(\omega^{\prime}(\{v_{0},v_{1}\}):=\omega^{\prime}(\{v_{0},v_{k}\}):=3\). Observe that here, it holds \(\alpha(v_{1})=2\deg_{R}(v_{1})+1\) and \(\alpha(v_{k})=2\deg_{R}(v_{k})+1\). By Lemma 5 (iv) and (vi), the weights of \(\{v_{1},v_{2}\}\) and \(\{v_{k-1},v_{k}\}\) are not \(1\), hence we can _decrease_ the weights of the two edges by \(1\) to keep the weighted degrees of \(v_{1}\) and \(v_{k}\) the same, whereas \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})+2\). * If \(\beta(v_{1})\neq\beta(v_{k})\), we assumed w.l.o.g. that \(\beta(v_{1})=3\) and \(\beta(v_{k})=1\). Recall that in this situation we put \(\omega_{2}(\{v_{0},v_{1}\})=3\) and \(\omega_{2}(\{v_{0},v_{k}\})=2\), implying \(\alpha(v_{1})=2\deg_{R}(v_{1})\) and \(\alpha(v_{k})=2\deg_{R}(v_{k})-1\). Then Lemma 5 yields \(\omega(\{v_{1},v_{2}\})\neq 3\) and \(\omega(\{v_{k-1},v_{k}\})\neq 3\). We now _decrease_ the weights of \(\{v_{0},v_{1}\}\) and \(\{v_{0},v_{k}\}\) both by \(1\) and _increase_ the weights of \(\{v_{1},v_{2}\}\) and \(\{v_{k-1},v_{k}\}\) by \(1\). Again, the weighted degrees of \(v_{1}\) and \(v_{k}\) remain the same, while \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})-2\). In all three cases we achieved \(s_{\omega^{\prime}}(v_{0})=s_{\omega}(v_{0})\pm 2\). It remains to consider the internal nodes of \(p\). So far, we changed the weighted degrees of \(v_{2}\) and \(v_{k-1}\) by \(+1\) or \(-1\) (or, if \(k=3\), we changed the weighted degree of \(v_{2}=v_{k-1}\) by \(+2\) or \(-2\)). For each internal node \(v_{i}\in B\) of \(p\) (if there are any), it holds \(\omega(\{v_{i-1},v_{i}\})+\omega(\{v_{i},v_{i+1}\})\in\{3,4,5\}\) by Lemma 5 (v). 
Therefore, there is always a choice to modify the weights of both \(\{v_{i-1},v_{i}\}\) and \(\{v_{i},v_{i+1}\}\) by \(1\) while the weighted degree of \(v_{i}\) remains the same. After these modifications, it holds \(|\omega^{\prime}(e)-\omega(e)|=1\) for each edge \(e\in p\). Because \(R\) is an independent set, the weighted degree of each internal node \(v\in R\) of \(p\) remains even and we did not create any new coloring conflicts. Since \(u_{0}\notin p\), it holds in particular \(s_{\omega^{\prime}}(u_{0})=s_{\omega}(u_{0})\), and we conclude that the coloring conflict between \(v_{0}\) and \(u_{0}\) has been solved by changing the edge weights along the cycle \(\{v_{0},v_{1},v_{2},\ldots,v_{k},v_{0}\}\). Before starting with the main proof, we need one last minor lemma. **Lemma 10**.: _Let \(G=(V,E)\) be a connected graph with minimum degree at least \(2\). Then there exist two vertices \(x,y\in V\) such that \(\{x,y\}\in E\) and \(G[V\setminus\{x,y\}]\) is connected._ Proof.: We do a depth-first search on \(G\) with arbitrary starting vertex \(r\) and consider a leaf \(x\) on the resulting search tree \(T\) with largest depth, i.e., with largest distance on \(T\) to the root \(r\) among all nodes. Let \(y\) be the parent of \(x\) in \(T\). We claim that \(G[V\setminus\{x,y\}]\) is connected. Clearly, removing \(x\) does not disconnect \(G\). All other children of \(y\) (if there are any) are leaves of \(T\) as well, due to our choice of \(x\). As they all have degree at least \(2\), they are all connected to at least one node of \(G\) other than \(y\). However, because we consider a depth-first search tree, any such neighbor must be an ancestor of \(y\) in \(T\). Hence, we can also remove \(y\) without destroying the connectivity of the remaining graph. Proof of Theorem 1.: Assume w.l.o.g. that \(G\) is connected and contains at least three vertices. The plan is to show that it is always possible to find a good \(R\)-\(B\)-partition of either the entire graph \(G\) or a large portion of it. Then the good \(R\)-\(B\)-partition should allow us to apply either Lemma 7, Lemma 8, or Lemma 9 to find a vertex-coloring edge-weighting. However, the fact that \(|B|\) needs to be even requires a careful preparation, leading to a subtle case distinction. We first assume that there exists \(x\in V\) with \(\deg(x)=1\). Let \(y\) be its only neighbor. Define \(V^{\prime}:=V\setminus\{x\}\) and \(G^{\prime}:=G[V^{\prime}]\). By Lemma 2 applied to \(G^{\prime}\) and to the trivial path \(p:=\{y\}\), there exists an independent set \(R\) such that \(y\in R\) and \(G(R,V^{\prime}\setminus R)\) is connected. Let \(B:=V^{\prime}\setminus R\). We distinguish two sub-cases. * If \(|B|\) is odd, we can add \(x\) to the set \(B\) and make \(|B|\) even. Afterwards, \((R,B)\) is a good \(R\)-\(B\)-partition of \(G\), so we can apply Lemma 7 directly to find a vertex-coloring edge-weighting \(\omega\) of \(G\). * If \(|B|\) is even, then \((R,B)\) is a good \(R\)-\(B\)-partition of \(G^{\prime}\). We apply Lemma 7 to \(G^{\prime}\) and receive a vertex-coloring edge-weighting \(\omega\) of \(G^{\prime}\) where \(s_{\omega}(y)\) is even. To extend \(\omega\) to the full edge set \(E\), we put \(\omega(\{x,y\}):=2\). Clearly, the weighted degree of \(y\) remains even. As \(R\) is an independent set, the only potential coloring conflict is between \(x\) and \(y\). 
But \(\deg(x)=1\) and \(\deg(y)>1\), hence \(s_{\omega}(x)=2\), \(s_{\omega}(y)>2\), and \(s_{\omega}\) is a proper vertex coloring of the full graph \(G\). For the remainder, we can assume that the minimum degree of \(G\) is at least \(2\). By Lemma 10, there are two vertices \(x,y\in V\) sharing an edge such that removing them does not disconnect the graph. We define \(V^{\prime\prime}:=V\setminus\{x,y\}\) and \(G^{\prime\prime}:=G[V^{\prime\prime}]\). We first study the special situation where \(\deg(x)=\deg(y)=2\). Let \(z_{x}\in N(x)\setminus\{y\}\) and \(z_{y}\in N(y)\setminus\{x\}\) be the two other neighbors of \(x\) and \(y\), where \(z_{x}=z_{y}\) is possible. We apply Lemma 2 to \(G^{\prime\prime}\) and to the trivial path \(p:=\{z_{x}\}\), thereby requiring \(z_{x}\in R\), and get an independent set \(R\) such that \(G(R,V\setminus R)\) is connected. We set \(B:=V^{\prime\prime}\setminus R\) and distinguish four different sub-cases. * If \(|B|\) is even and \(z_{y}\in R\), we add both \(x\) and \(y\) to \(B\). \(|B|\) remains even, \((B,R)\) is a good \(R\)-\(B\)-partition, and with Lemma 7, we find an edge-weighting \(\omega\) without coloring conflicts. * If \(|B|\) is even and \(z_{y}\in B\) (and thus \(z_{x}\neq z_{y}\)), we put \(y\) into \(R\), so both neighbors of \(x\) are in \(R\) while \(R\) remains an independent set. By Lemma 8 applied to \(G\) and \(v_{0}:=x\), there exists a vertex-coloring \(\omega\) without coloring conflicts. * If \(|B|\) is odd and \(z_{y}\in B\), we put \(x\) into \(B\) and \(y\) into \(R\) to obtain a good \(R\)-\(B\)-partition of \(G\). Then we are done with Lemma 7. * If \(|B|\) is odd but \(z_{y}\in R\), we create an auxiliary graph \(\tilde{G}=(\tilde{V},\tilde{E})\) by adding an additional vertex \(w\) and setting \(\tilde{V}:=V^{\prime\prime}\cup\{w\}\) and \(\tilde{E}:=E(V^{\prime\prime})\cup\{\{w,z_{x}\}\}\). After adding \(w\) to \(B\), \(|B|\) becomes even. We apply Lemma 7 to \(\tilde{G}\) and get a vertex-coloring edge-weighting \(\tilde{\omega}\) for \(\tilde{G}\) where \(s_{\tilde{\omega}}(v)\) is even if and only if \(v\in R\). Note that \(\tilde{\omega}(\{w,z_{x}\})\) must be odd since \(w\in B\). Going back to the original graph \(G\), we set \(\omega(e):=\tilde{\omega}(e)\) for all edges \(e\in E(V^{\prime\prime})\). Moreover, let \(\omega(\{y,z_{y}\}):=2\) and \(\omega(\{x,z_{x}\}):=1\), making \(s_{\omega}(z_{x})\) and \(s_{\omega}(z_{y})\) even and thus keeping \(V^{\prime\prime}\) conflict-free. Finally, if \(s_{\omega}(z_{x})=2\), we put \(\omega(\{x,y\}):=3\), in all other cases we set \(\omega(\{x,y\}):=1\). Then \(s_{\omega}(x)\) is even, \(s_{\omega}(y)\) is odd, and \(s_{\omega}(x)\neq s_{\omega}(z_{x})\), so \(s_{\omega}\) is indeed vertex-coloring. It remains to handle the situations where \(\deg(x)+\deg(y)>4\). We assume w.l.o.g. that \(\deg(x)>2\) and again define specific vertices \(z_{x}\) and \(z_{y}\), this time together with a \(z_{x}\)-\(z_{y}\)-path \(p\). Let \(z_{x}\in N(x)\setminus\{y\}\) and \(z_{y}\in N(y)\setminus\{x\}\), but now we require in addition that either \(z_{x}=z_{y}\) (then \(p\) would be the trivial path), or \(\{x,z_{y}\}\notin E\), \(\{y,z_{x}\}\notin E\), and there exists a shortest \(z_{x}\)-\(z_{y}\)-path \(p\) in \(G^{\prime\prime}\) without internal node that is connected to \(x\) or \(y\). The path \(p\) is only needed for solving a few exceptional sub-cases. 
To obtain the independent set \(R\), we apply Lemma 2 to \(G^{\prime\prime}\) and to the path \(p\), this time requiring \(z_{x}\notin R\). Having received the independent set \(R\), we set \(B:=V^{\prime\prime}\setminus R\), thus \(z_{x}\in B\). Moreover, by Lemma 2 (ii), the path \(p\) is alternating between \(B\) and \(R\). We start with the cases where \(|B|\) is odd. First, we assume that at most one vertex of \(x\) and \(y\) has neighbors in \(R\). * If \(\deg_{R}(x)=0\), no matter whether \(\deg_{R}(y)=0\) or not, we put \(x\) into \(R\) and \(y\) into \(B\), consequently making \(|B|\) even and keeping \(R\) an independent set. Moreover, the graph \(G(R,B)\) is connected thanks to the edge \(\{x,y\}\). Hence, we have a good \(R\)-\(B\)-partition of \(G\) and with Lemma 7 we directly find a suitable edge-weighting \(\omega\). * If \(\deg_{R}(y)=0\) and \(\deg_{R}(x)\neq 0\), we put \(y\) into \(R\) and \(x\) into \(B\). Again, we find a suitable edge-weighting \(\omega\) with Lemma 7. Next, suppose that \(|B|\) is odd, \(\deg_{R}(x)\geq 1\), \(\deg_{R}(y)\geq 1\), and \(\deg_{R}(x)+\deg_{R}(y)\geq 3\). * If \(\deg_{R}(x)\geq 2\), put \(y\) into \(B\). Then the bipartite subgraph \(G(R,B)\) is connected and \(\deg_{B}(x)\geq 1\), so we can apply Lemma 8 to \(G\) and to \(v_{0}:=x\), no matter whether \(\deg_{R}(x)\) is even or odd, and find a suitable edge-weighting \(\omega\). * If \(\deg_{R}(y)\geq 2\), the same argument as above applies by exchanging \(x\) and \(y\). Note that here, the assumption \(z_{x}\in B\) does not have any impact. We now assume that \(|B|\) is odd, \(\deg_{R}(x)=\deg_{R}(y)=1\), and \(x\) and \(y\) actually have a joint neighbor \(q\) in \(R\). * If \(N(x)\cap N(y)\cap R=\{q\}\), we create an auxiliary graph \(\tilde{G}=(\tilde{V},\tilde{E})\) as follows. Let \(\tilde{V}:=V\cup\{w\}\), where \(w\) is an additional vertex, and set \(\tilde{E}:=E\setminus\{\{x,y\},\{x,q\}\}\cup\{\{w,y\}\}\). Furthermore, put \(x\) and \(w\) into \(R\) and \(y\) into \(B\). Because we removed the edge \(\{x,q\}\), the set \(R\) is an independent set. Moreover, thanks to \(y\), \(|B|\) becomes even, thus \((R,B)\) is a good \(R\)-\(B\)-partition of \(\tilde{G}\). We apply Lemma 7 to \(\tilde{G}\) and receive a vertex-coloring edge-weighting \(\tilde{\omega}\) such that \(s_{\tilde{\omega}}\) attains odd values exactly on \(B\). Because \(\deg_{\tilde{G}}(w)=1\), we have \(\tilde{\omega}(\{w,y\})=2\). We now construct an edge-weighting \(\omega\) for the original graph \(G\) as follows. For each edge \(e\in E\cap\tilde{E}\), we let \(\omega(e)=\tilde{\omega}(e)\). On the two edges \(\{x,q\}\) and \(\{x,y\}\), we put weight \(2\). We observe that \(s_{\omega}(y)\) has now the same value in \(G\) as \(s_{\omega^{\prime}}(y)\) had in \(\tilde{G}\). Moreover, \(s_{\omega}(q)\) is still even, hence the only potential coloring conflict is between \(x\) and \(q\). If \(s_{\omega}(x)\neq s_{\omega}(q)\), we are done. Otherwise, let \(\mu:=\omega(\{y,q\})\). If \(\mu=2\), we can modify \(\omega\) by resetting \(\omega(\{y,q\}):=3\), \(\omega(\{x,y\}):=1\), and \(\omega(\{x,q\}):=1\). The effect is that the weighted degrees of \(y\) and \(q\) remained the same whereas \(s_{\omega}(x)\) decreased by \(2\), so we resolved the coloring conflict between \(q\) and \(x\). If \(\mu\) is odd, we adapt \(\omega\) by resetting \(\omega(\{x,y\}):=\mu\), \(\omega(\{x,q\}):=\mu\), and \(\omega(\{y,q\}):=2\). 
Thereby, the weighted degree of \(x\) changes by \(\pm 2\) whereas all other weighted degrees remain the same. Thus, in both situations, the coloring conflict is resolved. Under the assumption that \(|B|\) is odd, it remains to handle the situations where \(x\) and \(y\) both have exactly one neighbor in \(R\) but not a joint neighbor therein. Denote by \(z^{\prime}_{x}\) the neighbor of \(x\) in \(R\). Recall that in \(G^{\prime\prime}\), there exists a \(B\)-\(R\)-alternating path \(p\) from \(z_{x}\) to \(z_{y}\) without internal vertices that are connected to \(x\) or \(y\). * If \(z_{y}\in R\), put \(y\) into \(B\) and observe that \((R,B)\) is a good \(R\)-\(B\)-partition of \(G[V\setminus\{x\}]\). Let \(p^{\prime}:=p\cup\{y,z_{y}\}\). Then the preconditions of Lemma 9 are satisfied with \(v_{0}:=x\), \(u_{0}:=z^{\prime}_{x}\), and \(p^{\prime}\), hence we find a vertex-coloring edge-weighting \(\omega\). * If \(z_{y}\in B\), let \(T\) be a spanning tree of \(G(R,B)\) containing \(p\). By Lemma 2 (i), \(G(R,B)\) is connected, so indeed this spanning tree exists. Let \(z^{\prime}_{y}\neq z^{\prime}_{x}\) be the neighbor of \(y\) in \(R\). By construction, the path \(p\) does not contain \(z^{\prime}_{x}\) or \(z^{\prime}_{y}\) as internal node. Therefore, on \(T\) there exists a \(z_{y}\)-\(z^{\prime}_{x}\)-path without \(z^{\prime}_{y}\) as internal vertex, or there exists a \(z_{x}\)-\(z^{\prime}_{y}\)-path without \(z^{\prime}_{x}\) as internal vertex. Assume w.l.o.g. that we are in the first situation and denote by \(p^{\prime}\) our \(z_{y}\)-\(z^{\prime}_{x}\)-path. Define \(p^{\prime\prime}:=p^{\prime}\cup\{x,z^{\prime}_{x}\}\), put \(x\) into \(B\), and observe that by setting \(v_{0}:=y\) and \(u_{0}:=z^{\prime}_{y}\), with the path \(p^{\prime\prime}\) all preconditions of Lemma 9 are fulfilled. Again, there exists an edge-weighting \(\omega\) that properly colors the vertices of \(G\). We now turn to cases where \(|B|\) is even. For the remainder of the proof, we don't need the path \(p\) anymore, but we still use the vertex \(z_{x}\in N(x)\cap B\). We start with the sub-cases where \(\deg_{R}(x)\geq 1\). * If \(\deg_{R}(y)\geq 1\), we simply put \(x\) and \(y\) to \(B\) to get a good \(R\)-\(B\)-partition of \(G\). We directly find \(\omega\) with Lemma 7 applied to \(G\). * If \(\deg_{R}(y)=0\), we add \(y\) to \(R\) and achieve \(\deg_{R}(x)\geq 2\). Thanks to \(z_{x}\), \(\deg_{B}(x)\) is at least \(1\), allowing us to use Lemma 8 with \(v_{0}:=x\) no matter whether \(\deg_{R}(x)\) is even or odd. Again, we find again a vertex-coloring edge-weighting \(\omega\). Finally, we are left with the situations where \(|B|\) is even and the set \(N(x)\cap R\) is empty. Due to the assumption that \(\deg(x)>2\), we have \(\deg_{B}(x)\geq 2\). We distinguish the following four sub-cases. * If \(\deg_{R}(y)=0\), we add \(y\) to \(R\). Let \(z_{x}^{\prime}\in N(x)\setminus\{y,z_{x}\}\). Since \(G(R,B)\) is connected, there exists a \(z_{x}\)-\(z_{x}^{\prime}\)-path \(p^{\prime}\) in \(G^{\prime\prime}\) that alternates between \(B\) and \(R\). We apply Lemma 9 to \(G\), to the fixed path \(p^{\prime}\), to \(v_{0}:=x\), and to \(u_{0}:=y\) and see that there exists an edge-weighting \(\omega\) without coloring conflicts. * If \(y\) has both neighbors in \(R\) and in \(B\), we add \(x\) to \(R\). 
Then \(\deg_{R}(y)\geq 2\) and \(\deg_{B}(y)\geq 1\), so we can apply Lemma 8 with \(v_{0}:=y\), no matter whether \(\deg_{R}(y)\) is even or odd, and find a vertex-coloring edge-weighting \(\omega\). * If \(\deg_{B}(y)=0\) and \(\deg(y)\) is even, we put \(x\) into \(R\). Observe that the value of \(\deg_{R}(y)\) must be even in this sub-case. Thus, by Lemma 8 applied to \(v_{0}:=y\), there is an edge-weighting \(\omega\) with the desired properties. * Lastly, if \(\deg_{B}(y)=0\) and \(\deg(y)\) is odd (and thus at least \(3\)), we have to be careful. Note that here, we have \(N(x)\cap N(y)=\emptyset\). We create an auxiliary graph \(\tilde{G}\) as follows. Let \(\tilde{E}:=E\setminus\{\{x,y\},\{y,z_{y}\}\}\cup\{\{x,z_{y}\}\}\) and let \(\tilde{G}:=(V,\tilde{E})\). We keep \(R\) as it is, but we add both \(x\) and \(y\) to \(B\). Observe that \(\tilde{G}\) is connected, because the degree of \(y\) was sufficiently large. Moreover, \((R,B)\) is a good \(R\)-\(B\)-partition of \(\tilde{G}\) because \(x\in B\) is connected to \(R\) thanks to the edge \(\{x,z_{y}\}\). With Lemma 7 we then obtain a vertex-coloring edge-weighting \(\tilde{\omega}\) for \(\tilde{G}\). Note that since \(y\in B\), \(s_{\tilde{\omega}}(y)\) is odd, so there exists at least one edge \(\tilde{e}\) incident to \(y\) where \(\tilde{\omega}(\tilde{e})\) is odd. This edge \(\tilde{e}\) will be helpful below. Let \(\mu:=\tilde{\omega}(\{x,z_{y}\})\). We use \(\tilde{\omega}\) to construct a weighting \(\omega\) for \(G\) as follows. We set \(\omega(\{x,y\}):=\omega(\{y,z_{y}\}):=\mu\), whereas for all edges \(e\in E\cap\tilde{E}\), we keep the edge-weight from \(\tilde{\omega}\). Consequently, for all \(v\in V\setminus\{y\}\) we have \(s_{\omega}(v)=s_{\tilde{\omega}}(v)\), whereas \(s_{\omega}(y)=s_{\tilde{\omega}}(y)+2\mu\) remains odd. Because \(N(y)\cap B=\{x\}\), the only potential coloring conflict is between the two nodes \(x\) and \(y\). However, if \(x\) and \(y\) attain the same weighted degree, we can replace the weight of edge \(\tilde{e}\) by the other possible odd value. Then the conflict vanishes, and since the other endpoint of \(\tilde{e}\) lies in the independent set \(R\) of even-weighted nodes, we did not create any new conflicts. Regardless of the structure of the graph around \(x\) and \(y\), we have seen by an exhaustive case distinction that it is always possible to reduce the situation to one of the three basic situations that have been covered by Lemma 7, Lemma 8, and Lemma 9. Hence, for each connected graph \(G=(V,E)\) with \(|V|\geq 3\), the presented approach leads to a vertex-coloring edge-weighting \(\omega\) using only weights from \(\{1,2,3\}\). ## 4 Concluding remarks With this paper, we present a constructive solution to the 1-2-3 Conjecture that is built on top of the ideas from [18] and uses the independent set \(R\) of red nodes as an additional key ingredient. We mentioned in Section 1 that the decision problem whether there exists a vertex-coloring edge-weighting only from \(\{1,2\}\) is NP-hard [11]. In our proof, the critical step in terms of algorithmic complexity is to find a maximum cut \(C=(S,T)\) in the proof of Lemma 3. Finding a maximum cut is NP-hard, but our strategy can be adjusted as follows. Instead of starting directly with a maximum cut, we could begin with an arbitrary cut. Then we find either a sufficiently large flow, or a strictly larger cut, as demonstrated in the proof of Lemma 4 in [18]. 
By repeating the argument, at some point we get a cut \(C^{\prime}=(S^{\prime},T^{\prime})\) that leads to a suitable flow, because the size of the cut cannot become arbitrarily large. This should actually happen in polynomial time. We therefore believe that, using the ideas of our proof, a polynomial-time algorithm for the construction of a suitable edge-weighting can be derived.
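To illustrate why the cut sizes cannot grow forever, the following Python routine computes a cut that no single-vertex move can enlarge; every improvement step increases the number of cut edges by at least one, so at most \(|E|\) moves are performed. This is only our own illustration of the "strictly larger cut" iteration sketched above — it is not the flow argument from [18], and whether this simpler local-improvement step suffices for Lemma 3 would have to be checked against that argument.

```python
def locally_maximal_cut(vertices, adj):
    """Return a cut (S, T) that no single-vertex move can enlarge.

    vertices : iterable of vertices
    adj      : dict mapping each vertex to the set of its neighbors
    """
    side = {v: 0 for v in vertices}            # start with an arbitrary (here: trivial) cut
    improved = True
    while improved:
        improved = False
        for v in vertices:
            same = sum(1 for u in adj[v] if side[u] == side[v])
            other = len(adj[v]) - same
            if same > other:                   # moving v to the other side enlarges the cut
                side[v] = 1 - side[v]
                improved = True
    S = {v for v in vertices if side[v] == 0}
    T = {v for v in vertices if side[v] == 1}
    return S, T
```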
2303.01318
A Continuous-Time Stochastic Process for High-Resolution Network Data in Sports
Technological advances have paved the way for collecting high-resolution network data in basketball, football, and other team-based sports. Such data consist of interactions among players of competing teams indexed by space and time. High-resolution network data are vital to understanding and predicting the performance of teams, because the performance of a team is more than the sum of the strengths of its individual players: Whether a collection of players forms a strong team depends on the strength of the individual players as well as the interactions among the players. We introduce a continuous-time stochastic process as a model of interactions among players of competing teams indexed by space and time, discuss basic properties of the continuous-time stochastic process, and learn the stochastic process from high-resolution network data by pursuing a Bayesian approach. We present simulation results along with an application to Juventus Turin, Inter Milan, and other football clubs in the premier Italian soccer league.
Nicholas Grieshop, Yong Feng, Guanyu Hu, Michael Schweinberger
2023-03-02T14:49:52Z
http://arxiv.org/abs/2303.01318v2
# A Continuous-Time Stochastic Process for High-Resolution Network Data in Sports ###### Abstract Technological advances have paved the way for collecting high-resolution tracking data in basketball, football, and other team-based sports. Such data consist of interactions among players of competing teams indexed by space and time. High-resolution tracking data on interactions among players are vital to understanding and predicting the performance of teams, because the performance of a team is more than the sum of the strengths of its individual players. We introduce a continuous-time stochastic process as a model of interactions among players of competing teams indexed by space and time, discuss properties of the continuous-time stochastic process, and learn the stochastic process from high-resolution tracking data by pursuing a Bayesian approach. We present an application to Juventus Turin, Inter Milan, and other Italian football clubs. Continuous-time stochastic processes; Relational event data; Soccer games; Spatio-temporal data; Sport analytics. ## 1 Introduction Sport analytics has witnessed a surge of interest in the statistics community (Albert et al., 2017), driven by technological advances that have paved the way for collecting high-resolution tracking data in basketball, football, and other team-based sports. Traditional sport analytics has focused on predicting match outcomes based on summary statistics (Dixon and Coles, 1997; Karlis and Ntzoufras, 2003; Baio and Blangiardo, 2010; Cattelan et al., 2013). In more recent times, the advent of high-resolution tracking data has expanded the role of statistics in sport analytics (Albert et al., 2017) and has enabled granular evaluations of player and team performance (Cervone et al., 2014; Franks et al., 2015; Cervone et al., 2016; Wu and Bornn, 2018; Hu et al., 2023), along with in-game strategy evaluations (Fernandez and Bornn, 2018; Sandholtz et al., 2020). High-resolution tracking data fall into two categories: optical player- and ball-tracking data obtained from video footage collected by multiple cameras in sport arenas, and data collected by wearable devices. A number of recent papers have used high-resolution tracking data to evaluate the defensive strength of teams (Franks et al., 2015), construct a dictionary of play types (Miller and Bornn, 2017), assess the expected value of ball possession in basketball (Cervone et al., 2016; Santos-Fernandez et al., 2022), and construct deep generative models of spatio-temporal trajectory data (Santos-Fernandez et al., 2022). As a case in point, we focus on soccer--that is, European football. Soccer is a fast-paced sport that generates high-resolution network data indexed by space and time. To the best of our knowledge, the network of interactions among soccer players has not been studied before, which may not be surprising: The statistical analysis of soccer data poses unique challenges. First, scoring a goal in a soccer match is a rare event and useful predictors are hard to come by: e.g., a soccer team may score 0, 1 or 2 goals in a typical match and scoring a goal requires a sequence of complex interactions among players of two competing teams. Second, soccer teams consist of more players and the interactions among soccer players are more complex than in basketball and other team-based sports. Third, the fact that soccer teams are larger than teams in many other team-based sports (e.g., basketball teams) implies that the actions of players on the court need to be coordinated. 
To facilitate coordination, each soccer team adopts a formation, which assigns each player in the team to a specific position (e.g., goal keeper, striker). Two popular formations of soccer teams, known as 4-4-2 and 3-5-2, are shown in Figure 1. The chosen formation affects the defensive and offensive strategy of a soccer team and can hence affect the outcome of a match. In addition, players may have different roles in different formations, and the formations of teams may change during matches. Last, but not least, changes in the compositions of teams due to substitutions of players, changes in the formations of teams (e.g., from 4-4-2 to 3-5-2), and goals can affect the pace of a match: e.g., a team on track to winning a match may decrease its pace and play more defensively, while its opponent may increase its pace and play more offensively to change the outcome of the match in its favor. We take first steps toward addressing the lack of a comprehensive statistical analysis of soccer data by introducing a continuous-time stochastic process to model entire soccer matches, including: * which player controls the ball and for how long, and how ball control depends on the player's attributes (including the player's position in the team's formation and the player's spatial position on the court); * whether a change in ball control is a failure (i.e., the ball is lost to the opposing team) or a success (i.e., the ball remains within the team in control of the ball), and how the probability of a failure or a success depends on attributes of players; * which player succeeds in securing control of the ball, depending on the player's attributes; * whether a team on track to winning a match decreases its pace and plays more defensively, while its opponent increases its pace and plays more offensively to change the outcome of the match in its favor; * unobserved attributes of players that may affect ball control and interactions among players. 
Figure 1: Two popular formations of soccer teams, known as 4-4-2 and 3-5-2. The abbreviations of player positions are detailed in Appendix A. 
**Comparison with literature.** In contrast to the literature on basketball and other team-based sports, we do not focus on individual summaries, such as the expected ball possession of individual players (e.g., Cervone et al., 2016; Santos-Fernandez et al., 2022). Instead, we focus on ball control; whether a change in ball control is a failure (i.e., the ball is lost to the opposing team) or a success (i.e., the ball remains within the team in control of the ball); and who passes the ball to whom. The proposed stochastic modeling framework may be closest to relational event models (e.g., Butts, 2008; Perry and Wolfe, 2013), although relational event models neglect the asymmetry of interactions in soccer and other team-based sports (e.g., a single player is in control of the ball and can initiate events, e.g., passes); the fact that two teams compete with each other and that events of interest (e.g., passes) can be failures or successes; and the fact that interactions among players depend on the positions of players in the formations of the teams and the spatial locations of players on the court. **Structure of paper.** The remainder of the paper is organized as follows. We first introduce the data that motivated the proposed stochastic modeling framework (Section 2) and then introduce the stochastic modeling framework (Section 3). 
A Bayesian approach to learning the proposed stochastic modeling framework from high-resolution tracking data is described in Section 4, and an application to the motivating data is presented in Section 5. We conclude with an extended discussion of open questions and directions for future research in Section 6. ## 2 Data To motivate the proposed stochastic modeling framework, we leverage data provided by Hudl & Wyscout ([https://footballdata.wyscout.com/](https://footballdata.wyscout.com/)). The data set consists of 380 matches played by 20 teams in Serie A, which is the top league of the Italian football league system. We focus on the 2020/21 season. The data consist of high-resolution tracking data of a number of actions, including positional coordinates at the start and end of each action as well as its outcome: e.g., which player is in control of the ball and for how long; when the player in control of the ball passes the ball; whether a pass is a failure in that the ball is lost to the opposing team or whether a pass is a success; which player secures control of the ball; and where the players are located on the court when passes take place. Figure 2 shows a subset of the data: passes between the players of Juventus Turin (with 4-4-2 formation) and Inter Milan (with 3-5-2 formation). These data are based on the home games of Juventus Turin versus AC Milan and Inter Milan versus AC Milan in 2020/21. The figure reveals that passes depend on the formations of teams. Figure 2(a) shows that the midfield players and defenders of Juventus Turin (with 4-4-2 formation) dominate ball control. By contrast, strikers do not control the ball all too often, but are key to scoring goals and hence winning matches. Figure 2(b) reveals that the midfield players of Inter Milan (with 3-5-2 formation) likewise dominate ball control. In addition, the right wing of Inter Milan plays an important role in Inter Milan's 3-5-2 formation, by passing the ball to the strikers and in so doing helping the team launch counterattacks straight out of the back. Other descriptive summaries, including detailed information on the formations and players of Juventus Turin, Inter Milan, and other soccer clubs in Serie A, are presented in Appendices B-D. 
Figure 2: The numbers of passes between the positions of (a) Juventus Turin (with 4-4-2 formation) and (b) Inter Milan (with 3-5-2 formation). These data are based on the home games of (a) Juventus Turin versus AC Milan and (b) Inter Milan versus AC Milan in 2020/21. The 4-4-2 and 3-5-2 formations are shown in Figure 1. The sizes of the positions are proportional to the numbers of passes, while the widths of the edges are proportional to the numbers of passes between the positions. 
## 3 Continuous-time stochastic process We introduce a continuous-time stochastic process to model soccer matches starting at time \(t_{0}\coloneqq 0\) and stopping at time \(T\in[90,+\infty)\). Soccer matches involve two competing teams. Each team consists of 11 players and can substitute up to 5 players during a match, effective 2022/23. Let \(\mathcal{T}_{1,t}\) be the set of players of one of the two teams and \(\mathcal{T}_{2,t}\) be the set of players of the opposing team at time \(t\in[0,\,T)\). The two sets \(\mathcal{T}_{1,t}\) and \(\mathcal{T}_{2,t}\) are disjoint, that is, \(\mathcal{T}_{1,t}\,\cap\,\mathcal{T}_{2,t}=\{\}\) for all \(t\in[0,\,T)\). 
The compositions of the two teams \(\mathcal{T}_{1,t}\) and \(\mathcal{T}_{2,t}\) can change during a match, because players may be injured; players may be substituted; and the referee may remove players from the court due to violations of rules. We consider changes in the compositions of \(\mathcal{T}_{1,t}\) and \(\mathcal{T}_{2,t}\) to be exogenous. ### Generic continuous-time stochastic process We introduce a generic continuous-time stochastic process that captures salient features of soccer matches. **Scoring goals: rare events.** We focus on who is in control of the ball, whether a change in ball control is a failure or a success, and who secures control of the ball, but we do not model the process of scoring goals. While scoring goals is important for winning matches, the event of scoring a goal is a rare event and useful predictors are hard to come by, because scoring a goal requires a sequence of complex interactions among players of two competing teams. We leave the construction of models for scoring goals to future research and focus here on ball control and interactions among players, which are important for scoring goals and winning matches. **Ball control and interactions among players.** We first describe a generic continuous-time stochastic process. We then introduce a specification of the generic continuous-time stochastic process in Section 3.2 and discuss properties of the continuous-time stochastic process in Section 3.3. A generic continuous-time stochastic process of a soccer match starting at time \(t_{0}\coloneqq 0\) and stopping at time \(T\in[90,+\infty)\) takes the following form: 1. At time \(t_{0}\coloneqq 0\), the referee starts the match. The player who secures control of the ball at time \(t_{0}\) is chosen at random from the set \(\mathcal{T}_{1,t_{0}}\,\cup\,\mathcal{T}_{2,t_{0}}\) and is denoted by \(i_{1}\). 2. At time \(t_{m}\coloneqq t_{m-1}+h_{m}\) (\(m=1,2,\dots\)), the ball passes from player \(i_{m}\in\mathcal{T}_{1,t_{m}}\,\cup\,\mathcal{T}_{2,t_{m}}\) to player \(j_{m}\in\mathcal{T}_{1,t_{m}}\,\cup\,\mathcal{T}_{2,t_{m}}\,\setminus\,\{i_{m}\}\), where \(h_{m}\sim\text{Exponential}(\lambda_{i_{m}})\) and \(i_{m}=j_{m-1}\) (\(m=2,3,\dots\)). The process of passing the ball from player \(i_{m}\) to player \(j_{m}\) is decomposed as follows: (a) The change in ball control is either a failure (indicated by \(S_{i_{m}}=0\)) in that player \(i_{m}\) loses the ball to a player of the opposing team, or is a success (indicated by \(S_{i_{m}}=1\)) in that \(i_{m}\) succeeds in passing the ball to a player of \(i_{m}\)'s own team. (b) Conditional on \(S_{i_{m}}\in\{0,1\}\), player \(i_{m}\) cedes control of the ball to player \(j_{m}\in\mathcal{T}_{1,t_{m}}\cup\mathcal{T}_{2,t_{m}}\setminus\,\{i_{m}\}\), indicated by \(i_{m}\to j_{m}\). 3. The referee stops the match at time \(T\in[90,+\infty)\). We consider the decision of the referee to stop the match to be exogenous, so that the stopping time \(T\in[90,+\infty)\) of the match is non-random. In practice, soccer matches last 90 minutes, but disruptions of matches due to injuries and substitutions of players may result in overtime. ### Specification of continuous-time stochastic process We introduce a specification of the generic continuous-time stochastic process introduced in Section 3.1, by specifying the distributions of the holding times \(h_{m}\), the success probabilities \(\mathbb{P}(S_{i_{m}}=s_{i_{m}})\), and the pass probabilities \(\mathbb{P}(i_{m}\to j_{m}\mid S_{i_{m}}=s_{i_{m}})\). 
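Before fixing a parametric form for these three ingredients, it is worth noting that the generic process of Section 3.1 can already be simulated once rates, success probabilities, and pass distributions are supplied. The Python sketch below is purely illustrative: the rates and probabilities are made-up placeholders rather than quantities estimated from the Wyscout data, the pass distributions are taken to be uniform over eligible receivers, and substitutions and formation changes are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

n_players = 22                        # players 0-10: team 1, players 11-21: team 2
team = np.array([0] * 11 + [1] * 11)
T = 90.0                              # stopping time of the match (minutes)

# Illustrative placeholders, NOT estimates from the Wyscout data:
lam = np.full(n_players, 2.0)         # holding-time rates lambda_i (events per minute)
p_success = np.full(n_players, 0.8)   # P(S_i = 1): the ball stays within player i's team

def pass_target(i, success):
    """Draw the receiver j_m: a teammate of i on a success, an opponent on a failure."""
    if success:
        candidates = [j for j in range(n_players) if team[j] == team[i] and j != i]
    else:
        candidates = [j for j in range(n_players) if team[j] != team[i]]
    return int(rng.choice(candidates))  # uniform stand-in for P(i -> j | S_i)

def simulate_match():
    t = 0.0
    i = int(rng.integers(n_players))    # step 1: kickoff holder chosen at random
    events = []
    while True:
        h = rng.exponential(1.0 / lam[i])   # h_m ~ Exponential(lambda_i)
        if t + h >= T:                      # step 3: the referee stops the match at T
            break
        t += h
        s = bool(rng.random() < p_success[i])  # step 2(a): success indicator S_{i_m}
        j = pass_target(i, s)                  # step 2(b): receiver j_m
        events.append((round(t, 2), i, j, int(s)))
        i = j                                  # the receiver holds the ball next
    return events

events = simulate_match()
print(f"{len(events)} changes of ball control; first three: {events[:3]}")
```

Replacing the constant rates and the uniform pass distribution with the covariate-driven forms introduced next yields the specified process of this section.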
The properties of the resulting continuous-time stochastic process are discussed in Section 3.3. Throughout, we denote by \(\mathcal{I}_{m}\) the team of player \(i_{m}\) in control of the ball at time \(t_{m}\), that is, \(\mathcal{I}_{m}\coloneqq\mathcal{T}_{1,t_{m}}\) if \(i_{m}\in\mathcal{T}_{1,t_{m}}\) and \(\mathcal{I}_{m}\coloneqq\mathcal{T}_{2,t_{m}}\) otherwise. **Holding time distributions.** A natural specification of the holding time distributions is \[h_{m}\ \stackrel{\text{\tiny ind}}{\sim}\ \text{Exponential}(\lambda_{i_{m}}).\] To allow the rate \(\lambda_{i_{m}}\in(0,+\infty)\) of player \(i_{m}\)'s holding time \(h_{m}\) to depend on observed attributes of \(i_{m}\) (e.g., the position of \(i_{m}\) in the team's formation and the spatial location of \(i_{m}\) on the court), we assume that \[\lambda_{i_{m}}(\boldsymbol{\omega})\ \coloneqq\ \exp(\boldsymbol{\omega}^{\top}\boldsymbol{c}_{i_{m}}),\] where \(\boldsymbol{\omega}\in\mathbb{R}^{p}\) is a vector of \(p\) parameters and \(\boldsymbol{c}_{i_{m}}\in\mathbb{R}^{p}\) is a vector of \(p\) observed attributes of player \(i_{m}\). **Success probabilities.** The probability of a successful pass \(\{S_{i_{m}}=1\}\) by player \(i_{m}\) can be specified by a logit model: \[\text{logit}(\mathbb{P}_{\boldsymbol{\alpha},\boldsymbol{\eta}}(S_{i_{m}}=1))\ \coloneqq\ \boldsymbol{\alpha}^{\top}\boldsymbol{x}_{1,i_{m}}+\eta_{1,i_{m}},\] where \(\boldsymbol{\alpha}\in\mathbb{R}^{d_{1}}\) is a vector of \(d_{1}\) parameters and \(\boldsymbol{x}_{1,i_{m}}\in\mathbb{R}^{d_{1}}\) is a vector of \(d_{1}\) observed attributes of player \(i_{m}\). The random effect \(\eta_{1,i_{m}}\in\mathbb{R}\) captures the effect of unobserved attributes of player \(i_{m}\) on the success probability. **Pass probabilities.** The conditional probability of event \(\{i_{m}\to j_{m}\}\), given \(\{S_{i_{m}}=0\}\), can be specified by a multinomial logit model: \[\mathbb{P}_{\boldsymbol{\beta},\boldsymbol{\eta}}(i_{m}\to j_{m}\mid S_{i_{m}}=0)\ \coloneqq\ \left\{\begin{array}{ll}\frac{\exp(\boldsymbol{\beta}^{\top}\boldsymbol{x}_{2,i_{m},j_{m}}+\eta_{2,j_{m}})}{\sum_{j\,\not\in\,\mathcal{I}_{m}}\exp(\boldsymbol{\beta}^{\top}\boldsymbol{x}_{2,i_{m},j}+\eta_{2,j})}&\mbox{if $j_{m}\not\in\mathcal{I}_{m}$}\\ 0&\mbox{if $j_{m}\in\mathcal{I}_{m}$},\end{array}\right.\] where \(\boldsymbol{\beta}\in\mathbb{R}^{d_{2}}\) is a vector of \(d_{2}\) parameters and \(\boldsymbol{x}_{2,i_{m},j}\in\mathbb{R}^{d_{2}}\) is a vector of \(d_{2}\) observed attributes of players \(i_{m}\) and \(j\). The random effect \(\eta_{2,j}\in\mathbb{R}\) captures the effect of unobserved attributes of player \(j\) on the conditional probability of securing control of the ball. Along the same lines, the conditional probability of event \(\{i_{m}\to j_{m}\}\), given \(\{S_{i_{m}}=1\}\), can be specified by a multinomial logit model: \[\mathbb{P}_{\boldsymbol{\gamma},\boldsymbol{\eta}}(i_{m}\to j_{m}\mid S_{i_{m}}=1)\ \coloneqq\ \left\{\begin{array}{ll}0&\mbox{if $j_{m}\not\in\mathcal{I}_{m}$}\\ \frac{\exp(\boldsymbol{\gamma}^{\top}\boldsymbol{x}_{3,i_{m},j_{m}}+\eta_{3,j_{m}})}{\sum_{j\,\in\,\mathcal{I}_{m}\setminus\{i_{m}\}}\exp(\boldsymbol{\gamma}^{\top}\boldsymbol{x}_{3,i_{m},j}+\eta_{3,j})}&\mbox{if $j_{m}\in\mathcal{I}_{m}$},\end{array}\right.\] where \(\boldsymbol{\gamma}\in\mathbb{R}^{d_{3}}\) is a vector of \(d_{3}\) parameters and \(\boldsymbol{x}_{3,i_{m},j}\in\mathbb{R}^{d_{3}}\) is a vector of \(d_{3}\) observed attributes of players \(i_{m}\) and \(j\). 
The random effect \(\eta_{3,j}\in\mathbb{R}\) captures the effect of unobserved attributes of player \(j\) on the conditional probability of securing control of the ball. **Random effects.** Let \(\boldsymbol{\eta}_{i}\coloneqq(\eta_{1,i},\,\eta_{2,i},\,\eta_{3,i})\in\mathbb{R}^{3}\) and assume that \[\boldsymbol{\eta}_{i}\mid\boldsymbol{\Sigma}\ \stackrel{\mbox{\tiny{iid}}}{\sim}\ \mathrm{MVN}_{3}(\boldsymbol{0}_{3},\,\boldsymbol{\Sigma}),\] where \(\boldsymbol{0}_{3}\in\mathbb{R}^{3}\) is the three-dimensional null vector and \(\boldsymbol{\Sigma}\in\mathbb{R}^{3\times 3}\) is a positive-definite variance-covariance matrix. ### Properties of continuous-time stochastic process We discuss properties of the continuous-time stochastic process specified in Sections 3.1 and 3.2. Throughout Section 3.3, we suppress the notational dependence of all quantities on the parameters \(\boldsymbol{\alpha},\,\boldsymbol{\beta},\,\boldsymbol{\gamma},\,\boldsymbol{\omega},\,\boldsymbol{\Sigma}\) and the random effects \(\boldsymbol{\eta}_{1},\boldsymbol{\eta}_{2},\ldots\). We assume that the continuous-time stochastic process satisfies two assumptions: A.1. In a time interval of length \(t_{1}>0\), the compositions of teams \(\mathcal{T}_{1,t}\) and \(\mathcal{T}_{2,t}\) are constant in the sense that \(\mathcal{T}_{1,t}\equiv\mathcal{T}_{1}\) and \(\mathcal{T}_{2,t}\equiv\mathcal{T}_{2}\) for all \(t\in[0,\,t_{1})\) and the \(22\) players of the two teams are labeled \(1,\ldots,22\). A.2. The rates \(\lambda_{i}\), success probabilities \(\mathbb{P}(S_{i}=s_{i})\), and pass probabilities \(\mathbb{P}(i\to j\mid S_{i}=s_{i})\) depend on attributes of players and teams, but do not depend on time \(t\in[0,\,t_{1})\). Assumption A.1 states that the compositions of the teams do not change in a time interval of positive length. When the compositions of the teams do change, the continuous-time stochastic process restarts, although the properties of the stochastic process may change as a result of changes in the compositions of the teams. Assumption A.2 ensures that the continuous-time stochastic process is time-homogeneous in a time interval of positive length. While it can be dropped, the understanding of time-inhomogeneous stochastic processes is less developed than the understanding of time-homogeneous processes (e.g., Stroock, 2014, Section 6.5.3). **Proposition 1**.: _Consider the continuous-time stochastic process described in Sections 3.1 and 3.2 satisfying Assumptions A.1 and A.2. The stochastic process is a right-continuous and time-homogeneous Markov process \(\{Y(t),\,t\in[0,\,t_{1})\}\) with finite state space \(\mathcal{Y}\coloneqq\{1,\ldots,22\}\), where the state \(Y(t)\in\mathcal{Y}\) of the Markov process at time \(t\) indicates which player is in control of the ball at time \(t\). The initial distribution of the Markov process at time \(t_{0}\coloneqq 0\) is the Uniform distribution on \(\mathcal{Y}\) and the elements \(q_{i,j}\) of the generator matrix \(\boldsymbol{Q}\in\mathbb{R}^{|\mathcal{Y}|\times|\mathcal{Y}|}\) of the Markov process are of the form_ \[q_{i,j} \coloneqq \left\{\begin{array}{ll}\lambda_{i}\,\,\mathbb{P}(S_{i}=0)\,\,\mathbb{P}(i\to j\mid S_{i}=0)&\mbox{if $i\neq j$ and $j\not\in\mathcal{I}_{i}$}\\ \lambda_{i}\,\,\mathbb{P}(S_{i}=1)\,\,\mathbb{P}(i\to j\mid S_{i}=1)&\mbox{if $i\neq j$ and $j\in\mathcal{I}_{i}$}\\ -\lambda_{i}&\mbox{if $i=j$},\end{array}\right.\] _where \(\mathcal{I}_{i}\) denotes the team of player \(i\in\mathcal{Y}\). 
Consider any \(t\in[0,\,t_{1})\) and any \(h\in[0,\,t_{1})\). Then, for all \((i,j)\in\mathcal{Y}^{2}\), conditional on \(\{Y(t)=i\}\), the event \(\{Y(t+h)=j\}\) is independent of \(\{Y(s),\,s\leq t\}\) and, as \(h\downarrow 0\), the conditional probability of event \(\{Y(t+h)=j\}\) given \(\{Y(t)=i\}\) is_ \[\mathbb{P}(Y(t+h)=j\mid Y(t)=i) = \delta_{i,j}+q_{i,j}\,h+o(h),\] _where \(\delta_{i,j}\) is defined by \(\delta_{i,j}\coloneqq 1\) if \(i=j\) and \(\delta_{i,j}\coloneqq 0\) if \(i\neq j\)._ The proposition follows from the construction of the continuous-time stochastic process in Sections 3.1 and 3.2--including the memoryless property of the independent Exponential holding time distributions--along with Assumptions A.1 and A.2 and Theorem 2.8.2 of Norris (1997, p. 94). The proposition reveals that the continuous-time stochastic process focuses on ball control and who passes the ball to whom, by specifying the rates \(q_{i,j}\) of passing the ball between pairs of players \((i,j)\in\mathcal{Y}^{2}\). It is worth noting that the stochastic modeling framework is not limited to Markov and time-homogeneity assumptions. Both the Markov and time-homogeneity assumptions can be removed by specifying model terms that induce non-Markovian behavior and time-heterogeneity. ## 4 Bayesian learning We pursue a Bayesian approach to learning the stochastic modeling framework introduced in Section 3 from high-resolution tracking data. A Bayesian approach is well-suited to online learning, that is, updating the knowledge about the parameters \(\boldsymbol{\alpha},\,\boldsymbol{\beta},\,\boldsymbol{\gamma},\,\boldsymbol{ \omega},\,\boldsymbol{\Sigma}\) and the random effects \(\boldsymbol{\eta}_{1},\boldsymbol{\eta}_{2},\dots\) as soon as additional data points roll in. To demonstrate, consider two teams of interest and let \(\boldsymbol{x}_{1}\coloneqq(h_{1,m},\,i_{1,m},\,j_{1,m})_{m=1}^{M_{1}}\) be the outcome of the first match of the two teams (with \(M_{1}\geq 1\) passes) and \(\boldsymbol{x}_{2}\coloneqq(h_{2,m},\,i_{2,m},\,j_{2,m})_{m=1}^{M_{2}}\) be the outcome of the second match of the two teams (with \(M_{2}\geq 1\) passes). To ease the presentation, assume that the compositions of the two teams do not change during the first and second match; that the 22 players of the two teams are labeled \(1,\dots,22\); and that the random effects are denoted by \(\boldsymbol{\eta}\coloneqq(\boldsymbol{\eta}_{1},\dots,\boldsymbol{\eta}_{22})\). In addition, assume that the outcomes of the first and second match \(\boldsymbol{x}_{1}\) and \(\boldsymbol{x}_{2}\) satisfy \[\pi(\boldsymbol{x}_{1},\,\boldsymbol{x}_{2}\mid\boldsymbol{\alpha },\,\boldsymbol{\beta},\,\boldsymbol{\gamma},\,\boldsymbol{\omega},\, \boldsymbol{\eta}) = \pi(\boldsymbol{x}_{1}\mid\boldsymbol{\alpha},\,\boldsymbol{\beta},\,\boldsymbol{\gamma},\,\boldsymbol{\omega},\,\boldsymbol{\eta})\] \[\times \pi(\boldsymbol{x}_{2}\mid\boldsymbol{\alpha},\,\boldsymbol{\beta },\,\boldsymbol{\gamma},\,\boldsymbol{\omega},\,\boldsymbol{\eta},\, \boldsymbol{x}_{1}),\] where \(\pi\) denotes a generic probability density function. 
The conditional probability density function \(\pi(\boldsymbol{x}_{1}\mid\boldsymbol{\alpha},\,\boldsymbol{\beta},\, \boldsymbol{\gamma},\,\boldsymbol{\omega},\,\boldsymbol{\eta})\) is of the form \[\pi(\boldsymbol{x}_{1}\mid\boldsymbol{\alpha},\,\boldsymbol{\beta },\,\boldsymbol{\gamma},\,\boldsymbol{\omega},\,\boldsymbol{\eta})\;=\;\prod_{ m=1}^{M_{1}}\,\left[\lambda_{i_{1,m}}(\boldsymbol{\omega})\,\exp(-\lambda_{i_{1,m}}( \boldsymbol{\omega})\,h_{1,m})\right.\] \[\times \left.\mathbb{P}_{\boldsymbol{\alpha},\boldsymbol{\eta}}(S_{i_{1,m}}=s_{i_{1,m}})\right.\] \[\times \left.\mathbb{P}_{\boldsymbol{\beta},\boldsymbol{\eta}}(i_{1,m} \to j_{1,m}\mid S_{i_{1,m}}=0)^{1(S_{i_{1,m}}=\,0)}\right.\] \[\times \left.\mathbb{P}_{\boldsymbol{\gamma},\boldsymbol{\eta}}(i_{1,m} \to j_{1,m}\mid S_{i_{1,m}}=1)^{1(S_{i_{1,m}}=\,1)}\right]\] \[\times \exp\left(-\lambda_{i_{1,M_{1}+1}}(\boldsymbol{\omega})\,\left( T_{1}-\sum_{k=1}^{M_{1}}h_{1,k}\right)\right),\] assuming that the start time \(t_{0}\coloneqq 0\) and the stopping time \(T_{1}\in[90,+\infty)\) of the match are determined by the referee and are both non-random. The function \(\mathbb{1}(.)\) is an indicator function, which is \(1\) if its argument is true and is \(0\) otherwise. The conditional probability density function \(\pi(\mathbf{x}_{2}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{ \eta},\,\mathbf{x}_{1})\) is of the same form as \(\pi(\mathbf{x}_{1}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{ \eta})\), but is based on \(M_{2}\) passes rather than \(M_{1}\) passes and can depend on the outcome of the first match \(\mathbf{x}_{1}\). The posterior of \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma},\,\mathbf{\eta}\) based on the outcome of the first match \(\mathbf{x}_{1}\) is proportional to \[\begin{array}{rcl}\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega}, \,\mathbf{\Sigma},\,\mathbf{\eta}\mid\mathbf{x}_{1})&\propto&\pi(\mathbf{x}_{1}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\eta})\\ &\times&\pi(\mathbf{\eta}\mid\mathbf{\Sigma})\;\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{ \gamma},\,\mathbf{\omega},\,\mathbf{\Sigma}),\end{array}\] where \(\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma})\) denotes the prior of \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma}\). The prior of \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma}\) used in Section 5 is described in Appendix E. 
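For concreteness, the single-match density above is straightforward to evaluate once the covariates entering \(\lambda_{i}(\boldsymbol{\omega})\), the logit success model, and the multinomial logit pass model have been assembled. The Python sketch below is only a schematic translation of the display above (the paper's own computations use Hamiltonian Monte Carlo in Stan); the three model components are passed in as user-supplied callables, so covariate construction and random effects are assumed to live inside them, and the final term accounts for the censored holding time after the last pass.

```python
import numpy as np

def log_softmax_prob(scores, chosen):
    """Log multinomial-logit probability of the chosen index among candidate scores."""
    scores = np.asarray(scores, dtype=float)
    m = scores.max()
    return scores[chosen] - m - np.log(np.exp(scores - m).sum())

def match_log_lik(events, T, lam_rate, success_logit, pass_scores):
    """
    Log-likelihood of one match, mirroring the display above.

    events:           list of (h, i, j, s): holding time, passer, receiver, success flag
    T:                non-random stopping time of the match
    lam_rate(i):      lambda_i(omega) = exp(omega' c_i) for the current ball holder
    success_logit(i): alpha' x_{1,i} + eta_{1,i}
    pass_scores(i,s): (list of eligible receivers, array of their linear scores) given S_i = s
    Assumes at least one pass; covariates and random effects are handled by the callables.
    """
    ll, elapsed = 0.0, 0.0
    for h, i, j, s in events:
        lam = lam_rate(i)
        ll += np.log(lam) - lam * h                    # Exponential holding-time density
        p1 = 1.0 / (1.0 + np.exp(-success_logit(i)))   # logit success probability
        ll += np.log(p1) if s == 1 else np.log(1.0 - p1)
        cands, scores = pass_scores(i, s)              # multinomial logit over receivers
        ll += log_softmax_prob(scores, cands.index(j))
        elapsed += h
        holder = j
    ll += -lam_rate(holder) * (T - elapsed)            # censored final holding time
    return ll
```

Summing such terms over matches gives the log-likelihood that a sampler or optimizer would work with.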
As soon as the outcome of the second match \(\mathbf{x}_{2}\) is observed, the knowledge about \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma},\,\mathbf{\eta}\) in light of \(\mathbf{x}_{2}\) can be updated as follows: \[\begin{array}{rcl}&\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega}, \,\mathbf{\Sigma},\,\mathbf{\eta}\mid\mathbf{x}_{1},\,\mathbf{x}_{2})\\ \propto&\pi(\mathbf{x}_{1},\,\mathbf{x}_{2}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma}, \,\mathbf{\omega},\,\mathbf{\eta})\;\pi(\mathbf{\eta}\mid\mathbf{\Sigma})\;\pi(\mathbf{\alpha},\, \mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma})\\ \propto&\pi(\mathbf{x}_{2}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega}, \,\mathbf{\eta},\,\mathbf{x}_{1})\;\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{ \omega},\,\mathbf{\Sigma},\,\mathbf{\eta}\mid\mathbf{x}_{1}).\end{array}\] In other words, as soon as the outcome of the second match \(\mathbf{x}_{2}\) is observed, we can update the knowledge about \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma},\,\mathbf{\eta}\) in light of \(\mathbf{x}_{2}\) via \(\pi(\mathbf{x}_{2}\mid\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{ \eta},\,\mathbf{x}_{1})\), with the knowledge about \(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma},\,\mathbf{\eta}\) prior to the second match \(\mathbf{x}_{2}\) being quantified by \(\pi(\mathbf{\alpha},\,\mathbf{\beta},\,\mathbf{\gamma},\,\mathbf{\omega},\,\mathbf{\Sigma},\,\mathbf{ \eta}\mid\mathbf{x}_{1})\), the posterior based on the outcome of the first match \(\mathbf{x}_{1}\). As a result, a Bayesian approach is a natural approach to updating knowledge about the stochastic modeling framework as additional data points roll in. More than two teams with changing compositions can be handled, and multiple matches in parallel. To approximate posteriors, we use Hamiltonian Monte Carlo implemented in R package rstan(Stan Development Team, 2023). ## 5 Application We apply the stochastic modeling framework introduced in Section 3 to data on matches of selected soccer teams during the 2020/21 season: * Juventus Turin (Juventus F.C.; 15,832 observations); * Inter Milan (Internazionale Milano; 13,564 observations); * Crotone (Crotone S.r.l.; 8,125 observations); * Fiorentina (ACF Fiorentina; 8,107 observations). Juventus Turin and Inter Milan belong to the most storied Italian soccer clubs, while Crotone and Fiorentina were mid- and low-level teams during the 2020/21 season, respectively. The numbers of observations mentioned above refer to the total numbers of passes during the 2020/21 season, aggregated over all matches played by the selected teams with the dominant formation. The selected teams have in common that all of them were proficient users of the 4-4-2 formation (Juventus Turin) or the 3-5-2 formation (Inter Milan, Crotone, Fiorentina). Additional descriptive statistics are presented in Section 2 and Appendices B-D. 
We use the following specification of the stochastic modeling framework described in Sections 3.1 and 3.2: * The Exponential model of the holding times \(h_{m}\) uses the following covariates: indicators of who is in control of the ball (11 indicators for 11 positions) and indicators of whether the player's team is on track to winning or losing the match (i.e., whether the player's team has scored at least one more or one less goal than its opponent). * The logit model of the probability of a successful pass \(\{S_{i_{m}}=1\}\) uses the following covariates, in addition to an intercept: the length of the pass in terms of two-dimensional Euclidean distance; an indicator of whether player \(i_{m}\) initiates the pass in the opposing team's half of the court and an indicator of whether the ball ends up in the opposing team's third of the court; an indicator of whether the pass is a forward pass; an indicator of whether the pass is an air pass; indicators of whether the player's team is on track to winning or losing the match (i.e., whether the player's team has scored at least one more or one less goal than its opponent); and a position-specific random effect. * The multinomial logit model of the conditional probability of event \(\{i_{m}\to j_{m}\}\), given \(\{S_{i_{m}}=1\}\), uses the following predictors: the graph distance between players \(i_{m}\) and \(j_{m}\)--defined as the length of the shortest path between \(i_{m}\) and \(j_{m}\)--based on the nearest-neighbor graph shown in Figure 3; the number of times \(j_{m}\) received the ball before the \(m\)-th pass; and a position-specific random effect. 
Figure 3: The nearest-neighbor graph, which connects pairs of positions that are considered to be nearest neighbors on the court. The graph distance between a pair of positions is the length of the shortest path between them. The abbreviations of player positions are detailed in Appendix A. 
We focus here on all matches involving the four mentioned teams with the dominant formation, but we do not use the data of the opposing teams. As a consequence, we do not specify the conditional probabilities of events \(\{i_{m}\to j_{m}\}\) given \(\{S_{i_{m}}=0\}\). In addition, note that the random effects are position-specific rather than player-specific, because the data do not include complete information about which position is filled by which player. Tables 1 and 2 present posterior summaries of the parameters of the stochastic modeling framework, based on the 2020/21 matches of Fiorentina, Crotone, and Inter Milan (with 3-5-2 formation) and Juventus Turin (with 4-4-2 formation). Among other things, these results suggest that all teams reduce the pace of passing the ball when being on track to winning a match. By contrast, when being on track to losing a match, Juventus Turin and Inter Milan reduce the pace, whereas Fiorentina and Crotone do not. The difference suggests that the strategies of Juventus Turin and Inter Milan for dealing with adverse situations deviate from the strategies of Fiorentina and Crotone. There is an additional observation suggesting that Juventus Turin and Inter Milan deviate from the others: Starting a pass in the opponent's half of the court does not increase the probability of a successful pass among Fiorentina and Crotone players, but it does increase the probability of a successful pass among Juventus Turin and Inter Milan players. 
The increase in the probability of a successful pass in the opponent's half of the court may be due to the offensive strength of Juventus Turin and Inter Milan. To conclude, the top teams (e.g., Juventus Turin, Inter Milan) may be able to control the pace of a match with more ease than other teams and may have other strategies for dealing with adverse situations, e.g., by relying on their offensive strength in the opponent's half of the court. Among the position-specific effects, it is worth noting that the length of time the goal keeper controls the ball tends to be lower than the length of time other positions control the ball. This observation makes sense, because the goal keeper has an incentive to remove the ball from the penalty area as soon as possible, so that the opposing team cannot gain control of the ball in the penalty area and score an easy goal. ## 6 Discussion: open questions and directions for future research We have introduced a continuous-time stochastic modeling framework for soccer matches. While the proposed stochastic modeling framework is tailored to soccer matches, many components of the framework can be adapted to other team-based sports (e.g., basketball and football), though some of the components will have to be tailored to the specific application at hand. We view the proposed stochastic modeling framework as a first step to modeling soccer matches and other team-based sports as space- and time-indexed network processes and hope that it will stimulate more research in an area inundated by high-resolution tracking data. To stimulate future research, we conclude with an extended discussion of open questions and directions for future research. \begin{table} \begin{tabular}{l c c c c c c} & \multicolumn{2}{c}{**Fiorentina**} & \multicolumn{2}{c}{**Crotone**} & \multicolumn{2}{c}{**Inter Milan**} \\ & M & CI & M & CI & M & CI \\ \hline **Successful passes** & \(\{S_{i_{m}}=1\}\)**:** & & & & & \\ Intercept & 2.93 & (2.47, 3.39) & 3.27 & (2.85, 3.68) & 3.34 & (2.82, 3.86) \\ Length of pass & 0.00 & (-0.01, 0.00) & 0.00 & (-0.01, 0.00) & 0.00 & (0.00, 0.01) \\ Forward pass & -0.57 & (-0.74, -0.40) & -0.88 & (-1.07, -0.70) & -0.84 & (-0.99, -0.70) \\ Start: half & 0.17 & (-0.02, 0.36) & -0.03 & (-0.23, 0.17) & 0.29 & (0.12, 0.46) \\ End: third & -0.67 & (-0.85, -0.49) & -0.64 & (-0.84, -0.45) & -0.79 & (-0.96, -0.62) \\ Air pass & -1.76 & (-1.93, -1.59) & -1.90 & (-2.07, -1.73) & -1.84 & (-1.98, -1.70) \\ Winning & -0.13 & (-0.30, 0.04) & -0.25 & (-0.46, -0.04) & -0.13 & (-0.26, 0.00) \\ Losing & -0.01 & (-0.17, 0.15) & -0.11 & (-0.26, 0.04) & 0.02 & (-0.16, 0.21) \\ \hline **Passes \(\{i_{m}\to j_{m}\}\) given** & \(\{S_{i_{m}}=1\}\)**:** & & & & \\ Graph distance & -0.69 & (-0.73, -0.65) & -0.70 & (-0.74, -0.65) & -0.98 & (-1.02, -0.95) \\ Pass received & 0.00 & (-2.3e-3, 2.0e-3) & 0.00 & (-1.7e-3, 3.2e-4) & 0.00 & (-1.6e-3, 8.6e-5) \\ \hline **Holding times** & \(h_{m}\)**:** & & & & \\ GK & -3.23 & (-3.34, -3.13) & -2.86 & (-2.95, -2.77) & -2.98 & (-3.06, -2.90) \\ LCB & -2.62 & (-2.68, -2.56) & -2.77 & (-2.83, -2.71) & -2.32 & (-2.37, -2.27) \\ CB & -2.62 & (-2.70, -2.55) & -2.70 & (-2.76, -2.64) & -2.39 & (-2.44, -2.34) \\ RCB & -2.86 & (-2.92, -2.79) & -2.44 & (-2.51, -2.37) & -2.34 & (-2.39, -2.30) \\ LWB & -2.33 & (-2.40, -2.26) & -2.61 & (-2.69, -2.53) & -2.27 & (-2.34, -2.20) \\ LCMF & -2.51 & (-2.59, -2.43) & -2.50 & (-2.57, -2.42) & -2.07 & (-2.13, -2.01) \\ DMF & -2.62 & (-2.68, -2.55) & -2.42 & (-2.49, -2.36) & -2.13 & (-2.18, -2.08) \\ RCMF & -2.34 & 
(-2.41, -2.26) & -2.62 & (-2.70, -2.54) & -2.21 & (-2.26, -2.16) \\ RWB & -2.48 & (-2.56, -2.40) & -2.29 & (-2.37, -2.20) & -2.01 & (-2.07, -1.95) \\ SS & -2.63 & (-2.72, -2.54) & -2.37 & (-2.46, -2.28) & -2.11 & (-2.19, -2.03) \\ CF & -2.62 & (-2.71, -2.54) & -2.98 & (-3.08, -2.88) & -2.30 & (-2.38, -2.22) \\ Winning & -0.42 & (-0.47, -0.36) & -0.37 & (-0.44, -0.30) & -0.36 & (-0.40, -0.33) \\ Losing & 0.05 & (0.00, 0.10) & -0.01 & (-0.05, 0.04) & -0.08 & (-0.13, -0.03) \\ \hline **Random effects:** & & & & & \\ Correlation & -0.36 & (-0.83, 0.12) & -0.25 & (-0.75, 0.25) & -0.03 & (-0.53, 0.47) \\ SD: success & 0.58 & (0.31, 0.86) & 0.62 & (0.34, 0.89) & 0.80 & (0.44, 1.17) \\ SD: pass & 0.51 & (0.28, 0.74) & 0.24 & (0.13, 0.36) & 0.47 & (0.25, 0.69) \\ \hline \end{tabular} \end{table} Table 1: Posterior summaries for Fiorentina, Crotone, and Inter Milan (with 3-5-2 formation): M refers to posterior medians and CI refers to 95% posterior credible intervals. \begin{table} \begin{tabular}{l c c} & \multicolumn{2}{c}{**Juventus Turin**} \\ & M & CI \\ \hline **Successful passes** & \(\{S_{i_{m}}=1\}\)**:** & \\ Intercept & 3.36 & (2.90, 3.81) \\ Length of pass & 0.00 & (-0.01, 0.00) \\ Forward pass & -0.61 & (-0.75, -0.47) \\ Start: half & 0.26 & (0.10, 0.42) \\ End: third & -0.92 & (-1.07, -0.76) \\ Air pass & -2.04 & (-2.18, -1.89) \\ Winning & -0.04 & (-0.17, 0.09) \\ Losing & 0.04 & (-0.13, 0.20) \\ \hline **Passes** & \(\{i_{m}\to j_{m}\}\) **given** & \(\{S_{i_{m}}=1\}\)**:** \\ Graph distance & -0.70 & (-0.73, -0.67) \\ Pass received & 0.00 & (-1.6e-3, 8.6e-05) \\ \hline **Holding times** & \(h_{m}\)**:** & \\ GK & -2.80 & (-2.88, -2.72) \\ LB & -2.08 & (-2.13, -2.03) \\ LCB & -2.38 & (-2.42, -2.33) \\ RCB & -2.37 & (-2.41, -2.32) \\ RB & -2.09 & (-2.14, -2.05) \\ LW & -2.06 & (-2.12, -2.00) \\ LCMF & -2.19 & (-2.24, -2.14) \\ RCMF & -2.26 & (-2.30, -2.21) \\ RW & -2.17 & (-2.23, -2.11) \\ SS & -1.81 & (-1.87, -1.74) \\ CF & -1.94 & (-2.01, -1.87) \\ Winning & -0.20 & (-0.23, -0.16) \\ Losing & -0.15 & (-0.19, -0.10) \\ \hline **Random effects:** & & \\ Correlation & -0.33 & (-0.82, 0.15) \\ SD: success & 0.69 & (0.38, 0.99) \\ SD: pass & 0.44 & (0.24, 0.64) \\ \hline \end{tabular} \end{table} Table 2: Posterior summaries for Juventus Turin (with 4-4-2 formation): M refers to posterior medians and CI refers to 95% posterior credible intervals. ### Predicting goals and match outcomes The holy grail of sport statistics is to predict match outcomes. In soccer, the greatest obstacle to predicting goals and hence match outcomes is the fact that the event of scoring a goal is a rare event and useful predictors are hard to come by, because scoring a goal requires a sequence of complex interactions among players of two competing teams. As a first step, we have therefore focused on ball control and interactions among players, which are important for scoring goals and winning matches. That said, we hope that advances in data collection, statistical modeling, and statistical computing help understand and predict such rare events (Brechot and Flepp, 2020). ### Model specification The deluge of high-resolution tracking data generated by soccer and other team-based sports implies that there are many possible features that may be relevant for predicting ball control, goals, and match outcomes. 
The specific features used in Section 5 make sense as a starting point, but sound model selection procedures and more data are needed to shed light on which features are useful for predicting ball control, goals, and match outcomes. ### Data-related challenges A Bayesian approach is well-suited to online learning, helping update knowledge as soon as additional data points roll in. Having said that, there are at least two challenges to online learning. First, current technology does not supply data without delay and without human intervention (i.e., without data cleaning by humans) that could be used by a Bayesian algorithm to update knowledge and make predictions. Second, Bayesian computing may not be fast enough to update knowledge about the stochastic process and make model-based predictions without delay. ### Computational challenges While a Bayesian approach is well-suited to online learning, Bayesian computing comes at a cost. We used Hamiltonian Monte Carlo (Congdon, 2019; Stan Development Team, 2023) to approximate posteriors. More scalable Bayesian algorithms exist--e.g., Variational Bayes (VB; see, e.g., Beal, 2003; Blei et al., 2017) and Approximate Bayesian Computation (ABC; see, e.g., Toni et al., 2009; Sisson et al., 2007)--but such methods may require tuning.
2308.14301
Artificial Intelligence in Career Counseling: A Test Case with ResumAI
The rise of artificial intelligence (AI) has led to various means of integration of AI aimed to provide efficiency in tasks, one of which is career counseling. A key part of getting a job is having a solid resume that passes through the first round of programs and recruiters. It is difficult to find good resources or schedule an appointment with a career counselor to help with editing a resume for a specific role. With the rise of ChatGPT, Bard, and several other AI chat programs it is possible to provide specific, automated feedback on various concerns to suggest places for improvement within the context of career counseling. This paper begins with a quick literature review on the ethical considerations and limitations of AI in career counseling. The authors also have created their own website service, called ResumAI, to test and review the functionality of an AI career counselor. The findings of this study will contribute to the understanding of chat AI ResumAI reviewer programs and sites. The implications of the findings for the field of career counseling, AI development, and ethical practice will be discussed.
Muhammad Rahman, Sachi Figliolini, Joyce Kim, Eivy Cedeno, Charles Kleier, Chirag Shah, Aman Chadha
2023-08-28T04:35:20Z
http://arxiv.org/abs/2308.14301v1
# Artificial Intelligence in Career Counseling: A Test Case with ResumAI ###### Abstract The rise of artificial intelligence (AI) has led to various means of integration of AI aimed to provide efficiency in tasks, one of which is career counseling. A key part of getting a job is having a solid resume that passes through the first round of programs and recruiters. It is difficult to find good resources or schedule an appointment with a career counselor to help with editing a resume for a specific role. With the rise of ChatGPT, Bard, and several other AI chat programs it is possible to provide specific, automated feedback on various concerns to suggest places for improvement within the context of career counseling. This paper begins with a quick literature review on the ethical considerations and limitations of AI in career counseling. The authors also have created their own website service, called ResumAI, to test and review the functionality of an AI career counselor. The findings of this study will contribute to the understanding of chat AI ResumAI reviewer programs and sites. The implications of the findings for the field of career counseling, AI development, and ethical practice will be discussed. ## 1 Introduction The rise of artificial intelligence (AI) has led to increasing integration of AI in mobile and web technology aimed at enhancing efficiency in tasks. The recent launch and utilization of ChatGPT, which rapidly reached 100,000,000 users within two months of launch, is evidence of the massive increase in interest in AI and its applications [14]. An under-explored area of AI utilization is career counseling. As of this paper's writing, there are currently no AI chat programs with a focus on career counseling. A key part of getting a job is having a resume that passes through the first round of screening. Many companies utilize AI to screen resumes, which has raised concerns about bias, as resumes may be rejected before being seen by a human eye [13]. For many students, especially at less well-funded schools, it is difficult to schedule an appointment with a career counselor or receive personalized feedback to help with editing a resume for a specific role. However, when career counselors at universities and colleges are able to provide this service, students report high satisfaction [12]. Career counseling is a field which would benefit from increased amounts of support, and an AI chatbot would help in that regard due to 24/7 availability. We define career counseling in this paper as services and activities intended to assist individuals to make educational, training, and occupational choices and to manage their careers [15]. An AI career counselor would ideally be able to provide specific, automated feedback on various concerns to suggest places for improvement. However, there are ethical concerns and gaps in human and AI career counseling that need to be examined to ensure responsible and equitable use of these technologies. The current literature is sparse on the considerations and limitations of AI in career counseling. In general, the most identified issues of AI usage include justice, fairness, transparency, non-maleficence, responsibility, and privacy, which would also apply to AI career counseling [16]. This research paper will investigate the literature about the current usage and explorations of AI career counselors within the context of a chat AI ResumAI reviewer service. 
In addition to identifying select relevant papers in the literature review about this topic, the authors of the paper have also created their own service, called _ResumAI_, to obtain user feedback on their experience and goals for the usage of such a tool. ## 2 Literature Review Using Google Scholar and PubMed, we searched relevant key terms for AI and career counseling for the literature review. Our aim was not to be fully all-encompassing but rather to achieve a high-level overview of the state of affairs for AI, ethics, and career counseling. A study from 2021 utilized AI and real-time data to predict occupational transitions with 76% accuracy, to help people find new jobs in times of major labor demand shifts [17]. Another paper reported creating a framework based on explainable AI to analyze educational factors such as aptitude, skills, expectations, and educational interests to help students make the right decisions for career growth, including choosing the correct classes. The authors used white- and black-box model techniques for explainability and for interpreting AI results on their career counseling-based dataset [14]. An AI chatbot was proposed and tested in a study in 2020, allowing students confused about what career to choose to answer questions and receive feedback on their potential career pathways [15]. A similar study found that AI and human counselors generally agreed on recommendations, but in cases of disagreement, the AI performed better than the predictions made by counselors [13]. A study from Finland published in 2021 investigating AI in career guidance in higher education found, based on its literature review, that AI can be beneficial for student self-regulation, motivation, and well-being as well as for personalized learning support and feedback [23]. The authors' own research found that students desired timely and accessible guidance, whether it be AI or human. They also found that students wanted to utilize AI to compare their skills to the requirements of specific positions. Guidance staff themselves expressed hope that AI would assist them in relaying information to students, so that they could better handle case management and build relationships with students. This is where services such as ResumAI will prove useful. ## 3 Methods The authors of the paper created a career counselor chatbot website utilizing the OpenAI API model text-davinci-003 ([https://openai.com/blog/openai-api](https://openai.com/blog/openai-api)). The website aims to guide the user through the career counseling service, with specific prompts on a drop-down menu from which the user can choose to begin their session. This was to ensure the user follows specific steps, to decrease the likelihood of generic advice being given. The process began by following the guide on OpenAI's website on creating a chatbot. Key steps are repeated here. First, the OpenAI Github was cloned ([https://github.com/openai/openai-quickstart-node](https://github.com/openai/openai-quickstart-node)). The Github contains many packages and node modules that provided the base for our application. We installed Node.js ([https://nodejs.org/en](https://nodejs.org/en)) and used npm install to install the necessary requirements. Next, our API key was added to a .env file and the app was run using npm run dev. The main components we worked with were the generate.js file, which can be found under the 'API' folder, and the index.js file, which is located within the 'Pages' folder. The generate.js file's main functionality is to connect to the API. 
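The deployed service is written in Node.js, but the call that generate.js issues can be sketched equivalently with the legacy (pre-1.0) OpenAI Python client. The prompt wording below is an illustrative paraphrase rather than the exact prompt used by ResumAI, the function name ask_resumai is hypothetical, and the 600-token cap anticipates the response limit discussed in Section 4.

```python
# Illustrative Python equivalent of the generate.js call; the deployed site uses Node.js.
# Requires the legacy openai package (pre-1.0 interface) and an OPENAI_API_KEY variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_resumai(question: str, resume_text: str) -> str:
    # Hypothetical prompt wording: the deployed prompt likewise instructs the model to
    # act as a career advisor named ResumAI and to personalize its answer to the resume.
    prompt = (
        "You are ResumAI, a career advisor. Using the resume below, answer the user's "
        "question with specific, personalized career advice.\n\n"
        f"Resume:\n{resume_text}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=600,   # response cap, matching the limit described in Section 4
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```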
Error handling is also done at multiple steps, such as checking to ensure the OpenAI API key is configured, validating that a question was entered, and ensuring a resume was inputted through PDF upload or by copy-pasting text. If all these conditions are met, the API call occurs, leading to a generation of a response based on the prompt. There is a generatePrompt function that takes a question and a resume as arguments and constructs a prompt instructing the AI to act as a career advisor named ResumAI, using the resume to personalize the answer to the question. The index.js file was modified to create the appearance of the chat page component. It was made to handle PDF parsing of uploaded resumes, and sample questions were also listed in a dropdown menu for users to choose from if they did not want to type their own custom question. A download-to-.txt button was also added at the end of the page to allow all the chats between the AI and the user to be downloaded if the user wishes to save their conversation. A separate Github was created for the static pages which can be found here: [https://github.com/Eivy1234/Resumai-site](https://github.com/Eivy1234/Resumai-site). Pages were added for the Home Page, the How to Page, and the About Us page. The main functionality remained in the index.js file, which is located in the ResumAI Github ([https://github.com/Eivy1234/capstone/tree/master](https://github.com/Eivy1234/capstone/tree/master)). Further components were added outside of index.js and generate.js, such as a spinner to indicate loading when the user presses the button to submit their question. CSS style changes were made to improve the appearance of the website. A visual representation of this process can be found in Figure 1. 
Figure 1: Architecture flow. 
## 4 Results and Discussion In this section we walk through a typical usage scenario with ResumAI and provide our perspectives on how it can be useful to its primary users - students or early career professionals, especially in low-resource environments. ### ResumAI The ResumAI website is located at [https://eivy1234.github.io/Resumai-site/index.html#](https://eivy1234.github.io/Resumai-site/index.html#). The following figures guide a new user when using the service for the first time. When visiting the website service, the user is met with the home page: **Welcome To ResumAI!** We have provided a list of sample questions the user may use if they are stuck on what to ask. These include "Are there any specific keywords or buzzwords I should include on my resume to align with industry expectations?" and "Based on my current resume, what are the key strengths and areas for improvement?" Although surface-level, they will help new users get accustomed to using ResumAI, and when they feel more prepared, they can input their custom questions at any point. Chat history is also available to be saved and downloaded into a text file in case the user wants to save their conversation. Let us input some sample questions with a resume from an aspiring Investment Banker from the University of Pennsylvania. First, the student asks about his resume in a general manner. ResumAI does a good job responding to the user prompt here. The user's resume is already clearly tailored to the finance sector, and ResumAI encourages the user to continue on his career trajectory given his credentials. If he lacked the proper courses and experience, ResumAI's response would have changed. 
Next, the user selects the sample question of "Are there sections or information missing from my resume that I should consider including?" Given this prompt, ResumAI offers specific feedback on what they can improve. This quick, personalized feedback is very useful for those who are in a time-crunch or do not want to spend excessive time on formatting. 
Figure 5: Once the user understands the instructions, they can feel confident to click on the Chat tab and begin their career counseling session. 
Figure 6: ResumAI on a specific career. 
The user can then go in and make these changes and continue to ask ResumAI about the career of investment banking, his resume, or anything else related to career counseling he wishes. If he wishes to change careers, for example, he could ask ResumAI what modifications he would need to achieve this career change. We presented this service to, and tested it with, 10 undergraduate students, who all provided positive feedback. The main features they appreciated included the concise nature of the feedback as well as the ability to ask any question to ResumAI without judgment. They also appreciated that ResumAI has deep enough knowledge to answer any question instantly on virtually any career. Otherwise they would have to set up multiple different appointments with different departments for specific expert advice on potential majors and careers. One student used it to answer her questions about the fields of healthcare and engineering and potential intersections in just one session. She said it gave her ideas to consider that she had not encountered before. These types of benefits were the authors' main goal when creating this service. Limitations include that, as of this paper's writing, the API utilized is unable to be fine-tuned, meaning that training it on custom resume datasets was not possible. As the availability of APIs increases, the next steps would be to fine-tune the models. Furthermore, the model we chose was text-davinci-003. A limitation of this model is that its training data goes only until September 2021 as of July 2023, meaning it may not be able to provide the most up-to-date information about the job market. In addition, we have included a maximum response length of 600 tokens, though this could be lifted for ResumAI to provide even more detailed information per prompt. The reasoning for this decision was to prevent ResumAI from being a one-and-done resume rewriter. The goal of the service is for career counseling to be provided, not strictly a resume service. Therefore the service will provide pointers and specific advice but will not rewrite the entire resume if prompted. Another feature that may be added is user authentication, allowing users to have cross-session chat conversations. This would allow users with very complex needs to provide and receive more information tailored to their specific situation. An advantage of this could be ResumAI's ability to remember each user and create custom-tailored advice. The user would be able to continuously revisit the site. They could theoretically start using it as a high school student, and come back as a young professional and still receive advice on how they should navigate their careers, for example, if they wanted a career change. 
Figure 7: ResumAI on adding content on a resume. 
Future work includes investigating more thoroughly the ethical concerns and gaps in human and AI career counseling to ensure responsible and equitable use of these technologies. 
This research paper did not entirely cover that scope with its limited literature review, so further research is needed to investigate the ethical concerns and gaps in human and AI career counseling within the context of a ResumAI reviewer-type program. ## 5 Conclusion Utilizing AI in career counseling shows great promise in increasing accessibility for students. More research is needed on issues of potential bias and explainability of results, but for students and career changers with minimal or no career counseling available, a service like ResumAI is a much-needed resource. ResumAI exemplifies the integration of AI in career counseling through its ability to have a social impact on low-resource job applicants and college students by supporting them through the resume development and career-finding process. With quick personalized feedback accessible to all, ResumAI provides support for those who lack resources around them to develop a strong resume. ResumAI empowers individuals from low-resource and low-income backgrounds to enhance their chances of employment.
2307.15842
Linear-quadratic Gaussian Games with Asymmetric Information: Belief Corrections Using the Opponents Actions
We consider two-player non-zero-sum linear-quadratic Gaussian games in which both players aim to minimize a quadratic cost function while controlling a linear and stochastic state process {using linear policies}. The system is partially observable with asymmetric information available to the players. In particular, each player has a private and noisy measurement of the state process but can see the history of their opponent's actions. The challenge of this asymmetry is that it introduces correlations into the players' belief processes for the state and leads to circularity in their beliefs about their opponents beliefs. We show that by leveraging the information available through their opponent's actions, both players can enhance their state estimates and improve their overall outcomes. In addition, we provide a closed-form solution for the Bayesian updating rule of their belief process. We show that there is a Nash equilibrium which is linear in the estimation of the state and with a value function incorporating terms that arise due to errors in the state estimation. We illustrate the results through an application to bargaining which demonstrates the value of these information corrections.
Ben Hambly, Renyuan Xu, Huining Yang
2023-07-29T00:07:50Z
http://arxiv.org/abs/2307.15842v1
Linear-quadratic Gaussian Games with Asymmetric Information: Belief Corrections Using the Opponents Actions+ ###### Abstract We consider two-player non-zero-sum linear-quadratic Gaussian games in which both players aim to minimize a quadratic cost function while controlling a linear and stochastic state process using linear policies. The system is partially observable with asymmetric information available to the players. In particular, each player has a private and noisy measurement of the state process but can see the history of their opponent's actions. The challenge of this asymmetry is that it introduces correlations into the players' belief processes for the state and leads to circularity in their beliefs about their opponents beliefs. We show that by leveraging the information available through their opponent's actions, both players can enhance their state estimates and improve their overall outcomes. In addition, we provide a closed-form solution for the Bayesian updating rule of their belief process. We show that there is a Nash equilibrium which is linear in the estimation of the state and with a value function incorporating terms that arise due to errors in the state estimation. We illustrate the results through an application to bargaining which demonstrates the value of these information corrections. ## 1 Introduction In the realm of game theory, two-player bargaining games have been extensively studied due to their practical implications in various domains [16, 4, 21, 9]. The types of game of interest here typically occur in negotiations between large organizations. One example is the negotiation of extraction rights between a host country and a foreign mining company. The host country seeks to maximize its economic benefits while ensuring sustainable resource management, while the foreign company aims to secure profitable extraction agreements. Another relevant scenario is the negotiation of rebates on pharmaceutical purchases between a manufacturer and a retail chain. The manufacturer desires to maximize its sales volume and market share, while the retail chain seeks competitive pricing to attract customers. To capture these situations we consider negotiations over a good whose value changes through time and assume that there are a fixed number of rounds of negotiation, where at each round both players present their bids simultaneously. In practice, both players possess limited and potentially asymmetric information about the good under negotiation. This provides compelling motivation for the mathematical modeling and analysis of two-player stochastic games where there is asymmetric information, which can shed light on the negotiation dynamics, identify the key parameters affecting the bargaining outcome, and provide insights into the search for information. In the game setting where the underlying state dynamics are only partially observed, the players' observation histories expand over time, leading to strategies with _expanding domains_ due to the dynamic and sequential nature of such games. To address this issue, a commonly employed technique is to summarize the time-expanding histories into _sufficient statistics_. This enables the application of the Dynamic Programming Principle (DPP), allowing such sequential decision-making problems to be solved through nested sub-optimization problems using the Bellman equation. The sufficient statistics may vary from player to player, depending on the observations available to each player. 
In the case of symmetric information in extensive form games, the concept of a Markov perfect equilibrium has been introduced [10], in which the players' strategies are determined solely by past events that are relevant to their payoffs, rather than the entire history. See also [3, 15]. However, for games with asymmetric information, identifying the appropriate sufficient statistics poses a significant challenge. A quantity commonly used as the sufficient statistic is a _posterior belief_ of the state dynamics, constructed using the available observations and Bayes formula. The main difficulty in this context is the emergence of _private beliefs_, the fact that different agents in the system may have different (private) observations about the same unknown quantity, which introduces _dependence_ among the agents' beliefs. One way to avoid this problem is to consider models in which private beliefs either do not exist (such as considering symmetric information games, or asymmetric but independent observations [18, 19]), or, if they do exist, they are not taken into account in the agents' strategies (see for example the concept of "public perfect equilibrium" [1]). Another closely related line of work considers common-information based Markov perfect equilibria, which breaks the history into the common and private parts. This idea was first introduced in [11] for finite games and then generalized in [7] to linear-quadratic games. See [12] for an extension to a more general setting and [6] for an application to cyber-physical systems. One key assumption made in this approach is that players' posterior beliefs about the system state, conditioned on their common information, are independent of the strategies used by the players in the past. This decouples the sequential rationality and belief consistency, resulting in a simplification in calculating the equilibria, and obviating the need to define (possibly correlated) private beliefs. To better understand the challenge posed by having private beliefs, let us consider the following simplified scenario. We have two players, \(P\) and \(E\), who collect private information about an unknown variable \(\Theta\) at each time step. Player \(P\) acts based on her own private belief about \(\Theta\), and expects that Player \(E\) will do the same. Although both players can observe each other's actions, they cannot observe each other's private beliefs. This means that Player \(P\) must form a belief about Player \(E\)'s private belief in order to interpret the actions from player \(E\), and take this into account when making her own decisions. However, this creates a chain of "belief about belief" that must be taken into consideration, which extends as long as each player's beliefs remain private. Due to the symmetry of the information structure, Player \(E\) must do the same. Thus, Player \(P\) needs to form beliefs about beliefs about beliefs of Player \(E\), creating an increasingly complex hierarchy of beliefs. This chain only stops when a belief in one step becomes a public function of the beliefs in the previous steps. Indeed, stochastic games with private beliefs have been identified as an open problem in the past decade [13, 14]. There have been a few attempts to address this challenging issue. In [8] a model with an unknown (but static) state \(\Theta\) of the world is considered, where each player has a private noisy observation of \(\Theta\) at timestamp \(t\). The private observations are independent given \(\Theta\). 
The authors specialize this setting to the case of a linear-quadratic Gaussian non-zero-sum game where \(\Theta\) is a Gaussian variable and players' observations are generated through a linear Gaussian model from \(\Theta\). The main contribution of the paper is to show that, due to the conditional independence of the private signals given \(\Theta\), the private belief chain stops at the second step and players' beliefs about others' beliefs are public functions of their own beliefs (the first step beliefs). In addition, the authors show that the perfect Bayesian equilibrium (PBE) for their model is a linear function of a players' private Kalman filter estimator. For a more general setting when the (partially) unknown quantity is the underlying stochastic process itself, [13] considers a zero-sum two-player game formulation in which each agent observes a private linear signal of the underlying process with non-degenerate Gaussian noise. The private signals at each time step are independent conditioned on the true value of the state at the same time. Similarly, the author shows that the private belief chain stops at the second step. However, the sufficient statistics developed in [13] are not completely correct, impeding the application of the DPP and making it impossible to derive the Nash equilibrium solutions. We will give a detailed technical discussion in Section 2.2. Our Contributions.Motivated by bargaining games, we consider a two-player non-zero-sum game with linear dynamics and quadratic cost functions. Both players cannot directly observe the dynamics but instead rely on private signal processes that are linear in the state process with some additive Gaussian noise. In addition, both players adopt linear policies to control the partially observable dynamics and each player can also observe the past actions taken by the other player. The novelty of our approach is that we show how to use the opponent's actions as additional information to correct the state estimate of the previous time step via a modified Kalman filtering procedure. For the partially observable setting, we formally derive the updating rule for the belief process and provide an explicit formula for the projection of the unknown state process onto the filtration generated by the information flow that is available to each player in Theorem 2.4. In addition, we prove a conditional version of the DPP that works in the game setting in Theorem 3.1. With the sufficient statistics and DPP in hand, we are able to derive a Nash equilibrium solution for the two-player game in Theorem 3.4. Finally, in Section 4, we extend the above-mentioned results to a more general setting where part of the state is fully observable and part of the state is partially observable through the signal process. In the mixed partially and fully observable setting, we establish parallel findings- specifically, the updating rule for the sufficient statistics in Theorem 4.2 and the Nash equilibrium in Theorem 4.3. To the best of our knowledge, this is the first work that rigorously characterizes the equilibrium solution under private belief when the underlying state dynamics are partially observable. To demonstrate the performance of our framework, we conclude the paper in Section 5 by discussing a bargaining game example and give a numerical illustration of the effects of using information corrections. 
## 2 Problem Set-up We consider a general setting for games with two players, \(P\) and \(E\), under partial observations and asymmetric information. The joint dynamics of the state process \(x_{t}\in\mathbb{R}^{n}\) takes a linear form (\(0\leq t\leq T-1\)): \[x_{t+1}=A_{t}x_{t}+B_{t}^{P}u_{t}^{P}+B_{t}^{E}u_{t}^{E}+\Gamma_{t}w_{t}, \tag{2.1}\] with initial value \(x_{0}=x\), and the controls of \(P\) and \(E\) are \(u_{t}^{P}\in\mathbb{R}^{m}\) and \(u_{t}^{E}\in\mathbb{R}^{k}\), respectively. Here, for each \(t\), the process noise \(w_{t}\in\mathbb{R}^{d}\) is an i.i.d. sample from \(\mathcal{N}(0,W)\) with \(W\in\mathbb{R}^{d\times d}\) and we have the model parameters \(A_{t}\in\mathbb{R}^{n\times n}\), \(B_{t}^{P}\in\mathbb{R}^{n\times m}\), \(B_{t}^{E}\in\mathbb{R}^{n\times k}\), and \(\Gamma_{t}\in\mathbb{R}^{n\times d}\). Information Structure.At the time \(t=0\), player \(P\) is not able to observe \(x_{0}\) but instead believes that the initial state is drawn from a Gaussian distribution. Namely, from the view point of player \(P\), \[x_{0}\sim\mathcal{N}(\widehat{x}_{0}^{P},W_{0}^{P}), \tag{2.2}\] where \(\widehat{x}_{0}^{P}\) is their own initial constant and \(W_{0}^{P}\) is a known constant covariance matrix. After that, player \(P\) observes the following noisy state signal (or measurement) \(z_{t}^{P}\in\mathbb{R}^{p}\): \[z_{t+1}^{P}=H_{t+1}^{P}\,x_{t+1}+\,w_{t+1}^{P},\quad w_{t+1}^{P}\sim\mathcal{N} (0,G^{P}),\quad t=0,1,\cdots,T-1, \tag{2.3}\] with \(\{w_{t}^{P}\}_{t=0}^{T-1}\) a sequence of i.i.d. random variables. Here \(G^{P}\in\mathbb{R}^{p\times p}\) and \(H_{t+1}^{P}\in\mathbb{R}^{p\times n}\). Similarly, player \(E\) believes that the initial state is drawn from a Gaussian distribution: \[x_{0}\sim\mathcal{N}(\widehat{x}_{0}^{E},W_{0}^{E}), \tag{2.4}\] with their own initial constant \(\widehat{x}_{0}^{E}\) and known constant \(W_{0}^{E}\). Thereafter player \(E\) observes the following noisy state signal \(z_{t}^{E}\in\mathbb{R}^{q}\): \[z_{t+1}^{E}=H_{t+1}^{E}\,x_{t+1}+\,w_{t+1}^{E},\quad w_{t+1}^{E}\sim\mathcal{N }(0,G^{E}),\quad t=0,1,\cdots,T-1. \tag{2.5}\] with \(\{w_{t}^{E}\}_{t=0}^{T-1}\) a sequence of i.i.d. random variables. For simplicity we assume that \(\{w_{t}^{E}\}_{t=0}^{T-1}\) are independent from \(\{w_{t}^{P}\}_{t=0}^{T-1}\). In addition, \(G^{E}\in\mathbb{R}^{q\times q}\) and \(H_{t+1}^{E}\in\mathbb{R}^{q\times n}\). We follow [13] to define games with perfect, imperfect, and partial observations: * If the observation matrices \(H_{t}^{P}\) and \(H_{t}^{E}\) are both the identity matrix and there is no measurement noise, that is, the covariance matrices \(G^{P}=0\) and \(G^{E}=0\), we have a game with _full (or perfect) observation_. In other words, the players' observation is \(z_{t}^{E}=z_{t}^{P}=x_{t}\). * If the observation matrices \(H_{t}^{P}\) and \(H_{t}^{E}\) are the identity matrix and there is measurement noise, we have a game with _imperfect observation_. Namely, the players' observations are of the form \[z_{t}^{P}=x_{t}+w_{t}^{P},\quad z_{t}^{E}=x_{t}+w_{t}^{E}.\] * If the observation matrices are not the identity and there is measurement noise, as in \[z_{t}^{P}=H_{t}^{P}x_{t}+w_{t}^{P},\quad z_{t}^{E}=H_{t}^{E}x_{t}+w_{t}^{E},\] we have a game with _partial observation_. Both players make their decisions based on the public and private information available to them. 
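To fix ideas, the short numpy sketch below simulates the joint dynamics (2.1) together with the two private signal processes (2.3) and (2.5). All dimensions and model matrices are illustrative placeholders (taken time-invariant for brevity) rather than values from the paper, and both controls are set to zero simply to expose the information structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and (time-invariant) model parameters: placeholders, not values from the paper.
n, m, k, d, p, q, T = 4, 2, 2, 4, 3, 3, 10
A = 0.9 * np.eye(n)
B_P = rng.normal(size=(n, m))
B_E = rng.normal(size=(n, k))
Gamma = np.eye(n)                               # maps the d-dimensional process noise into the state
W = 0.1 * np.eye(d)                             # process-noise covariance
H_P, H_E = rng.normal(size=(p, n)), rng.normal(size=(q, n))
G_P, G_E = 0.2 * np.eye(p), 0.2 * np.eye(q)     # measurement-noise covariances

def rollout(x0, policy_P, policy_E):
    """Simulate the state process (2.1) and the private signals (2.3), (2.5) for T steps."""
    x = x0.copy()
    xs, zP, zE = [x0], [], []
    for t in range(T):
        u_P, u_E = policy_P(t), policy_E(t)
        w = rng.multivariate_normal(np.zeros(d), W)
        x = A @ x + B_P @ u_P + B_E @ u_E + Gamma @ w                        # state transition (2.1)
        xs.append(x)
        zP.append(H_P @ x + rng.multivariate_normal(np.zeros(p), G_P))       # P's noisy signal (2.3)
        zE.append(H_E @ x + rng.multivariate_normal(np.zeros(q), G_E))       # E's noisy signal (2.5)
    return xs, zP, zE

xs, zP, zE = rollout(np.zeros(n), lambda t: np.zeros(m), lambda t: np.zeros(k))
```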
We write \(\mathcal{Z}_{t}^{P}=\{z_{s}^{P}\}_{s=1}^{t}\) and \(\mathcal{Z}_{t}^{E}=\{z_{s}^{E}\}_{s=1}^{t}\) for the private signals players P and E receive up to time \(t\)\((1\leq t\leq T)\), respectively. Let \(\mathcal{U}_{t}^{P}=\{u_{s}^{P}\}_{s=1}^{t}\) and \(\mathcal{U}_{t}^{E}=\{u_{s}^{E}\}_{s=1}^{t}\) denote the control history of player \(P\) and player \(E\) up to time \(t\), respectively. We assume \(\mathcal{H}_{t}^{P}\) is the information (or history) available to player \(P\) and \(\mathcal{H}_{t}^{E}\) is the information available to player \(E\) for them to make decisions at time \(t\), where \(\mathcal{H}_{t}^{P}\) and \(\mathcal{H}_{t}^{E}\) follow: \[\mathcal{H}_{t}^{P}=\{\widehat{x}_{0}^{P},W_{0}^{P},W_{0}^{E}\}\cup\mathcal{ Z}_{t}^{P}\cup\mathcal{U}_{t-1}^{P}\cup\mathcal{U}_{t-1}^{E},\quad\mathcal{H}_{t }^{E}=\{\widehat{x}_{0}^{E},W_{0}^{P},W_{0}^{E}\}\cup\mathcal{Z}_{t}^{E}\cup \mathcal{U}_{t-1}^{P}\cup\mathcal{U}_{t-1}^{E}. \tag{2.6}\] Note that the covariance matrices \(\{W_{0}^{P},W_{0}^{E}\}\) are known to both players. In the posterior update of a Gaussian distribution, sufficient statistics involve both the mean and covariance matrix. Knowing \(\{W_{0}^{P},W_{0}^{E}\}\) is essential for both players to update their posterior covariance matrices. In addition, we highlight that both players know all the model parameters. Cost Function.We consider a non-zero sum game between player \(P\) and player \(E\) where they strive to minimize their own cost functions. Player \(i\)'s \((i=P,E)\) cost function is given by \[\min_{\{u_{t}^{i}\}_{t=0}^{T-1}}J^{i}(\widehat{x}_{0}^{i}) := \min_{\{u_{t}^{i}\}_{t=0}^{T-1}}\mathbb{E}\left[x_{T}^{\top}Q_{T} ^{i}x_{T}+\sum_{t=0}^{T-1}\left(x_{t}^{\top}Q_{t}^{i}x_{t}+(u_{t}^{i})^{\top}R _{t}^{i}u_{t}^{i}\right)\Bigg{|}\ \mathcal{H}_{0}^{i}\right], \tag{2.7}\] with cost parameters \(Q_{t}^{P},Q_{t}^{E}\in\mathbb{R}^{n\times n}\), \(R_{t}^{P}\in\mathbb{R}^{m\times m}\), and \(R_{t}^{E}\in\mathbb{R}^{k\times k}\). For the well-definedness of the game, we summarize the assumptions on the model parameters, initial state, and noise. **Assumption 2.1** (Parameters, Initial State, and Noise).: _For \(i=P,E\),_ 1. \(\{w_{t}\}_{t=0}^{T-1}\) _and_ \(\{w^{i}_{t}\}_{t=1}^{T-1}\) _are zero-mean, i.i.d. Gaussian random variables that are independent from_ \(x_{0}\) _and each other and such that_ \(\mathbb{E}[w_{t}w^{\top}_{t}]=W\) _is positive definite and_ \(\mathbb{E}[w^{i}_{t}(w^{i}_{t})^{\top}]=G^{i}\) _is positive definite;_ 2. _Both matrices_ \(H^{P}_{t+1}\in\mathbb{R}^{p\times n}\) _and_ \(H^{E}_{t+1}\in\mathbb{R}^{q\times n}\) _have rank_ \(n\) _for_ \(t=0,\ldots,T-1\)_._ 3. _The matrices_ \(\Gamma_{t}W\Gamma_{t}^{\top}\) _are non-singular for_ \(t=1,\ldots,T\)_;_ 4. _The cost matrices_ \(Q^{i}_{t}\)_, for_ \(t=0,1,\ldots,T\) _are positive semi-definite, and_ \(R^{i}_{t}\) _for_ \(t=0,1,\ldots,T-1\) _are positive definite._ Under the assumptions we make on \(G^{P}\) and \(G^{E}\), we exclude the perfect information case as we require \(G^{P}\) and \(G^{E}\) to be positive definite. This is a challenging scenario to study as the agent cannot get precise information for any coordinate of the state process. On the other hand, since \(H^{P}_{t+1}\in\mathbb{R}^{p\times n}\) and \(H^{E}_{t+1}\in\mathbb{R}^{q\times n}\) have rank \(n\), the signal process will reveal aggregated information from all of the coordinates. Hence the agent is still capable of gradually learning each coordinate of the state process. 
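Relating back to the objective (2.7), the sketch below accumulates one realized cost along a simulated trajectory; averaging it over many independent roll-outs would give a Monte Carlo approximation of the conditional expectation in (2.7). The function name and the time-invariant cost matrices are our own simplifications.

```python
import numpy as np

def realized_cost(xs, us, Q, R, Q_T):
    """Realized finite-horizon cost x_T' Q_T x_T + sum_t (x_t' Q x_t + u_t' R u_t).

    xs : list of states x_0, ..., x_T;  us : list of controls u_0, ..., u_{T-1}.
    Time-invariant Q and R are used here only to keep the sketch short.
    """
    cost = xs[-1] @ Q_T @ xs[-1]
    for x, u in zip(xs[:-1], us):
        cost += x @ Q @ x + u @ R @ u
    return cost
```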
**Remark 2.2**.: We focus on the case where all states are partially observable, as given in Assumption 2.1, for the rest of Section 2 and for all of Section 3. We generalize it to a mixed fully and partially observable setting in Section 4. ### Sufficient Statistics: Decentralized Kalman Filtering with Information Correction To better demonstrate the sufficient statistics of Kalman filtering in the game setting, we start with a brief discussion of some existing results in the single-agent setting. #### 2.1.1 Preliminary: Single-agent setting Suppose there is a single player \(P\) who controls the state dynamics (2.1): \[x_{t+1}=A_{t}x_{t}\,+\,B^{P}_{t}u^{P}_{t}\,+\,\Gamma_{t}w_{t}, \tag{2.8}\] where \(x_{0}=x\) is the initial position and \(u^{P}_{t}\in\mathbb{R}^{m}\) are the controls from player \(P\). Information Structure.In this single-player case player \(P\) believes that the initial state is drawn from a Gaussian distribution at the time \(t=0\): \[x_{0}\sim\mathcal{N}\left(\widehat{x}^{P}_{0},W^{P}_{0}\right), \tag{2.9}\] and thereafter player \(P\) observes the following noisy state signal \(z_{t}\in\mathbb{R}^{p}\): \[z^{P}_{t+1}=H^{P}_{t+1}\,x_{t+1}+\,w^{P}_{t+1},\quad w^{P}_{t+1}\sim\mathcal{ N}(0,G^{P}),\quad t=0,1,\cdots,T-1, \tag{2.10}\] with \(\{w^{P}_{t}\}_{t=0}^{T-1}\) a sequence of i.i.d. random variables. Here \(G^{P}\in\mathbb{R}^{p\times p}\) and \(H^{P}_{t+1}\in\mathbb{R}^{p\times n}\). Assume the information available to player \(P\) to make a decision at time \(t\) follows: \[\mathcal{H}^{P}_{t}=\{\widehat{x}^{P}_{0},W^{P}_{0}\}\cup\mathcal{Z}^{P}_{t} \cup\mathcal{U}^{P}_{t-1}, \tag{2.11}\] then we have the following result characterizing player \(P\)'s belief in the state. **Theorem 2.3**.: _[_17_, (5.3-39)-(5.3-42)]_ _The sufficient statistic for player P at decision time \(t=0\) is \((\widehat{x}_{0},W_{0}^{P})\). Namely player P believes that \(x_{0}\sim\mathcal{N}(\widehat{x}_{0},W_{0}^{P})\). For time \(1\leq t\leq T-1\), the distribution of the physical state \(x_{t}\) calculated by player \(P\), by conditioning on the private information available to him at time \(t\), is given by_ \[x_{t}\sim\mathcal{N}(\widehat{x}_{t}^{P},\widehat{\Sigma}_{t}^{P}), \tag{2.12}\] _where_ \[\big{(}\widehat{x}_{t}^{P}\big{)}^{-} =A_{t-1}\widehat{x}_{t-1}^{P}+B_{t-1}^{P}u_{t-1}^{P}, \tag{2.13a}\] \[\big{(}\widehat{\Sigma}_{t}^{P}\big{)}^{-} =A_{t-1}\widehat{\Sigma}_{t-1}^{P}A_{t-1}^{\top}+\Gamma_{t-1}W \Gamma_{t-1}^{\top},\] (2.13b) \[K_{t}^{P} =\big{(}\widehat{\Sigma}_{t}^{P}\big{)}^{-}(H_{t}^{P})^{\top} \left[H_{t}^{P}\big{(}\widehat{\Sigma}_{t}^{P}\big{)}^{-}(H_{t}^{P})^{\top}+G ^{P}\right]^{-1},\] (2.13c) \[\widehat{x}_{t}^{P} =\big{(}\widehat{x}_{t}^{P}\big{)}^{-}+K_{t}^{P}\left[z_{t}^{P}-H _{t}^{P}\big{(}\widehat{x}_{t}^{P}\big{)}^{-}\right],\] (2.13d) \[\widehat{\Sigma}_{t}^{P} =\big{(}I-K_{t}^{P}H_{t}^{P}\big{)}\big{(}\widehat{\Sigma}_{t}^{P} \big{)}^{-}, \tag{2.13e}\] _with initial condition \(\widehat{x}_{0}^{P}=\widehat{x}_{0}^{P}\) and \(\widehat{\Sigma}_{0}^{P}=W_{0}^{P}\)._ Note that \(\big{(}\widehat{x}_{t}^{P}\big{)}^{-}=\mathbb{E}\,\left[x_{t}\right]\mathcal{H }_{t-1}\right]\) is the state estimate obtained before the measurement update, namely, using information up to time \(t-1\). This term is often called the _pre-estimate_ in the literature. 
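The recursion (2.13a)-(2.13e) is the standard Kalman filter. As a point of reference for the two-player correction developed next, a minimal numpy sketch of one predict/update step is given below, assuming time-invariant model matrices; the function and variable names are ours.

```python
import numpy as np

def kf_step(x_prev, Sig_prev, u_prev, z_t, A, B, Gamma, W, H, G):
    """One step of the single-agent recursion (2.13a)-(2.13e)."""
    x_pre = A @ x_prev + B @ u_prev                            # pre-estimate (2.13a)
    Sig_pre = A @ Sig_prev @ A.T + Gamma @ W @ Gamma.T         # pre-covariance (2.13b)
    K = Sig_pre @ H.T @ np.linalg.inv(H @ Sig_pre @ H.T + G)   # Kalman gain (2.13c)
    x_post = x_pre + K @ (z_t - H @ x_pre)                     # post-estimate (2.13d)
    Sig_post = (np.eye(len(x_prev)) - K @ H) @ Sig_pre         # post-covariance (2.13e)
    return x_post, Sig_post
```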
With the new measurement information \(z_{t}^{P}\) at time \(t\), the agent updates the state estimate to \(\widehat{x}_{t}^{P}=\mathbb{E}\,\left[x_{t}\right]\mathcal{H}_{t}^{P}\Big{]}\), which is a linear combination of \(\big{(}\widehat{x}_{t}^{P}\big{)}^{-}\) and \(z_{t}^{P}\). This term is often called the _post-estimate_[17]. Here the second term \(z_{t}^{P}-H_{t}^{P}\big{(}\widehat{x}_{t}^{P}\big{)}^{-}\) on the RHS of (2.13d) is independent of the first term \(\big{(}\widehat{x}_{t}^{P}\big{)}^{-}\). The only decision variable \(K_{t}^{P}\) in the coefficients is chosen so that the conditional mean-square error is minimized. Namely, \[K_{t}^{P}=\arg\min\ \mathbb{E}\,\left[\|x_{t}-\widehat{x}_{t}^{P}\|^{2} \right|\mathcal{H}_{t}^{P}\right]\!. \tag{2.14}\] In terms of quantifying the uncertainty in the state estimate, we have \(\big{(}\widehat{\Sigma}_{t}^{P}\big{)}^{-}=\mathbb{E}\Big{[}(x_{t}-(\widehat{ x}_{t}^{P})^{-})^{\top}(x_{t}-(\widehat{x}_{t}^{P})^{-})\Big{]}\) and \(\widehat{\Sigma}_{t}^{P}=\mathbb{E}\Big{[}\big{(}x_{t}-\widehat{x}_{t}^{P} \big{)}^{\top}\big{(}x_{t}-\widehat{x}_{t}^{P}\big{)}\Big{]}\) representing the covariance before and after measurement updates, respectively. #### 2.1.2 Two-player setting There are a few challenges for the belief updates in the game setting. In this paper we focus on the case where both players \(P\) and \(E\) adopt linear feedback policies: \[u_{t}^{P}:=F_{t}^{P}\,\mathbb{E}[x_{t}|\mathcal{H}_{t}^{P}],\ \text{and}\ u_{t}^{E}:=F_{t}^{E}\,\mathbb{E}[x_{t}|\mathcal{H}_{t}^{E}], \tag{2.15}\] with some _policy matrices_\(F_{t}^{P}\in\mathbb{R}^{m\times n}\) and \(F_{t}^{E}\in\mathbb{R}^{k\times n}\). From player \(P\)'s perspective, the new information collected at time \(t\) is \(z_{t}^{P}\) and \(u_{t-1}^{E}\). Intuitively, player \(P\) should be able to use \(u_{t-1}^{E}\) as some additional information to improve their estimate of \(x_{t-1}\). Note that player \(P\) is aware that player \(E\) adopts a linear feedback policy \(u_{t-1}^{E}=F_{t-1}^{E}\mathbb{E}[x_{t-1}|\mathcal{H}_{t-1}^{E}]\), for which the opponent's policy matrix \(F_{t-1}^{E}\) (a function of model parameters) is known but the state estimate \(\mathbb{E}[x_{t-1}|\mathcal{H}_{t-1}^{E}]\) is unknown to player \(P\). In order to utilize the information contained in \(u_{t-1}^{E}\), player \(P\) also needs to infer the distribution of \(\mathbb{E}[x_{t-1}|\mathcal{H}_{t-1}^{E}]\) using \(\mathcal{H}_{t-1}^{P}\). We will show later on in Section 2.2 that this idea of _using the opponent's actions to make an information correction_ is not only a possible approach to improve the estimation precision but also a necessary step to guarantee that the conditional expectations based on the information filtrations satisfy the tower property and hence that the DPP holds. Let us start at decision time \(t=0\). Player \(P\) reasons as follows. He models his initial belief \(\widehat{x}_{0}^{P}\) of the initial state \(x_{0}\) as \[\widehat{x}_{0}^{P}=x_{0}+e_{0}^{P}, \tag{2.16}\] where \(x_{0}\) is the true physical state and \(e_{0}^{P}\) is player \(P\)'s estimation error, whose distribution, in view of (2.2), is \(\mathcal{N}(0,W_{0}^{P})\). In the same way, player \(E\)'s belief \(\widehat{x}_{0}^{E}\) of the initial state \(x_{0}\) follows \[\widehat{x}_{0}^{E}=x_{0}+e_{0}^{E}, \tag{2.17}\] where, as before, \(x_{0}\) is the true physical state and \(e_{0}^{E}\) is player \(E\)'s estimation error, whose distribution, in view of (2.4), is \(\mathcal{N}(0,W_{0}^{E})\). 
The Gaussian random variables \(e_{0}^{E}\) and \(e_{0}^{P}\) are independent by our assumptions. From player \(P\)'s perspective, \(\widehat{x}_{0}^{P}\) is known but \(\widehat{x}_{0}^{E}\) is a random variable. Subtracting (2.16) from (2.17), at time \(t=0\) player \(P\) concludes that as far as he is concerned, player \(E\)'s estimate, upon which he will decide his optimal control, is the random variable \[\widehat{x}_{0}^{E}=\widehat{x}_{0}^{P}+e_{0}^{E}-e_{0}^{P}. \tag{2.18}\] As far as \(P\) is concerned, \(E\)'s estimate of the initial state \(x_{0}\) is a Gaussian random variable \[\widehat{x}_{0}^{E}\sim\mathcal{N}(\widehat{x}_{0}^{P},W_{0}^{P}+W_{0}^{E}). \tag{2.19}\] Thus, at time \(t=0\) player \(P\) has used his private information \(\widehat{x}_{0}^{P}\) and the public information \((W_{0}^{P},W_{0}^{E})\) to calculate the distribution of the sufficient statistic \(\widehat{x}_{0}^{E}\) of player \(E\). Similarly, as far as player \(E\) is concerned, at time \(t=0\) the distribution of the initial state estimate \(\widehat{x}_{0}^{P}\) of player \(P\) follows \(\mathcal{N}(\widehat{x}_{0}^{E},W_{0}^{P}+W_{0}^{E})\). In addition to (2.15), we further restrict the admissible set of policy matrices to the following: \[\mathcal{A}^{P}:=\big{\{}F^{P}\in\mathbb{R}^{m\times n}|F^{P}\text{ has rank}\min(m,n)\big{\}},\quad\mathcal{A}^{E}:=\big{\{}F^{E}\in\mathbb{R}^{k\times n}|F^{E} \text{ has rank}\min(k,n)\big{\}}.\] For time \(t\geq 1\), we have the following result. **Theorem 2.4** (Sufficient Statistics in Two-player Games).: _Assume the sufficient statistic of player \(i\) (\(i=P,E\)) at decision time \(t=0\) is \(x_{0}\sim N(\widehat{x}_{0}^{i},W_{0}^{i})\). In addition, assume both players are applying linear strategies. Namely, \(u_{t}^{P}=F_{t}^{P}\mathbb{E}[x_{t}|\mathcal{H}_{t}^{P}]\) and \(u_{t}^{E}=F_{t}^{E}\mathbb{E}[x_{t}|\mathcal{H}_{t}^{E}]\) for some matrices \(F_{t}^{P}\in\mathcal{A}^{P}\) and \(F_{t}^{E}\in\mathcal{A}^{E}\). 
Then, for time \(1\leq t\leq T-1\), the distribution of the physical state \(x_{t}\) calculated by player \(i\) conditioning on the private information available to him at time \(t\) follows_ \[x_{t}\sim\mathcal{N}(\widehat{x}_{t}^{i},\widehat{\Sigma}_{t}^{i}), \tag{2.20}\] _where, for \(j\neq i\),_ \[J_{t-1}^{i} =\Big{(}\widehat{\Sigma}_{t-1}^{i}-\widehat{\Sigma}_{t-1}^{(i,j)} \Big{)}\widehat{\Sigma}_{t-1}^{(i,j)}(Y_{t-1}^{j})^{\top}\Big{(}Y_{t-1}^{j} \widehat{\Sigma}_{t-1}^{(i,j)}\widehat{\Sigma}_{t-1}^{(i,j)}(Y_{t-1}^{j})^{ \top}\Big{)}^{-1}, \tag{2.21a}\] \[(\widehat{x}_{t-1}^{i})^{+} =\widehat{x}_{t-1}^{i}+J_{t-1}^{i}(y_{t-1}^{j}-Y_{t-1}^{j} \widehat{x}_{t-1}^{i}),\] (2.21b) \[(\widehat{\Sigma}_{t-1}^{i})^{+} =\widehat{\Sigma}_{t-1}^{i}-\Big{(}\widehat{\Sigma}_{t-1}^{i}- \widetilde{\Sigma}_{t-1}^{(i,j)}\Big{)}(\widehat{\Sigma}_{t-1}^{(i,j)})^{-1} \Big{(}\widehat{\Sigma}_{t-1}^{i}-\widetilde{\Sigma}_{t-1}^{(i,j)}\Big{)}^{\top},\] (2.21c) \[\big{(}\widehat{x}_{t}^{i}\big{)}^{-} =A_{t-1}(\widehat{x}_{t-1}^{i})^{+}+B_{t-1}^{P}u_{t-1}^{P}+B_{t-1} ^{E}u_{t-1}^{E},\] (2.21d) \[\big{(}\widehat{\Sigma}_{t}^{i}\big{)}^{-} =A_{t-1}(\widehat{\Sigma}_{t-1}^{i})^{+}A_{t-1}^{\top}+\Gamma_{t-1}W \Gamma_{t-1}^{\top}, \tag{2.21e}\] \[K_{t}^{i} =\left(\widehat{\Sigma}_{t}^{i}\right)^{-}(H_{t}^{i})^{\top}\left[H_ {t}^{i}(\widehat{\Sigma}_{t}^{i})^{-}(H_{t}^{i})^{\top}+G^{i}\right]^{-1}, \tag{2.21f}\] \[\widehat{x}_{t}^{i} =\left(\widehat{x}_{t}^{i}\right)^{-}+K_{t}^{i}\left[z_{t}^{i}-H_ {t}^{i}\left(\widehat{x}_{t}^{i}\right)^{-}\right],\] (2.21g) \[\widehat{\Sigma}_{t}^{i} =\left(I-K_{t}^{i}H_{t}^{i}\right)(\widehat{\Sigma}_{t}^{i})^{-},\] (2.21h) \[\widehat{\Sigma}_{t}^{(i,j)} =\left(I-K_{t}^{i}H_{t}^{i}\right)\left(A_{t-1}\Delta_{t-1}^{(i,j )}A_{t-1}^{\top}+\Gamma_{t-1}W\Gamma_{t-1}^{\top}\right)\left(I-K_{t}^{j}H_{t }^{j}\right)^{\top}\] (2.21i) \[\Delta_{t-1}^{(i,j)} =(\widehat{\Sigma}_{t-1}^{i}-\widetilde{\Sigma}_{t-1}^{(i,j)})( \widehat{\Sigma}_{t-1}^{(i,j)})^{-1}(\widehat{\Sigma}_{t-1}^{j}-\widetilde{ \Sigma}_{t-1}^{(j,i)})^{\top}+\widetilde{\Sigma}_{t-1}^{(i,j)}\] (2.21j) \[\widehat{\Sigma}_{t}^{(i,j)} =\widehat{\Sigma}_{t}^{i}+\widehat{\Sigma}_{t}^{j}-\widetilde{ \Sigma}_{t}^{(i,j)}-\left(\widetilde{\Sigma}_{t}^{(i,j)}\right)^{\top}, \tag{2.21k}\] _where \(\widehat{\Sigma}_{t-1}^{(i,j)}\) is positive definite. The values of \(Y_{t}^{P}\in\mathbb{R}^{m\times n}\), \(Y_{t}^{E}\in\mathbb{R}^{k\times n}\) and \(y_{t}^{P}\),\(y_{t}^{E}\) depend on the ranks of \(F_{t}^{P}\) and \(F_{t}^{E}\) as follows:_ 1. _The pair_ \[(Y_{t}^{P},y_{t}^{P})=\left\{\begin{array}{ll}(F_{t}^{P},u_{t}^{P})&\text{ if }F_{t}^{P}\text{ has rank }m<n\text{,}\\ (I_{n},\widehat{x}_{t}^{P})&\text{ if }F_{t}^{P}\text{ has rank }n\leq m\text{.}\end{array}\right.\] 2. _The pair_ \[(Y_{t}^{E},y_{t}^{E})=\left\{\begin{array}{ll}(F_{t}^{E},u_{t}^{E})&\text{ if }F_{t}^{P}\text{ has rank }k<n\text{,}\\ (I_{n},\widehat{x}_{t}^{E})&\text{ if }F_{t}^{P}\text{ has rank }n\leq k\text{.}\end{array}\right.\] _In addition, the initial conditions are \(\widehat{\Sigma}_{0}^{i}=W_{0}^{i}\), \(\widetilde{\Sigma}_{0}^{(i,j)}=0\), and \(\widehat{\Sigma}_{0}^{(i,j)}=\widehat{\Sigma}_{0}^{i}+\widehat{\Sigma}_{0}^{j}\). Finally, in player \(i\)'s view, the posterior distribution for the state estimate \(\widehat{x}_{t}^{j}\) of player \(j\) is_ \[\widehat{x}_{t}^{j}\sim\mathcal{N}(\widehat{x}_{t}^{i},\widehat{\Sigma}_{t}^{( i,j)}). \tag{2.22}\] **Remark 2.5**.: 1. 
\(J_{t-1}^{i}\) in (2.21a) is the Kalman gain for player \(i\) when viewing player \(j\)'s action as the additional signal to improve the state estimation in the previous step \(t-1\). We call \((\widehat{x}_{t-1}^{i})^{+}\) in (2.21b) the _improved-estimate_ for \(x_{t-1}\), with the corresponding estimation error \((\widehat{\Sigma})_{t-1}^{+}\). 2. The post-estimates of the state and covariance after the measurement/signal update (2.21g)-(2.21h) take similar forms to the single-agent case (2.13d)-(2.13e). The differences occur in the input state and covariance estimates. In particular, the post-estimate for the single-agent setting uses the pre-estimate as the input whereas the post-estimate for the two-player setting uses the improved-estimate as the input. 3. Equation (2.22) in Theorem 2.4 shows that the chain of "belief about belief" stops at the second step, as the belief at the second step becomes a public function of the beliefs at the first step. 4. When \(n>k,m\), players \(P\) and \(E\) will not be able to recover the opponent's state estimate via observing the action taken by the opponent. Instead, from player \(i\)'s viewpoint, the posterior distribution for the state estimate \(\widehat{x}_{t}^{j}\) of player \(j\) follows a Gaussian distribution with mean \(\widehat{x}_{t}^{i}\) and variance \(\widehat{\Sigma}_{t}^{(i,j)}\). 5. Consider the special case that \[F_{t}^{P}\in\mathbb{R}^{m\times n}\ \ \text{has rank }n,\ n\leq m,\ \text{and}\ F_{t}^{E}\in\mathbb{R}^{k\times n}\ \ \text{has rank }n,\ n\leq k.\] (2.23) In this case, player \(i\) can fully recover the state estimate from player \(j\) by observing her actions, as the RHS of the following equation is fully known to player \(i\): \[\mathbb{E}[x_{t}|\mathcal{H}_{t}^{j}]=((F_{t}^{j})^{\top}F_{t}^{j})^{-1}(F_{t}^{ j})^{\top}\,u_{t}^{j}.\] (2.24) Observing that \(J^{i}_{t}+J^{j}_{t}=I\), we have \[(\widehat{x}^{i}_{t})^{+}=\left(I-J^{i}_{t}\right)\,\widehat{x}^{i}_{t}+J^{i}_{t }\widehat{x}^{j}_{t}=J^{j}_{t}\widehat{x}^{i}_{t}+\left(I-J^{i}_{t}\right)\, \widehat{x}^{j}_{t}=(\widehat{x}^{j}_{t})^{+}.\] This shows that Player P and Player E have the _same_ improved estimate after observing each other's actions. In this case, information is fully shared between the players. Proof.: There are four possible combinations under the conditions (i)-(ii) stated in Theorem 2.4. Here we only show the proof for the following combination as the proof for each of the other combinations follows the same logic: \[F^{P}_{t}\in\mathbb{R}^{m\times n}\ \ \text{has rank}\ m,\ m<n;\ \text{and}\ F^{E}_{t}\in\mathbb{R}^{k\times n}\ \ \text{has rank}\ k,\ k<n. \tag{2.25}\] In addition, under condition (2.25), we only prove the results for player \(P\) here as the results for player \(E\) follow in the same way. We handle the new information \(u^{E}_{t-1}\) and \(z^{P}_{t}\), in an incremental fashion. More precisely, we first adjust the estimate \(\widehat{x}^{P}_{t-1}\) using \(u^{E}_{t-1}\), denoted by \((\widehat{x}^{P}_{t-1})^{+}\), and then derive \(\widehat{x}^{P}_{t}\) using \(z^{P}_{t}\) and \((\widehat{x}^{P}_{t-1})^{+}\). After player \(P\) observes the action \(u^{E}_{t-1}\) from player \(E\), player \(P\) updates: \[(\widehat{x}^{P}_{t-1})^{+}:=\,\mathbb{E}\Big{[}x_{t-1}\Big{|}\,\mathcal{H}^{ P}_{t-1}\cup\{u^{E}_{t-1}\}\Big{]}. 
\tag{2.26}\] Following the convention in filtering theory [17], we write: \[(\widehat{x}^{P}_{t-1})^{+}=\widehat{x}^{P}_{t-1}+J^{P}_{t-1}\Big{(}u^{E}_{t- 1}-F^{E}_{t-1}\,\widehat{x}^{P}_{t-1}\Big{)}, \tag{2.27}\] where \(J^{P}_{t-1}\) is a matrix to be determined to minimize \(\mathbb{E}[\|(x^{P}_{t-1})^{+}-x_{t-1}\|^{2}]\). To calculate \(J^{P}_{t-1}\), we have \[\mathbf{cov}\Big{(}x_{t-1}-(\widehat{x}^{P}_{t-1})^{+}\Big{)}\] \[= \mathbf{cov}\Big{(}x_{t-1}-\widehat{x}^{P}_{t-1}-J^{P}_{t-1}(u^{E }_{t-1}-F^{E}_{t}\widehat{x}^{P}_{t-1})\Big{)}\] \[= \mathbf{cov}\Big{(}-(I-J^{P}_{t-1}F^{E}_{t-1})e^{P}_{t-1}\,-\,J^{ P}_{t-1}F^{E}_{t-1}e^{E}_{t-1}\Big{)}\] \[\quad+(I-J^{P}_{t-1}F^{E}_{t-1})\widetilde{\Sigma}^{(P,E)}_{t-1}( J^{P}_{t-1}F^{E}_{t-1})^{\top}+(J^{P}_{t-1}F^{E}_{t-1})(\widetilde{\Sigma}^{(P,E)}_{t-1 })^{\top}(I-J^{P}_{t-1}F^{E}_{t-1})^{\top}\] \[= \widehat{\Sigma}^{P}_{t-1}-J^{P}_{t-1}F^{E}_{t-1}\widehat{\Sigma} ^{P}_{t-1}-\widehat{\Sigma}^{P}_{t-1}(F^{E}_{t-1})^{\top}(J^{P}_{t-1})^{\top} +J^{P}_{t-1}F^{E}_{t-1}\widehat{\Sigma}^{P}_{t-1}(F^{E}_{t-1})^{\top}(J^{P}_{t -1})^{\top}\] \[\quad+J^{P}_{t-1}F^{E}_{t-1}\widehat{\Sigma}^{E}_{t-1}(F^{E}_{t-1 })^{\top}(J^{P}_{t-1})^{\top}+\widetilde{\Sigma}^{(P,E)}_{t-1}(F^{E}_{t-1})^{ \top}(J^{P}_{t-1})^{\top}-J^{P}_{t-1}F^{E}_{t-1}\widetilde{\Sigma}^{(P,E)}_{t- 1}(F^{E}_{t-1})^{\top}(J^{P}_{t-1})^{\top}\] \[\quad+J^{P}_{t-1}F^{P*}_{t-1}(\widetilde{\Sigma}^{(P,E)}_{t-1})^{ \top}-J^{P}_{t-1}F^{E}_{t-1}(\widetilde{\Sigma}^{(P,E)}_{t-1})^{\top}(F^{E}_{ t-1})^{\top}(J^{P}_{t-1})^{\top}.\] Note that minimizing \(\mathbb{E}[\|(x^{P}_{t-1})^{+}-x_{t-1}\|^{2}]\) is equivalent to minimizing \(\mathrm{Tr}\left(\mathbf{cov}\Big{(}x_{t-1}-(\widehat{x}^{P}_{t-1})^{+}\Big{)}\right)\). Taking the derivative with respect to \(J^{P}_{t-1}\) and setting it to zero, we have \[\frac{\partial\,\mathrm{Tr}\left(\mathbf{cov}\Big{(}x_{t-1}-(\widehat{x}^{P}_{t -1})^{+}\Big{)}\right)}{\partial J^{P}_{t-1}}=-2F^{E}_{t-1}\widehat{\Sigma}^{P} _{t-1}+2F^{E}_{t-1}\widehat{\Sigma}^{(P,E)}_{t-1}(F^{E}_{t-1})^{\top}(J^{P}_{t -1})^{\top}+2F^{E}_{t-1}(\widetilde{\Sigma}^{(P,E)}_{t-1})^{\top}=0,\] which is equivalent to the following equation (since \(\widehat{\Sigma}^{(P,E)}_{t-1}\) is symmetric by its definition) \[\widehat{\Sigma}^{P}_{t-1}-\widetilde{\Sigma}^{(P,E)}_{t-1}=J^{P}_{t-1}F^{E}_{t- 1}\widehat{\Sigma}^{(P,E)}_{t-1}. \tag{2.28}\] When \(F^{E}_{t-1}\widehat{\Sigma}^{(P,E)}_{t-1}\widehat{\Sigma}^{(P,E)}_{t-1}(F^{E}_{ t-1})^{\top}\) is of rank \(k\) (which will be shown at the end of the proof), we have \[J^{P}_{t-1}=\Big{(}\widehat{\Sigma}^{P}_{t-1}-\widetilde{\Sigma}^{(P,E)}_{t-1} \Big{)}\widehat{\Sigma}^{(P,E)}_{t-1}(F^{E}_{t-1})^{\top}\Big{(}F^{E}_{t-1} \widehat{\Sigma}^{(P,E)}_{t-1}\widehat{\Sigma}^{(P,E)}_{t-1}(F^{E}_{t-1})^{ \top}\Big{)}^{-1}. \tag{2.29}\] Using the expression in (2.29), we have \[(\widehat{\Sigma}^{P}_{t-1})^{+} := \mathbf{cov}\Big{(}x_{t-1}-(\widehat{x}^{P}_{t-1})^{+}\Big{)}= \widehat{\Sigma}^{P}_{t-1}-(\widehat{\Sigma}^{P}_{t-1}-\widetilde{\Sigma}^{(P,E )}_{t-1})(\widehat{\Sigma}^{(P,E)}_{t-1})^{-1}\Big{(}\widehat{\Sigma}^{P}_{t-1}- \widetilde{\Sigma}^{(P,E)}_{t-1}\Big{)}^{\top}.\] Then we have the pre-estimate: \[(\widehat{x}^{P}_{t})^{-}=A_{t-1}(\widehat{x}^{P}_{t-1})^{+}+B^{P} _{t-1}u^{P}_{t-1}+B^{E}_{t-1}u^{E}_{t-1}. 
\tag{2.30}\] The post-estimate after observing the signal/measure \(z^{P}_{t}\) at time \(t\) is defined as: \[\widehat{x}^{P}_{t}=(\widehat{x}^{P}_{t})^{-}+K^{P}_{t}\Big{(}z^{ P}_{t}-H^{P}_{t}(\widehat{x}^{P}_{t})^{-}\Big{)}, \tag{2.31}\] with a variance \[(\widehat{\Sigma}^{P}_{t})^{-}:=\mathbb{E}[(\widehat{x}^{P}_{t}- x_{t})(\widehat{x}^{P}_{t}-x_{t})^{\top}]. \tag{2.32}\] In the same way as for the derivation of \(J^{P}_{t-1}\), we can show that the following choice of \(K^{P}_{t}\) minimizes the quantity \(\mathbb{E}[\|x_{t}-\widehat{x}^{P}_{t}\|^{2}]\): \[K^{P}_{t}=(\widehat{\Sigma}^{P}_{t-1})^{-}(H^{P}_{t})^{\top} \Big{[}H^{P}_{t}(\widehat{\Sigma}^{P}_{t-1})^{-}(H^{P}_{t})^{\top}+G^{P}\Big{]} ^{-1}. \tag{2.33}\] The corresponding covariance takes the form: \[\widehat{\Sigma}^{P}_{t}:=\mathbb{E}\Big{[}\left\|x_{t}-\widehat{ x}^{P}_{t}\right\|^{2}\Big{]}=(I-K^{P}_{t}H^{P}_{t})(\widehat{\Sigma}^{P}_{t-1})^ {-}. \tag{2.34}\] To update player \(P\)'s belief of player \(E\)'s state, define similarly to the case (2.18) when \(t=0\), \[x_{t}=\widehat{x}^{P}_{t}-e^{P}_{t}=\widehat{x}^{E}_{t}-e^{E}_{t}, \tag{2.35}\] where \(e^{P}_{t}\) and \(e^{E}_{t}\) are the estimation errors from players \(P\) and \(E\), respectively. Given that (2.35) is equivalent to the following: \[\widehat{x}^{P}_{t}=\widehat{x}^{E}_{t}+(e^{P}_{t}-e^{E}_{t}), \tag{2.36}\] Player \(P\)'s posterior distribution for the state estimate \(\widehat{x}^{E}_{t}\) of player \(E\) is \[\widehat{x}^{E}_{t}\sim\mathcal{N}(\widehat{x}^{P}_{t},\widehat{ \Sigma}^{(P,E)}_{t}) \tag{2.37}\] where the estimation error covariance matrix \(\widehat{\Sigma}^{(P,E)}_{t}\) is defined as \[\widehat{\Sigma}^{(P,E)}_{t} = \mathbb{E}\left[\Big{(}e^{E}_{t}-e^{P}_{t}\Big{)}\big{(}e^{E}_{t} -e^{P}_{t}\Big{)}^{\top}\right]=\widehat{\Sigma}^{P}_{t}+\widehat{\Sigma}^{E} _{t}-\widetilde{\Sigma}^{(P,E)}_{t}-\left(\widetilde{\Sigma}^{(P,E)}_{t} \right)^{\top}, \tag{2.38}\] in which \(\widehat{\Sigma}^{P}_{0}=W^{P}_{0}\), \(\widehat{\Sigma}^{E}_{0}=W^{E}_{0}\), \(\widehat{\Sigma}^{P}_{t}=\mathbb{E}[e^{P}_{t}(e^{P}_{t})^{\top}]\), \(\widehat{\Sigma}^{E}_{t}=\mathbb{E}[e^{E}_{t}(e^{E}_{t})^{\top}]\), and \(\widetilde{\Sigma}^{(P,E)}_{t}=\mathbb{E}[e^{P}_{t}(e^{E}_{t})^{\top}]\). 
We will see that \(\widetilde{\Sigma}^{(P,E)}_{t}\) satisfies a recursive linear matrix equation which is a Lyapunov equation: \[\widetilde{\Sigma}^{(P,E)}_{t}=\left(I-K^{P}_{t}H^{P}_{t}\right) \left(A_{t-1}\Delta^{(P,E)}_{t-1}A^{\top}_{t-1}+\Gamma_{t-1}W\Gamma^{\top}_{t- 1}\right)\left(I-K^{E}_{t}H^{E}_{t}\right)^{\top}, \tag{2.39}\] with \[\Delta^{(P,E)}_{t-1} = (I-J^{P}_{t-1}F^{E}_{t-1})\widetilde{\Sigma}^{(P,E)}_{t-1}(I-J^{E }_{t-1}F^{P}_{t-1})^{\top}+(I-J^{P}_{t-1}F^{E}_{t-1})\widehat{\Sigma}^{P}_{t- 1}(J^{E}_{t-1}F^{P}_{t-1})^{\top} \tag{2.40}\] \[+J^{P}_{t-1}F^{E}_{t-1}(\widetilde{\Sigma}^{(P,E)}_{t-1})^{\top} (J^{E}_{t-1}F^{P}_{t-1})^{\top}+J^{P}_{t-1}F^{E}_{t-1}\widehat{\Sigma}^{E}_{t- 1}(I-J^{E}_{t-1}F^{P}_{t-1})^{\top}.\] The equations (2.39) and (2.40) hold since given (2.21d)-(2.21h), we have \[\widehat{x}_{t}^{P}=A_{t-1}(\widehat{x}_{t-1}^{P})^{+}+B_{T-1}^{P}u_{t-1}^{P}+B_{ T-1}^{E}v_{t-1}^{E}+K_{t}^{P}H_{t}^{P}A_{t-1}\big{(}x_{t-1}-(\widehat{x}_{t-1}^{P} )^{+}\big{)}+K_{t}^{P}H_{t}^{P}\Gamma_{t-1}w_{t-1}+K_{t}^{P}w_{t}^{P}, \tag{2.41}\] and hence \[e_{t}^{P}=\widehat{x}_{t}^{P}-x_{t} = (I-K_{t}^{P}H_{t}^{P})A_{t-1}\big{(}(\widehat{x}_{t-1}^{P})^{+}-x_{ t-1}\big{)}+(K_{t}^{P}H_{t}^{P}-I)\Gamma_{t-1}w_{t-1}+K_{t}^{P}w_{t}^{P} \tag{2.42}\] \[= (I-K_{t}^{P}H_{t}^{P})A_{t-1}\left((I-J_{t-1}^{P}F_{t-1}^{E})e_{t- 1}^{P}+J_{t-1}^{P}F_{t-1}^{E}e_{t-1}^{E}\right)\] \[+(K_{t}^{P}H_{t}^{P}-I)\Gamma_{t-1}w_{t-1}+K_{t}^{P}w_{t}^{P}.\] Similarly, we have \[e_{t}^{E}=(I-K_{t}^{E}H_{t}^{E})A_{t-1}\left((I-J_{t-1}^{E}F_{t-1}^{P})e_{t-1} ^{E}+J_{t-1}^{E}F_{t-1}^{P}e_{t-1}^{P}\right)+(K_{t}^{E}H_{t}^{E}-I)\Gamma_{t- 1}w_{t-1}+K_{t}^{E}w_{t}^{E}. \tag{2.43}\] Calculating \(e_{t}^{P}(e_{t}^{E})^{\top}\) using (2.42) and (2.43) and taking the expectation lead to (2.39) and (2.40). Now we simplify (2.40) to obtain (2.21j). As for (2.28), we have \[\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}=J_{t-1}^ {E}F_{t-1}^{P}\widehat{\Sigma}_{t-1}^{(P,E)}. 
\tag{2.44}\] By substituting \(J_{t-1}^{E}F_{t-1}^{P}=(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^ {(P,E)})^{\top})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\) and \(J_{t-1}^{P}F_{t-1}^{E}=(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^ {(P,E)})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\) into (2.40) and by using the fact that \(J_{t-1}^{E}F_{t-1}^{P}+J_{t-1}^{P}F_{t-1}^{E}=I\), we can rewrite \(\Delta_{t-1}^{(P,E)}\) as: \[\Delta_{t-1}^{(P,E)} = J_{t-1}^{E}F_{t-1}^{P}\widetilde{\Sigma}_{t-1}^{(P,E)}(J_{t-1}^{ P}F_{t-1}^{E})^{\top}+J_{t-1}^{E}F_{t-1}^{P}\widehat{\Sigma}_{t-1}^{P}(J_{t-1}^{E}F _{t-1}^{P})^{\top} \tag{2.45}\] \[+J_{t-1}^{P}F_{t-1}^{E}(\widehat{\Sigma}_{t-1}^{(P,E)})^{\top}(J_ {t-1}^{E}F_{t-1}^{P})^{\top}+J_{t-1}^{P}F_{t-1}^{E}\widehat{\Sigma}_{t-1}^{E}( J_{t-1}^{P}F_{t-1}^{E})^{\top}\] \[= (\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{ \top})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\widetilde{\Sigma}_{t-1}^{(P,E)}( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{P}-\widetilde{ \Sigma}_{t-1}^{(P,E)})^{\top}\] (2.46) \[+(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{ \top})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\widehat{\Sigma}_{t-1}^{P}( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{ \Sigma}_{t-1}^{(P,E)})^{\top})^{\top}\] (2.47) \[+(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{ \Sigma}_{t-1}^{(P,E)})^{\top})^{\top}\] (2.48) \[+(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\widehat{\Sigma}_{t-1}^{E}(\widehat{ \Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1 }^{(P,E)})^{\top}. \tag{2.49}\] Given the fact that \(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}=\widehat{ \Sigma}_{t-1}^{(P,E)}-(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E )})\), we have \[(\ref{eq:2.46}) = \widetilde{\Sigma}_{t-1}^{(P,E)}(\widehat{\Sigma}_{t-1}^{(P,E)})^{ -1}(\widehat{\Sigma}_{t-1}^{P}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}) \tag{2.50}\] \[-(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}\widetilde{\Sigma}_{t-1}^{(P,E)}( \widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{P}-(\widetilde{ \Sigma}_{t-1}^{(P,E)})^{\top}). \tag{2.51}\] Similarly, \(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)}=\widehat{\Sigma}_{t-1 }^{(P,E)}-(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top})\) leads to the following relationship \[(\ref{eq:2.48}) = (\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-\widetilde{\Sigma}_{t-1}^{(P,E)}) \tag{2.53}\] \[-(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{ \top})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widetilde{\Sigma}_{t-1}^{(P,E)})^{ \top}(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}- \widetilde{\Sigma}_{t-1}^{(P,E)}).\] Combine (2.51) and (2.49), we have \[(\ref{eq:2.51})+(\ref{eq:2.49})=(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1 }^{(P,E)})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}- \widetilde{\Sigma}_{t-1}^{(P,E)})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}( \widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}. 
\tag{2.54}\] Combining (2.53) and (2.47), we have \[(\ref{eq:2.53})+(\ref{eq:2.47})=(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top})^{\top}. \tag{2.55}\] It is easy to check that \((2.54)+(2.55)=(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top}(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top})^{\top}.\) Combining with (2.50) and (2.52), we have \[\Delta_{t-1}^{(P,E)}=(\widehat{\Sigma}_{t-1}^{P}-\widetilde{\Sigma}_{t-1}^{(P,E)})(\widehat{\Sigma}_{t-1}^{(P,E)})^{-1}(\widehat{\Sigma}_{t-1}^{E}-(\widetilde{\Sigma}_{t-1}^{(P,E)})^{\top})^{\top}+\widetilde{\Sigma}_{t-1}^{(P,E)}. \tag{2.56}\] Finally we show that \(\widehat{\Sigma}_{t-1}^{(P,E)}\) is positive definite, which guarantees that \(F_{t-1}^{E}\widehat{\Sigma}_{t-1}^{(P,E)}\widehat{\Sigma}_{t-1}^{(P,E)}(F_{t-1}^{E})^{\top}\) is of rank \(k\). To see this, we have \[\widehat{\Sigma}_{t}^{(P,E)} = \mathbb{E}\left[\big{(}e_{t}^{E}-e_{t}^{P}\big{)}\big{(}e_{t}^{E}-e_{t}^{P}\big{)}^{\top}\right]=\mathbb{E}[s_{t-1}(s_{t-1})^{\top}]+K_{t}^{P}G^{P}(K_{t}^{P})^{\top}+K_{t}^{E}G^{E}(K_{t}^{E})^{\top},\] with \(s_{t-1}\) defined as \[s_{t-1} = (I-K_{t}^{E}H_{t}^{E})A_{t-1}\left((I-J_{t-1}^{E}F_{t-1}^{P})e_{t-1}^{E}+J_{t-1}^{E}F_{t-1}^{P}e_{t-1}^{P}\right)+(K_{t}^{E}H_{t}^{E}-I)\Gamma_{t-1}w_{t-1}\] \[-(I-K_{t}^{P}H_{t}^{P})A_{t-1}\left((I-J_{t-1}^{P}F_{t-1}^{E})e_{t-1}^{P}+J_{t-1}^{P}F_{t-1}^{E}e_{t-1}^{E}\right)-(K_{t}^{P}H_{t}^{P}-I)\Gamma_{t-1}w_{t-1}.\] It is easy to see that \(\mathbb{E}[s_{t-1}(s_{t-1})^{\top}]\) is positive semi-definite. \(K_{t}^{P}G^{P}(K_{t}^{P})^{\top}\) is positive definite since \(K_{t}^{P}\) has rank \(n\) and \(G^{P}\) is positive definite. Similarly, \(K_{t}^{E}G^{E}(K_{t}^{E})^{\top}\) is also positive definite.
### Conditional Expectation and Tower Property
In partially observable game settings, players face the difficult task of incrementally estimating unknown quantities through information filtering, and then using those estimates to make informed decisions. In the linear-quadratic framework, each player \(i\) needs to determine \[\mathbb{E}\left[x_{t}\,\Big{|}\;\mathcal{H}_{t}^{i}\right]\!,\;\text{and}\;\mathbb{E}\Big{[}x_{t}^{\top}O_{t}x_{t}\,|\,\mathcal{H}_{t}^{i}\Big{]}, \tag{2.57}\] for any given matrix \(O_{t}\in\mathbb{R}^{n\times n}\). This requires projecting the unknown quantity onto the space spanned by the information filtration \(\mathcal{H}_{t}^{i}\). Given that \(\mathcal{H}_{t}^{i}\) contains information on the opponent's action \(u_{t-1}^{j}\), the critical challenge boils down to how to utilize this information so that the conditional expectation can be calculated in a valid incremental form, which facilitates further analysis and renders the game amenable to solution by dynamic programming. Theorem 2.4 provides an explicit formula to calculate (2.57) in an incremental format: \[\mathbb{E}\left[x_{t}\,\Big{|}\;\mathcal{H}_{t}^{i}\right]=\widehat{x}_{t}^{i},\;\;\text{and}\;\mathbb{E}\Big{[}x_{t}^{\top}O_{t}x_{t}\,|\,\mathcal{H}_{t}^{i}\Big{]}=(\widehat{x}_{t}^{i})^{\top}O_{t}\widehat{x}_{t}^{i}+\text{Tr}\left(O_{t}\widehat{\Sigma}_{t}^{i}\right), \tag{2.58}\] with \(\widehat{x}_{t}^{i}\) and \(\widehat{\Sigma}_{t}^{i}\) following the explicit recursive formats in (2.21g) and (2.21h), respectively.
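To make the recursion concrete, the numpy sketch below implements one step of (2.21a)-(2.21k), in the form (2.29) derived in the proof, together with the conditional expectations (2.58). It covers the case \(\mathrm{rank}(F_t^P)=m<n\) and \(\mathrm{rank}(F_t^E)=k<n\), so that \((Y_t^j,y_t^j)=(F_t^j,u_t^j)\); model matrices are taken time-invariant, the indicated inverses are assumed to exist, and all function and variable names are ours rather than the authors'.

```python
import numpy as np

inv = np.linalg.inv

def covariance_step(SigP, SigE, SigTilde, F_P, F_E, A, Gamma, W, H_P, G_P, H_E, G_E):
    """Public covariance/gain bookkeeping for one step of Theorem 2.4.

    These quantities depend only on model parameters and the policy matrices,
    so both players can compute them; no private data enters this function.
    """
    SigPE = SigP + SigE - SigTilde - SigTilde.T                       # (2.21k) at time t-1
    # Gains for the correction using the opponent's observed action, cf. (2.21a)/(2.29).
    J_P = (SigP - SigTilde) @ SigPE @ F_E.T @ inv(F_E @ SigPE @ SigPE @ F_E.T)
    J_E = (SigE - SigTilde.T) @ SigPE @ F_P.T @ inv(F_P @ SigPE @ SigPE @ F_P.T)
    # Improved covariances after the correction, (2.21c).
    SigP_plus = SigP - (SigP - SigTilde) @ inv(SigPE) @ (SigP - SigTilde).T
    SigE_plus = SigE - (SigE - SigTilde.T) @ inv(SigPE) @ (SigE - SigTilde.T).T
    # Predict and measurement update, (2.21e), (2.21f), (2.21h).
    SigP_pre = A @ SigP_plus @ A.T + Gamma @ W @ Gamma.T
    SigE_pre = A @ SigE_plus @ A.T + Gamma @ W @ Gamma.T
    K_P = SigP_pre @ H_P.T @ inv(H_P @ SigP_pre @ H_P.T + G_P)
    K_E = SigE_pre @ H_E.T @ inv(H_E @ SigE_pre @ H_E.T + G_E)
    I = np.eye(SigP.shape[0])
    SigP_new = (I - K_P @ H_P) @ SigP_pre
    SigE_new = (I - K_E @ H_E) @ SigE_pre
    # Cross-covariance of the two players' estimation errors, (2.21i)-(2.21j).
    Delta = (SigP - SigTilde) @ inv(SigPE) @ (SigE - SigTilde.T).T + SigTilde
    SigTilde_new = (I - K_P @ H_P) @ (A @ Delta @ A.T + Gamma @ W @ Gamma.T) @ (I - K_E @ H_E).T
    return SigP_new, SigE_new, SigTilde_new, J_P, K_P, J_E, K_E

def mean_step_P(xP, u_P, u_E, z_P, J_P, K_P, F_E, A, B_P, B_E, H_P):
    """Player P's private mean update: (2.21b), (2.21d), (2.21g)."""
    xP_plus = xP + J_P @ (u_E - F_E @ xP)      # improved estimate from E's observed action
    xP_pre = A @ xP_plus + B_P @ u_P + B_E @ u_E
    return xP_pre + K_P @ (z_P - H_P @ xP_pre)

def quad_expectation(xhat, Sig, O):
    """Conditional expectation of x' O x given the player's information, as in (2.58)."""
    return xhat @ O @ xhat + np.trace(O @ Sig)
```

Because the covariance and gain recursions contain no realized data, `covariance_step` can be iterated offline by either player; only `mean_step_P` (and its mirror image for player \(E\)) touches the private signal and the opponent's realized action.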
Indeed, a similar setup was first studied in [13], in which zero-sum linear-quadratic dynamic games with partial observations and asymmetric information are considered. The players' initial state estimate and their measurements are private information, but each player is able to observe his opponent's past control inputs, so the players' past controls are shared information. However, when the conditional expectation \(\mathbb{E}[\cdot|\mathcal{H}_{t}^{i}]\) is calculated, the recursive format proposed in [13] follows the single-agent Bayes formula (see Equations (11)-(20) in [13] or similarly (2.13a)-(2.13e)) and does not utilize the observable information of the opponent's past controls to improve their state estimation, leading to an incorrect formula and hence the tower property fails to hold, let alone the DPP. To be mathematically more concrete, we do a sanity check to show that the tower property holds when using the recursive expression in (2.21a)-(2.21k). Namely, from player \(P\)'s perspective, we have \[\mathbb{E}\left[\,\mathbb{E}\left[x_{t}^{\top}Q_{t}^{P}x_{t}\Big{|}\,\mathcal{H }_{t}^{P}\right]\right|\mathcal{H}_{t-1}^{P}\right]=\mathbb{E}\left[x_{t}^{ \top}Q_{t}^{P}x_{t}\Big{|}\,\mathcal{H}_{t-1}^{P}\right] \tag{2.59}\] holds when using (2.21a)-(2.21k) to unwind the conditional expectations. To do so, we will first calculate both the LHS and the RHS of (2.59) using (2.21a)-(2.21k), and then match all the terms to prove that the LHS equals the RHS. Finally, we will show that the tower property fails to hold when using the formulas in Equations (11)-(20) of [13] to unwind the conditional expectations. Calculations using (2.21a)-(2.21k).For the LHS of (2.59), by (2.21g), \[\widehat{x}_{T}^{P} = \big{(}\widehat{x}_{T}^{P}\big{)}^{-}+K_{T}^{P}\left[z_{T}^{P}-H_{T} ^{P}\big{(}\widehat{x}_{T}^{P}\big{)}^{-}\right]\] \[= \big{(}A_{T-1}(\widehat{x}_{T-1}^{P})^{+}+B_{T-1}^{P}u_{T-1}^{P}+B _{T-1}^{E}u_{T-1}^{E}\big{)}+K_{T}^{P}w_{T}^{P}\] \[+K_{T}^{P}H_{T}^{P}\left[A_{T-1}\big{(}x_{T-1}-\big{(}\widehat{x}_ {T-1}^{P}\big{)}^{+}\big{)}+\Gamma_{T-1}w_{T-1}\right],\] where (2.60) holds by definition of \(z_{T}^{P}\) given in (2.3), and (2.21d). We will just consider the case when \(F_{t}^{P}\) has rank \(m<n\) and \(F_{t}^{E}\) has rank \(k<n\) for all \(t=0,1,\ldots,T-1\), as the other cases will follow the same logic. Define \(\Pi_{T-1}^{P}:=\big{(}\widehat{\Sigma}_{T-1}^{P}-\widetilde{\Sigma}_{T-1}^{( P,E)}\big{)}\big{(}\widehat{\Sigma}_{T-1}^{(P,E)}\big{)}^{-1}\), then (2.28) becomes \[\Pi_{T-1}^{P}=J_{T-1}^{P}F_{T-1}^{E}. \tag{2.61}\] We can then rewrite (2.21b) as \[(\widehat{x}_{T-1}^{P})^{+}=(I-\Pi_{T-1}^{P})\widehat{x}_{T-1}^{P}+\Pi_{T-1}^ {P}\widehat{x}_{T-1}^{E}=(I-\Pi_{T-1}^{P})\widehat{x}_{T-1}^{P}+\Pi_{T-1}^{P} (\widehat{x}_{T-1}^{P}-e_{T-1}^{P}+e_{T-1}^{E}).\] Using this equation in (2.60) we obtain \[\widehat{x}_{T}^{P}=(A_{T-1}+B_{T-1}^{P}F_{T-1}^{P}+B_{T-1}^{E}F_{T-1}^{E}) \widehat{x}_{T-1}^{P}+L_{1}e_{T-1}^{E}+L_{2}e_{T-1}^{P}+K_{T}^{P}w_{T}^{P}+K_ {T}^{P}H_{T}^{P}\Gamma_{T-1}w_{T-1} \tag{2.62}\] with \[L_{1} := A_{T-1}\Pi_{T-1}^{P}+B_{T-1}^{E}F_{T-1}^{E}-K_{T}^{P}H_{T}^{P}A _{T-1}\Pi_{T-1}^{P}, \tag{2.63}\] \[L_{2} := -A_{T-1}\Pi_{T-1}^{P}-B_{T-1}^{E}F_{T-1}^{E}-K_{T}^{P}H_{T}^{P}A _{T-1}(I-\Pi_{T-1}^{P}). 
\tag{2.64}\] Then substituting (2.62) into the LHS of (2.59), we have \[\mathbb{E}\left[\left.\mathbb{E}\left[\left.x_{T}^{\top}Q_{T}^{P }x_{T}\right|\mathcal{H}_{T}^{P}\right]\right|\mathcal{H}_{T-1}^{P}\right]= \mathbb{E}\left[\left.(\widehat{x}_{T}^{P})^{\top}Q_{T}^{P}\widehat{x}_{T}^{P} \right|\mathcal{H}_{T-1}^{P}\right]+\mathrm{Tr}(Q_{T}^{P}\widehat{\Sigma}_{T}^ {P}) \tag{2.65}\] \[= (\widehat{x}_{T-1}^{P})^{\top}\left(A_{T-1}+B_{T-1}^{P}F_{T-1}^{ P}+B_{T-1}^{E}F_{T-1}^{E}\right)^{\top}Q_{T}^{P}\left(A_{T-1}+B_{T-1}^{P}F_{T-1}^{P }+B_{T-1}^{E}F_{T-1}^{E}\right)\widehat{x}_{T-1}^{P}\] \[+\,\mathrm{Tr}(L_{1}^{\top}Q_{T}^{P}L_{1}\widehat{\Sigma}_{T-1}^ {E})+\mathrm{Tr}(L_{2}^{\top}Q_{T}^{P}L_{2}\widehat{\Sigma}_{T-1}^{P})+2\, \mathrm{Tr}(L_{1}^{\top}Q_{T}^{P}L_{2}\widehat{\Sigma}_{T-1}^{(P,E)})\] \[+\,\mathrm{Tr}((K_{T}^{P})^{\top}Q_{T}^{P}K_{T}^{P}G^{P})+\mathrm{ Tr}(\Gamma_{T-1}^{T}(H_{T}^{P})^{\top}(K_{T}^{P})^{\top}Q_{T}^{P}K_{T}^{P}H_{T}^{P} \Gamma_{T-1}W)+\mathrm{Tr}(Q_{T}^{P}\widehat{\Sigma}_{T}^{P}).\] For the RHS of (2.59), we have by expanding \(x_{T}\) directly, \[\mathbb{E}\left[\left.x_{T}^{\top}Q_{T}^{P}x_{T}\right|\mathcal{ H}_{T-1}^{P}\right]\] \[= (\widehat{x}_{T-1}^{P})^{\top}\left(A_{T-1}+B_{T-1}^{P}F_{T-1}^{ P}+B_{T-1}^{E}F_{T-1}^{E}\right)^{\top}Q_{T}^{P}\left(A_{T-1}+B_{T-1}^{P}F_{T-1}^{P}+B _{T-1}^{E}F_{T-1}^{E}\right)\widehat{x}_{T-1}^{P}\] \[+\,\mathrm{Tr}(\Gamma_{T-1}^{\top}Q_{T}\Gamma_{T-1}W)+\mathrm{Tr} \left((A_{T-1}+B_{T-1}^{E}F_{T-1}^{E})^{\top}Q_{T}^{P}(A_{T-1}+B_{T-1}^{E}F_{T- 1}^{E})\widehat{\Sigma}_{T-1}^{P}\right)\] \[+\,\mathrm{Tr}\left((F_{T-1}^{E})^{\top}(B_{T-1}^{E})^{\top}Q_{T} ^{P}B_{T-1}^{E}F_{T-1}^{E}\widehat{\Sigma}_{T-1}^{E}\right)-2\,\mathrm{Tr} \left((A_{T-1}+B_{T-1}^{E}F_{T-1}^{E})^{\top}Q_{T}^{P}B_{T-1}^{E}F_{T-1}^{E} \widehat{\Sigma}_{T-1}^{(E,P)}\right).\] The proof that (2.65) is equivalent to (2.66) is deferred to Appendix A. Calculations using the result in [13].In [13], the authors used the following recursive formulas which are essentially the same as the single agent case (see Theorem 2.3): \[x_{t}\sim\mathcal{N}(\widehat{x}_{t}^{i},\widehat{\Sigma}_{t}^{i}), \tag{2.67}\] where \(\widehat{x}^{i}_{t}\) and \(\widehat{\Sigma}^{i}_{t}\) are updated according to (2.13a)-(2.13e) Now we use the recursive formula listed in (2.13a)-(2.13e) to calculate the LHS and RHS of (2.59). 
For the LHS, by direct calculation, \[\mathbb{E}\left[(\widehat{x}^{P}_{T})^{\top}Q^{P}_{T}\widehat{x}^{P }_{T}\right]\left|\mathcal{H}^{P}_{T-1}\right]+\mathrm{Tr}(Q^{P}_{T}\widehat{ \Sigma}^{P}_{T}) \tag{2.68}\] \[= (\widehat{x}^{P}_{T-1})^{\top}A^{\top}_{T-1}Q^{P}_{T}A_{T-1} \widehat{x}^{P}_{T-1}+(\widehat{x}^{P}_{T-1})^{\top}(F^{P}_{T-1})^{\top}(B^{P} _{T-1})^{\top}Q^{P}_{T}B^{P}_{T-1}F^{P}_{T-1}\widehat{x}^{P}_{T-1}\] \[+(F^{E}_{T-1}\widehat{x}^{P}_{T-1})^{\top}((B^{E}_{T-1})^{\top}Q^ {P}_{T}B^{E}_{T-1})F^{E}_{T-1}\widehat{x}^{P}_{T-1}+\mathrm{Tr}((F^{E}_{T-1})^ {\top}(B^{E}_{T-1})^{\top}Q^{P}_{T}B^{E}_{T-1}F^{E}_{T-1}\widehat{\Sigma}^{(P, E)}_{T-1})\] \[+2(F^{P}_{T-1}\widehat{x}^{P}_{T-1})^{\top}(B^{P}_{T-1})^{\top}Q ^{P}_{T}A_{T-1}\widehat{x}^{P}_{T-1}+2(F^{P}_{T-1}\widehat{x}^{P}_{T-1})^{\top }(B^{P}_{T-1})^{\top}Q^{P}_{T}B^{E}_{T-1}F^{E}_{T-1}\widehat{x}^{P}_{T-1}\] \[+2(F^{E}_{T-1}\widehat{x}^{E}_{T-1})^{\top}(B^{E}_{T-1})^{\top}Q ^{P}_{T}A_{T-1}\widehat{x}^{P}_{T-1}-2\,\mathrm{Tr}((B^{E}_{T-1}F^{E}_{T-1})^ {\top}Q^{P}_{T}K^{P}_{T}H^{P}_{T}A_{T-1}\widetilde{\Sigma}^{(P,E)}_{T-1})\] \[+2\,\mathrm{Tr}((B^{E}_{T-1}F^{E}_{T-1})^{\top}Q^{P}_{T}K^{P}_{T} H^{P}_{T}A_{T-1}\widehat{\Sigma}^{P}_{T-1})+\mathrm{Tr}\left(\Gamma^{\top}_{T-1}(H^{P} _{T})^{\top}(K^{P}_{T})^{\top}Q^{P}_{T}K^{P}_{T}H^{P}_{T}\Gamma_{T-1}W\right)\] \[+\,\mathrm{Tr}\left((K^{P}_{T})^{\top}Q^{P}_{T}K^{P}_{T}G^{P} \right)+\mathrm{Tr}\left(A^{\top}_{T-1}(H^{P}_{T})^{\top}(K^{P}_{T})^{\top}Q^ {P}_{T}K^{P}_{T}H^{P}_{T}A_{T-1}\widehat{\Sigma}^{P}_{T-1}\right)+\mathrm{Tr}( Q^{P}_{T}\widehat{\Sigma}^{P}_{T}).\] On the other hand, \[\mathbb{E}\left[x^{\top}_{T}Q^{P}_{T}x_{T}\right|\mathcal{H}^{P}_ {T-1}\right] \tag{2.69}\] \[= (\widehat{x}^{P}_{T-1})^{\top}A^{\top}_{T-1}Q^{P}_{T}A_{T-1} \widehat{x}^{P}_{T-1}+(F^{P}_{T-1}\widehat{x}^{P}_{T-1})^{\top}(B^{P}_{T-1})^ {\top}Q^{P}_{T}B^{P}_{T-1}(F^{P}_{T-1}\widehat{x}^{P}_{T-1})\] \[+(F^{E}_{T-1}\widehat{x}^{P}_{T-1})^{\top}((B^{E}_{T-1})^{\top}Q ^{P}_{T}B^{E}_{T-1})F^{E}_{T-1}\widehat{x}^{P}_{T-1}+\mathrm{Tr}(((B^{E}_{T-1}F ^{E}_{T-1})^{\top}Q^{P}_{T}B^{E}_{T-1}F^{E}_{T-1})\widehat{\Sigma}^{(P,E)}_{T-1})\] \[+2u^{\top}_{T-1}(B^{P}_{T-1})^{\top}Q^{P}_{T}A_{T-1}\widehat{x}^{ P}_{T-1}+2u^{\top}_{T-1}(B^{P}_{T-1})^{\top}Q^{P}_{T}B^{E}_{T-1}F^{E}_{T-1} \widehat{x}^{P}_{T-1}\] \[+2(F^{E}_{T-1}\widehat{x}^{P}_{T-1})^{\top}(B^{E}_{T-1})^{\top}Q ^{P}_{T}A_{T-1}(\widehat{x}^{P}_{T-1})+2\,\mathrm{Tr}((B^{E}_{T-1}F^{E}_{T-1}) ^{\top}Q^{P}_{T}A_{T-1}\widehat{\Sigma}^{P}_{T-1})\] \[-2\,\mathrm{Tr}((B^{E}_{T-1}F^{E}_{T-1})^{\top}Q^{P}_{T}A_{T-1} \widetilde{\Sigma}^{(P,E)}_{T-1})+\mathrm{Tr}(A^{\top}_{T-1}Q^{P}_{T}A_{T-1} \widehat{\Sigma}^{P}_{T-1})+\mathrm{Tr}\left(\Gamma^{\top}_{T-1}Q^{P}_{T}\Gamma _{T-1}W\right).\] By routine calculations similar to those used for Step 3 in Appendix A, we see that (2.68) and (2.69) differ from each other by: \[2\,\mathrm{Tr}\left((F^{E}_{t})^{\top}(B^{E}_{t})^{\top}Q^{P}_{t+1}(K^{P}_{t+1 }H^{P}_{t+1}-I)A_{t}\left(\widetilde{\Sigma}^{(E,P)}_{t}-\widehat{\Sigma}^{P}_{ t}\right)\right). \tag{2.70}\] Hence the recursive formula (2.13a)-(2.13e) adopted in [13] does not lead to the correct conditional expectation, let alone the tower property and DPP. ## 3 Equilibrium Solution Now that we have the information corrections in the updating scheme, we will use these to discuss the DPP and the Nash equilibrium in this section. 
### Dynamic Programming Principle Although neither player has access to the true state, and their controls depend on their state estimates (which introduces extra correlated randomness), we are still able to derive the individual DPP for the two-player general-sum linear-quadratic Gaussian game under a fixed linear and Markovian strategy from the opponent. This is because the sufficient statistics derived in Theorem 2.4 lead to the valid tower property (2.59), which equips us with sufficient tools to prove the DPP. To start, denote the value function of player \(i\) (\(i=P,E\)), under a fixed strategy \(F^{j}:=\{F^{j}_{t}\}_{t=0}^{T-1}\) from player \(j\) (\(j\neq i\)) and at any given time \(0\leq t\leq T-1\), as \[V^{i}_{t}(\widehat{x}^{i}_{t};F^{j})=\min_{\{u^{i}_{s}\}_{s=t}^{T-1}}\mathbb{E}\left[x^{\top}_{T}Q^{i}_{T}x_{T}+\sum_{s=t}^{T-1}\left((u^{i}_{s})^{\top}R^{i}_{s}u^{i}_{s}+x^{\top}_{s}Q^{i}_{s}x_{s}\right)\Bigg{|}\ \mathcal{H}^{i}_{t}\right] \tag{3.1}\]
Finally (3.6) leads to the DPP (3.3) by the definition of \(V_{t+1}^{P}\). ### Nash Equilibrium In this section, we will show that the Nash equilibrium strategy for the game (2.1)-(2.2)-(2.3)-(2.5)-(2.7) is related to the solution of a coupled Riccati system. Assumption 3.2 is the existence and uniqueness of this solution and we provide a sufficient condition for this assumption in Remark 3.3. **Assumption 3.2**.: _There exists a unique solution set \(F^{P*}:=\{F_{t}^{P*}\}_{t=0}^{T-1}\) with \(F_{t}^{P*}\in\mathcal{A}^{P}\) and \(F^{E*}:=\{F_{t}^{E*}\}_{t=0}^{T-1}\) with \(F_{t}^{E*}\in\mathcal{A}^{E}\) to the following set of linear matrix equations:_ \[F_{t}^{P*} = -(R_{t}^{P}+(B_{t}^{P})^{\top}U_{t+1}^{P*}B_{t}^{P})^{-1}\big{(}(B_ {t}^{P})^{\top}U_{t+1}^{P*}(A_{t}+B_{t}^{E}F_{t}^{E*})\big{)}, \tag{3.7}\] \[F_{t}^{E*} = -(R_{t}^{E}+(B_{t}^{E})^{\top}U_{t+1}^{E*}B_{t}^{E})^{-1}\big{(}(B_{t }^{E})^{\top}U_{t+1}^{E*}(A_{t}+B_{t}^{P}F_{t}^{P*})\big{)}, \tag{3.8}\] _where \(\{U_{t}^{P*}\}_{t=0}^{T}\) and \(\{U_{t}^{E*}\}_{t=0}^{T}\) are obtained recursively backwards from_ \[U_{t}^{P*} = Q_{t}^{P}+(F_{t}^{P*})^{\top}R_{t}^{P}F_{t}^{P*}+\big{(}A_{t}+B_ {t}^{P}F_{t}^{P*}+B_{t}^{E}F_{t}^{E*}\big{)}^{\top}U_{t+1}^{P*}\big{(}A_{t}+B_ {t}^{P}F_{t}^{P*}+B_{t}^{E}F_{t}^{E*}\big{)}, \tag{3.9}\] \[U_{t}^{E*} = Q_{t}^{E}+(F_{t}^{E*})^{\top}R_{t}^{E}F_{t}^{E*}+\big{(}A_{t}+B_ {t}^{P}F_{t}^{P*}+B_{t}^{E}F_{t}^{E*}\big{)}^{\top}U_{t+1}^{E*}\big{(}A_{t}+B_ {t}^{P}F_{t}^{P*}+B_{t}^{E}F_{t}^{E*}\big{)}. \tag{3.10}\] _with terminal conditions \(U_{T}^{i*}=Q_{T}^{i}\) for \(i=P,E\)._ **Remark 3.3**.: A sufficient condition for the unique solvability of (3.9)-(3.10) is the invertibility of the block matrix \(\Phi_{t}\), \(t=0,1,\cdots,T-1\), with the \(ii\)-th block given by \(R_{t}^{i}+(B_{t}^{i})^{\top}U_{t+1}^{i*}B_{t}^{i}\) and the \(ij\)-th block given by \((B_{t}^{i})^{\top}U_{t+1}^{i*}B_{t}^{j}\), where \(i,j=P,E\) and \(j\neq i\). See Remark 6.5 in [2]. Using the DPP formula in Theorem 3.1, we have the following result for the Nash equilibrium strategy and the corresponding value function for the game (2.1)-(2.2)-(2.3)-(2.5)-(2.7). **Theorem 3.4**.: _Suppose Assumptions 2.1 and 3.2 hold. We also assume that both players are applying linear strategies. Then the unique Nash equilibrium policy can be expressed as for \(i=P,E\)_ \[u_{t}^{i*}(\widehat{x}_{t}^{i})=F_{t}^{i*}\widehat{x}_{t}^{i}, \tag{3.11}\] _with \(F_{t}^{P*}\) and \(F_{t}^{E*}\) given in (3.7) and (3.8). The corresponding optimal value function of player \(i\) is quadratic \((0\leq t\leq T)\):_ \[V_{t}^{i}(\widehat{x}_{t}^{i};F^{j*})=(\widehat{x}_{t}^{i})^{\top}U_{t}^{i*} \widehat{x}_{t}^{i}+c_{t}^{i*}, \tag{3.12}\] _where \(j=P,E\) and \(j\neq i\), the matrices \(U_{t}^{P*},U_{t}^{E*}\in\mathbb{R}^{n\times n}\) are given in (3.9) and (3.10), and the scalars \(c_{t}^{P*},c_{t}^{E*}\in\mathbb{R}\) are given by_ \[c_{t}^{i*} = c_{t+1}^{i*}+\mathrm{Tr}\left(Q_{t}^{i}\widehat{\Sigma}_{t}^{i} \right)-\mathrm{Tr}\left(U_{t+1}^{i*}\widehat{\Sigma}_{t+1}^{i}\right)+ \mathrm{Tr}\left((A_{t}+B_{t}^{j}F_{t}^{j*})^{\top}U_{t+1}^{i*}(A_{t}+B_{t}^{ j}F_{t}^{j*})\widehat{\Sigma}_{t}^{i}\right)\] \[+\mathrm{Tr}\left((F_{t}^{j*})^{\top}(B_{t}^{j})^{\top}U_{t+1}^{i* }B_{t}^{j}F_{t}^{j*}\widehat{\Sigma}_{t}^{j}\right)-2\,\mathrm{Tr}\left((A_{t }+B_{t}^{j}F_{t}^{j*})^{\top}U_{t+1}^{i*}B_{t}^{j}F_{t}^{j*}\widehat{\Sigma}_{ t}^{(j,i)}\right)\] \[+\mathrm{Tr}(\Gamma_{t}^{\top}U_{t+1}^{i*}\Gamma_{t}W). 
\tag{3.13}\] _The terminal condition for player \(i\) is \(c_{T}^{i*}=\mathrm{Tr}\left(Q_{T}^{i}\widehat{\Sigma}_{T}^{i}\right)\)._ **Remark 3.5** (Discussion of linear policies).: 1. In the partially observable setting, it is widely recognized that the existence of a Nash equilibrium is not guaranteed if a more general class of policies is considered, as players can mislead their opponents by disclosing false intentions [5, 20]. 2. We note that the optimal policies \(F_{t}^{P*}\) and \(F_{t}^{E*}\) given in (3.7) and (3.8), and the Riccati equations given in (3.9) and (3.10), are the same as the optimal policies and Riccati equations in the case of full observation ([2, Corollary 6.4]). However, the linear-quadratic Gaussian game under partial observation (defined in (2.1)-(2.2)-(2.3)-(2.5)-(2.7)) differs from the linear-quadratic game with _full information_ in [2, Corollary 6.4] in the sense that the Nash equilibrium strategy is linear in the _state estimate_ rather than the _true state_, and the scalars \(c_{t}^{P*}\) and \(c_{t}^{E*}\) in the value function involve more terms due to the errors in state estimation. Proof.: We prove the theorem by backward induction. We take the perspective of player \(P\) and let player \(E\) use the linear strategy \(F^{E*}=\{F_{t}^{E*}\}_{t=0}^{T-1}\) defined in (3.8). At time \(T\), (3.12) holds by the terminal condition given in (3.2). At time \(T-1\), by Theorem 3.1, we have the DPP for player \(P\): \[V_{T-1}^{P}(\widehat{x}_{T-1}^{P};F^{E*})=\min_{u_{T-1}^{P}}\mathbb{E}\left[x_ {T-1}^{\top}Q_{T-1}^{P}x_{T-1}+(u_{T-1}^{P})^{\top}R_{T-1}^{P}u_{T-1}^{P}+V_{T} ^{P}\left(\widehat{x}_{T}^{P};F^{E*}\right)\right]\mathcal{H}_{T-1}^{P}\right],\] and by (2.62), \[\widehat{x}_{T}^{P}=(A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*})\widehat{x}_{T-1}^{P}+B_{T-1} ^{P}u_{T-1}^{P}+L_{T-1}^{1}e_{T-1}^{E}+L_{2}e_{T-1}^{P}+K_{T}^{P}w_{T}^{P}+K_{T}^ {P}H_{T}^{P}\Gamma_{T-1}w_{T-1},\] with \(L_{T-1}^{1}\) and \(L_{T-1}^{2}\) defined as \(L_{T-1}^{1}=A_{T-1}\Pi_{T-1}^{P}+B_{T-1}^{E}F_{T-1}^{E*}-K_{T}^{P}H_{T}^{P}A_{T- 1}\Pi_{T-1}^{P}\) and \(L_{T-1}^{2}=-A_{T-1}\Pi_{T-1}^{P}-B_{T-1}^{E}F_{T-1}^{*}-K_{T}^{P}H_{T}^{P}A_{T -1}(I-\Pi_{T-1}^{P})\), where \(\Pi_{T-1}^{P}\) is defined as \(\Pi_{T-1}^{P}=\big{(}\widehat{\Sigma}_{T-1}^{P}-\widetilde{\Sigma}_{T-1}^{(P,E )}\big{)}\big{(}\widehat{\Sigma}_{T-1}^{(P,E)}\big{)}^{-1}\). Hence \[V_{T-1}^{P}(\widehat{x}_{T-1}^{P};F^{E*}) = \min_{u_{T-1}^{P}}\left\{(u_{T-1}^{P})^{\top}R_{T-1}^{P}u_{T-1}^{ P}+(\widehat{x}_{T-1}^{P})^{\top}Q_{T}^{P}\widehat{x}_{T-1}^{P}+\mbox{Tr}\left(Q_{T-1}^ {P}\widehat{\Sigma}_{T-1}^{P}\right)\right. 
\tag{3.14}\] \[\left.+\mathbb{E}\left[V_{T}^{P}((A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*} )\widehat{x}_{T-1}^{P}+B_{T-1}^{P}u_{T-1}^{P}+L_{T-1}^{1}e_{T-1}^{E}+L_{T-1}^{ 2}e_{T-1}^{P}\right.\right.\] \[\left.\left.+K_{T}^{P}w_{T}^{P}+K_{T}^{P}H_{T}^{P}\Gamma_{T-1}w_{T -1};F^{E*})\big{|}\,\mathcal{H}_{T-1}^{P}\right]\right\},\] Since \(V_{T}^{P}(\widehat{x}_{T}^{P})=(\widehat{x}_{T}^{P})^{\top}Q_{T}^{P}\widehat{ x}_{T}^{P}+\mbox{Tr}(Q_{T}^{P}\widehat{\Sigma}_{T}^{P})\), we have \[V_{T-1}^{P}(\widehat{x}_{T-1}^{P};F^{E*})=\min_{u_{T-1}^{P}} \left\{(u_{T-1}^{P})^{\top}R_{T-1}^{P}u_{T-1}^{P}+(\widehat{x}_{T-1}^{P})^{ \top}Q_{T-1}^{P}\widehat{x}_{T-1}^{P}+\mbox{Tr}\left(Q_{T-1}^{P}\widehat{ \Sigma}_{T-1}^{P}\right)\right.\] \[\left.+\,\mbox{Tr}\left(Q_{T}^{P}\widehat{\Sigma}_{T}^{P}\right)+ \mathbb{E}\left[((A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*})\widehat{x}_{T-1}^{P}+B_{T- 1}^{P}u_{T-1}^{P}+L_{T-1}^{1}e_{T-1}^{E}+L_{T-1}^{2}e_{T-1}^{P}\right.\right.\] \[\left.\left.+K_{T}^{P}w_{T}^{P}+K_{T}^{P}H_{T}^{P}\Gamma_{T-1}w_{ T-1}\right)^{\top}Q_{T}^{P}\big{(}(A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*})\widehat{x}_{T-1}^{P }+B_{T-1}^{P}u_{T-1}^{P}\right.\] \[\left.\left.+L_{T-1}^{1}e_{T-1}^{E}+L_{T-1}^{2}e_{T-1}^{P}+K_{T}^ {P}w_{T}^{P}+K_{T}^{P}H_{T}^{P}\Gamma_{T-1}w_{T-1}\right)\big{|}\,\mathcal{H}_{ T-1}^{P}\big{]}\right\}. \tag{3.15}\] Expanding terms in the expectation, (3.15) becomes \[V_{T-1}^{P}(\widehat{x}_{T-1}^{P};F^{E*}) \tag{3.16}\] \[= \min_{u_{T-1}^{P}}\left\{(u_{T-1}^{P})^{\top}(R_{T-1}^{P}+(B_{T-1 }^{P})^{\top}Q_{T}^{P}B_{T-1}^{P})u_{T-1}^{P}+2(\widehat{x}_{T-1}^{P})^{\top} \big{(}A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*})^{\top}Q_{T}^{P}B_{T-1}^{P}u_{T-1}^{P}\right\}\] \[+(\widehat{x}_{T-1}^{P})^{\top}\Big{(}Q_{T-1}^{P}+\big{(}A_{T-1}+B _{T-1}^{E}F_{T-1}^{E*}\big{)}^{\top}Q_{T}^{P}\big{(}A_{T-1}+B_{T-1}^{E}F_{T-1}^ {E*}\big{)}\Big{)}\widehat{x}_{T-1}^{P}+\mbox{Tr}\left(Q_{T}^{P}\widehat{ \Sigma}_{T}^{P}\right)\] \[+\,\mbox{Tr}\left(Q_{T-1}^{P}\widehat{\Sigma}_{T-1}^{P}\right)+ \mbox{Tr}((L_{T-1}^{1})^{\top}Q_{T}^{P}L_{T-1}^{1}\widehat{\Sigma}_{T-1}^{E})+ \mbox{Tr}((L_{T-1}^{2})^{\top}Q_{T}^{P}L_{T-1}^{2}\widehat{\Sigma}_{T-1}^{P})\] \[+2\,\mbox{Tr}((L_{T-1}^{1})^{\top}Q_{T}^{P}L_{T-1}^{2}\widetilde{ \Sigma}_{T-1}^{(P,E)})+\mbox{Tr}\left(\Gamma_{T-1}^{\top}(H_{T}^{P})^{\top}(K_{T }^{P})^{\top}Q_{T}^{P}K_{T}^{P}H_{T}^{P}\Gamma_{T-1}W\right)\] \[+\,\mbox{Tr}\left((K_{T}^{P})^{\top}Q_{T}^{P}K_{T}^{P}G^{P}\right).\] We note that all the constant terms are independent of \(u_{T-1}^{P}\), since \(\widehat{\Sigma}_{T}^{P}\), \(\widehat{\Sigma}_{T-1}^{(P,E)}\), and \(\widehat{\Sigma}_{T-1}^{P}\) are independent of the policy \(F_{T-1}^{P}\). Thus these constant terms will not be involved in the minimization problem. 
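Only the terms of (3.16) that involve \(u_{T-1}^{P}\) matter for the minimization, and they form a strictly convex quadratic in \(u_{T-1}^{P}\). As a standalone sanity check of this step (illustrative random matrices; not the paper's model or code), the sketch below builds the same quadratic structure \(u^{\top}(R+B^{\top}UB)u+2\widehat{x}^{\top}(A+B^{E}F^{E})^{\top}UB\,u\) and confirms numerically that its minimizer is the linear feedback \(-(R+B^{\top}UB)^{-1}B^{\top}U(A+B^{E}F^{E})\widehat{x}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4, 2, 3      # illustrative dimensions: state, P's control, E's control

# Random problem data standing in for the quantities appearing in (3.16).
A  = rng.normal(size=(n, n))
BP = rng.normal(size=(n, m))
BE = rng.normal(size=(n, k))
FE = rng.normal(size=(k, n))      # opponent's fixed linear gain
U  = np.eye(n)                    # stands in for Q_T^P (positive semi-definite)
R  = np.eye(m)                    # stands in for R_{T-1}^P (positive definite)
xh = rng.normal(size=n)           # state estimate \hat{x}_{T-1}^P

# u-dependent part of (3.16): u' (R + BP' U BP) u + 2 xh' (A + BE FE)' U BP u.
M = R + BP.T @ U @ BP
b = BP.T @ U @ (A + BE @ FE) @ xh

# First-order condition: u* = -M^{-1} b, i.e. u* = F^{P*} xh with
# F^{P*} = -(R + BP' U BP)^{-1} BP' U (A + BE FE), the same shape as (3.7).
u_star = -np.linalg.solve(M, b)
F_star = -np.linalg.solve(M, BP.T @ U @ (A + BE @ FE))
assert np.allclose(u_star, F_star @ xh)

# Numerical check: u_star is not beaten by small random perturbations.
cost = lambda u: u @ M @ u + 2 * b @ u
assert all(cost(u_star) <= cost(u_star + 1e-3 * rng.normal(size=m)) for _ in range(100))
print("F^{P*} =", F_star)
```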
Applying the first order condition to the minimization part in (3.16) leads to \[u_{T-1}^{P*}=-(R_{T-1}^{P}+(B_{T-1}^{P})^{\top}Q_{T}^{P}B_{T-1}^{P})^{-1}(B_{T-1 }^{P})^{\top}Q_{T}^{P}(A_{T-1}+B_{T-1}^{E}F_{T-1}^{E*})\widehat{x}_{T-1}^{P}=F_{ T-1}^{P*}\widehat{x}_{T-1}^{P}.\] Similarly, we can derive the optimal policy of the player \(E\) when fixing player P's strategy \(F_{T-1}^{P*}\): \[u_{T-1}^{E*}=-(R_{T-1}^{E}+(B_{T-1}^{E})^{\top}Q_{T}^{E}B_{T-1}^{E})^{-1}(B_{T-1}^{E})^{ \top}Q_{T}^{E}(A_{T-1}+B_{T-1}^{P}F_{T-1}^{P*})\widehat{x}_{T-1}^{E}=F_{T-1}^{E*} \widehat{x}_{T-1}^{E}.\] Substituting \(u_{T-1}^{P*}=F_{T-1}^{P*}\widehat{x}_{T-1}^{P}\) into (3.16) we obtain the optimal value function given as \[V_{T-1}^{P}(\widehat{x}_{T-1}^{P};F^{E*}) = (\widehat{x}_{T-1}^{P})^{\top}\Big{(}Q_{T-1}^{P}+(F_{T-1}^{P*})^{ \top}R_{T-1}^{P}F_{T-1}^{P*}+\big{(}A_{T-1}+B_{T-1}^{P}F_{T-1}^{P*}+B_{T-1}^{E}F _{T-1}^{E*}\big{)}^{\top}\cdot\] \[Q_{T}^{P}\big{(}A_{T-1}+B_{T-1}^{P}F_{T-1}^{P*}+B_{T-1}^{E}F_{T \[+2\,{\rm Tr}\big{(}(L^{1}_{T-1})^{\top}Q^{P}_{T}L^{2}_{T-1}\widetilde{ \Sigma}^{(P,E)}_{T-1}\big{)}+{\rm Tr}\,\big{(}\Gamma^{\top}_{T-1}(H^{P}_{T})^{ \top}(K^{P}_{T})^{\top}Q^{P}_{T}K^{P}_{T}H^{P}_{T}\Gamma_{T-1}W\big{)}\] \[+\,{\rm Tr}\,\big{(}(K^{P}_{T})^{\top}Q^{Q}_{T}K^{P}_{T}G^{P} \big{)}\] \[= (\widehat{\widehat{x}}^{P}_{T-1})^{\top}U^{P*}_{T-1}(\widehat{x}^ {P}_{T-1})^{\top}+c^{P}_{T-1}. \tag{3.18}\] where (3.18) holds by a similar calculation to that proving (2.65) is equivalent to (2.66) in Section 2.2. Similarly we can also show that (3.12) holds for player \(E\) at time \(T-1\). Now assume that (3.11)-(3.12) holds at all \(s\geq t+1\). Then we have \[V^{P}_{t+1}(\widehat{x}^{P}_{t+1};F^{E*})=(\widehat{x}^{P}_{t+1})^{\top}U^{P*} _{t+1}\widehat{x}^{P}_{t+1}+c^{P*}_{t+1}. \tag{3.19}\] At time \(t\), recall that \(\Pi^{P}_{t}\) is defined as \(\Pi^{P}_{t}=\big{(}\widehat{\Sigma}^{P}_{t}-\widetilde{\Sigma}^{(P,E)}_{t} \big{)}\big{(}\widehat{\Sigma}^{(P,E)}_{t}\big{)}^{-1}\). We further define \(L^{1}_{t}\) and \(L^{2}_{t}\) as \(L^{1}_{t}=A_{t}\Pi^{P}_{t}+B^{E}_{t}F^{E*}_{t}-K^{P}_{t+1}H^{P}_{t+1}A_{t}\Pi^ {P}_{t}\) and \(L^{2}_{t}=-A_{t}\Pi^{P}_{t}-B^{E}_{t}F^{E*}_{t}-K^{P}_{t+1}H^{P}_{t+1}A_{t}(I- \Pi^{P}_{t})\). 
Then similarly to (2.62) we have \[\widehat{x}^{P}_{t+1}=(A_{t}+B^{E}_{t}F^{E*}_{t})\widehat{x}^{P}_{t}+B^{P}_{t} u^{P}_{t}+L^{1}_{t}e^{E}_{t}+L^{2}_{t}e^{P}_{t}+K^{P}_{t+1}w^{P}_{t+1}+K^{P}_{t+1 }H^{P}_{t+1}\Gamma_{t}w_{t}, \tag{3.20}\] We apply the DPP again at time \(t\), then by (3.19) and (3.20) we have \[V^{P}_{t}(\widehat{x}^{P}_{t};F^{E*})\] \[= \min_{u^{P}_{t}}\left\{(u^{P}_{t})^{\top}R^{P}_{t}u^{P}_{t}+( \widehat{x}^{P}_{t})^{\top}Q^{P}_{t}\widehat{x}^{P}_{t}+{\rm Tr}\,\Big{(}Q^{P} _{t}\widehat{\Sigma}^{P}_{t}\Big{)}\right.\] \[\left.+\mathbb{E}\left[\big{(}(A_{t}+B^{E}_{t}F^{E*}_{t})\widehat{ x}^{P}_{t}+B^{P}_{t}u^{P}_{t}+L^{1}_{t}e^{E}_{t}+L^{2}_{t}e^{P}_{t}+K^{P}_{t+1}w^{P }_{t+1}+K^{P}_{t+1}H^{P}_{t+1}\Gamma_{t}w_{t}\big{)}^{\top}\cdot U^{P*}_{t+1}.\right.\] \[\left.\left((A_{t}+B^{E}_{t}F^{E*}_{t})\widehat{x}^{P}_{t}+B^{P}_ {t}u^{P}_{t}+L^{1}_{t}e^{E}_{t}+L^{2}_{t}e^{P}_{t}+K^{P}_{t+1}w^{P}_{t+1}+K^{P }_{t+1}H^{P}_{t+1}\Gamma_{t}w_{t}\right)+c^{P*}_{t+1}\big{|}\,\mathcal{H}^{P}_ {t}\right]\right\}.\] Expanding the terms in the expectation we obtain \[V^{P}_{t}(\widehat{x}^{P}_{t};F^{E*}) \tag{3.21}\] \[= \min_{u^{P}_{t}}\left\{(u^{P}_{t})^{\top}\big{(}R^{P}_{t}+(B^{P}_ {t})^{\top}U^{P*}_{t+1}B^{P}_{t}\big{)}u^{P}_{t}+2(\widehat{x}^{P}_{t})^{\top} (A_{t}+B^{E}_{t}F^{E}_{t})^{\top}U^{P}_{t+1}B^{P}_{t}u^{P}_{t}\right\}\] \[+c^{P}_{t+1}+(\widehat{x}^{P}_{t})^{\top}\big{(}Q^{P}_{t}+(A_{t}+ B^{E}_{t}F^{E*}_{t})^{\top}U^{P*}_{t+1}(A_{t}+B^{E}_{t}F^{E*}_{t})\big{)}\widehat{x}^{P}_{t}+{ \rm Tr}\,\Big{(}Q^{P}_{t}\widehat{\Sigma}^{P}_{t}\Big{)}\] \[+\,{\rm Tr}\,((L^{1}_{t})^{\top}U^{P*}_{t+1}L^{1}_{t}\widehat{ \Sigma}^{E}_{t})+{\rm Tr}\big{(}(L^{2}_{t})^{\top}U^{P*}_{t+1}L^{2}_{t}\widehat {\Sigma}^{P}_{t}\big{)}+2\,{\rm Tr}((L^{1}_{t})^{\top}U^{P*}_{t+1}L^{2}_{t} \widehat{\Sigma}^{P}_{t})\] \[+\,{\rm Tr}\,\big{(}\Gamma^{\top}_{t}(H^{P}_{t+1})^{\top}(K^{P}_{ t+1})^{\top}U^{P*}_{t+1}K^{P}_{t+1}H^{P}_{t+1}\Gamma_{t}W\big{)}+{\rm Tr}\, \big{(}(K^{P}_{t+1})^{\top}U^{P*}_{t+1}K^{P}_{t+1}G^{P}\big{)}\] We note that all the constant terms including the accumulated sum \(c^{P}_{t+1}\) are independent of \(u^{P}_{t}\), since \(\widehat{\Sigma}^{P}_{s}\), \(\widehat{\Sigma}^{E}_{s}\), and \(\widetilde{\Sigma}^{(P,E)}_{s}\) (\(s=t,\ldots,T\)) are independent of the sequence \(\{F^{P}_{s}\}_{s=0}^{t}\). We can apply the first-order condition to obtain the following optimal response: \[u^{P*}_{t}=-(R^{P}_{t}+(B^{P}_{t})^{\top}U^{P*}_{t+1}B^{P}_{t})^{-1}(B^{P}_{t})^{ \top}U^{P*}_{t+1}(A_{t}+B^{E}_{t}F^{E*}_{t})\widehat{x}^{P}_{t}=F^{P*}_{t} \widehat{x}^{P}_{t}. \tag{3.22}\] Similarly, player E minimizes his value function to find his optimal response to player P's strategy \(\widehat{F}^{P*}_{t}\widehat{x}^{P}_{t}\). We can show that the optimal strategy \(u^{E*}_{t}\) of player E is given by \[u^{E*}_{t}=-(R^{E}_{t}+(B^{E}_{t})^{\top}U^{E*}_{t+1}B^{E}_{t})^{-1}(B^{E}_{t})^ {\top}U^{E*}_{t+1}(A_{t}+B^{P}_{t}F^{P*}_{t})\widehat{x}^{E}_{t}=F^{E*}_{t} \widehat{x}^{E}_{t}. \tag{3.23}\] Plugging \(u^{P*}_{t}=F^{P*}_{t}\widehat{x}^{P}_{t}\) into (3.21) and after manipulations similar to those in the proof that (2.65) is equivalent to (2.66) in Section 2.2, we can rewrite the value function as \[V^{P}_{t}(\widehat{x}^{P}_{t};F^{E*})=(\widehat{x}^{P}_{t})^{\top}U^{P*}_{t} \widehat{x}^{P}_{t}+c^{P*}_{t},\] with \(U^{P*}_{t}\) and \(c^{P*}_{t}\) given in (3.9) and (3.13). Similarly, we can show that (3.13) also holds for player \(E\). 
Therefore, by backward induction, the statements hold for all \(t=0,1,\ldots,T\). ## 4 A Mixed Partially and Fully Observable Setting In this section, we consider a more general setting for games with two players, \(P\) and \(E\), in which part of the state process is fully observable and part of it is partially observable. The joint state \(x_{t}\in\mathbb{R}^{n}\) follows the linear dynamics (\(0\leq t\leq T-1\)): \[x_{t+1}:=\begin{pmatrix}x_{t+1}^{(1)}\\ x_{t+1}^{(2)}\end{pmatrix}=A_{t}\begin{pmatrix}x_{t}^{(1)}\\ x_{t}^{(2)}\end{pmatrix}+B_{t}^{P}u_{t}^{P}+B_{t}^{E}u_{t}^{E}+\Gamma_{t}w_{t}, \tag{4.1}\] with initial value \(x_{0}=(x_{0}^{(1)},x_{0}^{(2)})^{\top}\), where the controls of \(P\) and \(E\) are \(u_{t}^{P}\in\mathbb{R}^{m}\) and \(u_{t}^{E}\in\mathbb{R}^{k}\), respectively. Here, for each \(t\), the noise \(w_{t}\in\mathbb{R}^{d}\) is an i.i.d. sample from \(\mathcal{N}(0,W)\) with \(W\in\mathbb{R}^{d\times d}\), and the model parameters are \(A_{t}\in\mathbb{R}^{n\times n}\), \(B_{t}^{P}\in\mathbb{R}^{n\times m}\), \(B_{t}^{E}\in\mathbb{R}^{n\times k}\), and \(\Gamma_{t}\in\mathbb{R}^{n\times d}\). We assume that \(x_{t}^{(1)}\in\mathbb{R}^{n_{1}}\) is the partially observable part and \(x_{t}^{(2)}\in\mathbb{R}^{n_{2}}\) is the fully observable part, with \(n=n_{1}+n_{2}\). Information Structure. At time \(t=0\), player \(P\) observes \(x_{0}^{(2)}\) and believes that \(x_{0}^{(1)}\) is drawn from a Gaussian distribution \(x_{0}^{(1)}\sim\mathcal{N}(\widehat{x}_{0}^{P,(1)},W_{0}^{P})\); thereafter, player \(P\) observes the fully observable part of the state \(x_{t}^{(2)}\in\mathbb{R}^{n_{2}}\) and the noisy state signal \(z_{t}^{P}\in\mathbb{R}^{p}\): \[z_{t+1}^{P}=H_{t+1}^{P}\,x_{t+1}^{(1)}+\,w_{t+1}^{P},\quad w_{t+1}^{P}\sim\mathcal{N}(0,G^{P}),\quad t=0,1,\cdots,T-1, \tag{4.2}\] with \(\{w_{t}^{P}\}_{t=0}^{T-1}\) a sequence of i.i.d. random variables. Here \(G^{P}\in\mathbb{R}^{p\times p}\) and \(H_{t+1}^{P}\in\mathbb{R}^{p\times n_{1}}\). Similarly, player \(E\) observes \(x_{0}^{(2)}\) and believes that \(x_{0}^{(1)}\) is drawn from a Gaussian distribution \(x_{0}^{(1)}\sim\mathcal{N}(\widehat{x}_{0}^{E,(1)},W_{0}^{E})\). Then player \(E\) observes the fully observable part of the state \(x_{t}^{(2)}\in\mathbb{R}^{n_{2}}\) and the noisy state signal \(z_{t}^{E}\in\mathbb{R}^{q}\): \[z_{t+1}^{E}=H_{t+1}^{E}\,x_{t+1}^{(1)}\,+\,w_{t+1}^{E},\quad w_{t+1}^{E}\sim\mathcal{N}(0,G^{E}),\quad t=0,1,\cdots,T-1, \tag{4.3}\] with \(\{w_{t}^{E}\}_{t=0}^{T-1}\) a sequence of i.i.d. random variables. For simplicity we assume that \(\{w_{t}^{E}\}_{t=0}^{T-1}\) are independent from \(\{w_{t}^{P}\}_{t=0}^{T-1}\). In addition we have \(G^{E}\in\mathbb{R}^{q\times q}\) and \(H_{t+1}^{E}\in\mathbb{R}^{q\times n_{1}}\). Both players make their decisions based on the public and private information available to them. We write \(\mathcal{Z}_{t}^{P}=\{z_{s}^{P}\}_{s=1}^{t}\) and \(\mathcal{Z}_{t}^{E}=\{z_{s}^{E}\}_{s=1}^{t}\) for the private signals players P and E receive up to time \(t\)\((1\leq t\leq T)\), respectively. Let \(\mathcal{U}_{t}^{P}=\{u_{s}^{P}\}_{s=1}^{t}\) and \(\mathcal{U}_{t}^{E}=\{u_{s}^{E}\}_{s=1}^{t}\) denote the control histories of players P and E up to time \(t\), respectively. Also let \(\mathcal{X}_{t}:=\{x_{s}^{(2)}\}_{s=0}^{t}\) be the public information that is available to both players.
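To make the observation model concrete before the information sets are assembled, the following minimal sketch simulates (4.1) together with the private signals (4.2)-(4.3); all dimensions, parameter values, and the placeholder zero controls are illustrative assumptions rather than quantities taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, m, k, d, p, q, T = 2, 1, 1, 1, 3, 2, 2, 10
n = n1 + n2

# Illustrative, time-invariant model parameters for (4.1)-(4.3).
A   = np.eye(n) + 0.05 * rng.normal(size=(n, n))
BP  = rng.normal(size=(n, m))
BE  = rng.normal(size=(n, k))
Gam = np.eye(n, d)
W   = 0.1 * np.eye(d)
HP, HE = rng.normal(size=(p, n1)), rng.normal(size=(q, n1))
GP, GE = 0.2 * np.eye(p), 0.2 * np.eye(q)

x = rng.normal(size=n)                     # x_0 = (x_0^(1), x_0^(2))
zP, zE, traj = [], [], [x.copy()]
for t in range(T):
    uP, uE = np.zeros(m), np.zeros(k)      # placeholder controls; a policy would go here
    w = rng.multivariate_normal(np.zeros(d), W)
    x = A @ x + BP @ uP + BE @ uE + Gam @ w                              # state update (4.1)
    zP.append(HP @ x[:n1] + rng.multivariate_normal(np.zeros(p), GP))    # P's signal (4.2)
    zE.append(HE @ x[:n1] + rng.multivariate_normal(np.zeros(q), GE))    # E's signal (4.3)
    traj.append(x.copy())
    # Both players also observe the fully observable block x_t^(2) = x[n1:] directly.
print("final state:", traj[-1], " last P-signal:", zP[-1])
```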
We assume \(\mathcal{H}_{t}^{P}\) is the information (or history) available to player P and \(\mathcal{H}_{t}^{E}\) is the information available to player E for them to make decisions at time \(t\), where \(\mathcal{H}_{t}^{P}\) and \(\mathcal{H}_{t}^{E}\) follow: \[\mathcal{H}_{t}^{P}=\{\widehat{x}_{0}^{P},W_{0}^{P},W_{0}^{E}\}\cup\mathcal{Z }_{t}^{P}\cup\mathcal{X}_{t}\cup\mathcal{U}_{t-1}^{P}\cup\mathcal{U}_{t-1}^{E},\ \mathcal{H}_{t}^{E}=\{\widehat{x}_{0}^{E},W_{0}^{P},W_{0}^{E}\}\cup\mathcal{Z}_{t }^{E}\cup\mathcal{X}_{t}\cup\mathcal{U}_{t-1}^{P}\cup\mathcal{U}_{t-1}^{E}. \tag{4.4}\] Note that the covariance matrices \(\{W_{0}^{P},W_{0}^{E}\}\) are known to both players. Cost Function.Each player \(i\)\((i=P,E)\) strives to minimize their own cost function: \[\min_{\{u_{t}^{i}\}_{t=0}^{T-1}}\mathcal{J}^{i}(\widehat{x}_{0}^{i,(1)},x_{0}^{ (2)}) := \min_{\{u_{t}^{i}\}_{t=0}^{T-1}}\mathbb{E}\left[x_{T}^{\top}Q_{T}^ {i}x_{T}+\sum_{t=0}^{T-1}\left(x_{t}^{\top}Q_{t}^{i}x_{t}+(u_{t}^{i})^{\top}R_{t }^{i}u_{t}^{i}\right)\Bigg{|}\ \mathcal{H}_{0}^{i}\right], \tag{4.5}\] with cost parameters \(Q_{t}^{P},Q_{t}^{E}\in\mathbb{R}^{n\times n}\), \(R_{t}^{P}\in\mathbb{R}^{m\times m}\) and \(R_{t}^{E}\in\mathbb{R}^{k\times k}\). Rewrite the earlier model as \(A_{t}=\begin{pmatrix}A_{t}^{(1,1)}\,A_{t}^{(1,2)}\\ A_{t}^{(2,1)}\,A_{t}^{(2,2)}\end{pmatrix}\) with \(A_{t}^{(1,1)}\in\mathbb{R}^{n_{1}\times n_{1}}\), \(A_{t}^{(1,2)}\in\mathbb{R}^{n_{1}\times n_{2}}\), \(A_{t}^{(2,1)}\in\mathbb{R}^{n_{2}\times n_{1}}\) and \(A_{t}^{(2,2)}\in\mathbb{R}^{n_{2}\times n_{2}}\). Similarly, rewrite \(B_{t}^{P}=(B_{t}^{P,(1)},B_{t}^{P,(2)})^{\top}\) with \(B_{t}^{P,(1)}\in\mathbb{R}^{n_{1}\times m}\) and \(B_{t}^{P,(2)}\in\mathbb{R}^{n_{2}\times m}\), and \(B_{t}^{E}=(B_{t}^{E,(1)},B_{t}^{E,(2)})^{\top}\) with \(B_{t}^{E,(1)}\in\mathbb{R}^{n_{1}\times k}\), \(B_{t}^{E,(2)}\in\mathbb{R}^{n_{2}\times k}\), and \(\Gamma_{t}=(\Gamma_{t}^{(1)},\Gamma_{t}^{(2)})^{\top}\) with \(\Gamma_{t}^{(1)}\in\mathbb{R}^{n_{1}\times d}\) and \(\Gamma_{t}^{(2)}\in\mathbb{R}^{n_{2}\times d}\). For the cost parameters, \(Q_{t}^{i}=\begin{pmatrix}Q_{t}^{i,(1,1)}&Q_{t}^{i,(1,2)}\\ (Q_{t}^{i,(1,2)})^{\top}&Q_{t}^{i,(2,2)}\end{pmatrix}\) with \(Q_{t}^{i,(1,1)}\in\mathbb{R}^{n_{1}\times n_{1}}\), \(Q_{t}^{i,(1,2)}\in\mathbb{R}^{n_{1}\times n_{2}}\), and \(Q_{t}^{i,(2,2)}\in\mathbb{R}^{n_{2}\times n_{2}}\) for \(i=P,E\). For the mixed case we make the following assumptions on the parameters, initial state, and noise. **Assumption 4.1** ([Mixed Setting] Parameters, Initial State, and Noise).: _For \(i=P,E\),_ 1. \(\{w_{t}\}_{t=0}^{T-1}\) _and_ \(\{w_{t}^{i}\}_{t=1}^{T-1}\) _are zero-mean, i.i.d. Gaussian random variables that are independent from_ \(x_{0}\) _and each other and such that_ \(\mathbb{E}[w_{t}w_{t}^{\top}]=W\) _and_ \(\mathbb{E}[w_{t}^{i}(w_{t}^{i})^{\top}]=G^{i}\) _are positive definite;_ 2. _Both matrices_ \(H_{t+1}^{P}\in\mathbb{R}^{p\times n_{1}}\) _and_ \(H_{t+1}^{E}\in\mathbb{R}^{q\times n_{1}}\) _have rank_ \(n_{1}\) _for_ \(t=0,\ldots,T-1\)_._ 3. _The matrices_ \(\Gamma_{t}^{(1)}W(\Gamma_{t}^{(1)})^{\top}\) _are non-singular for_ \(t=1,\ldots,T\)_;_ 4. _The cost matrices_ \(Q_{t}^{i}\)_, for_ \(t=0,1,\ldots,T\) _are positive semi-definite, and_ \(R_{t}^{i}\) _for_ \(t=0,1,\ldots,T-1\) _are positive definite._ We now give the main results and omit the proofs as they follow naturally by applying the ideas in Sections 2 and 3 to the partially observable part of the state process. 
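Since the equilibrium gains and value matrices in Theorem 4.3 below coincide with those of Section 3, it is worth spelling out how the coupled system (3.7)-(3.10) can be evaluated in practice. The sketch below is only an illustration of those equations (NumPy, time-invariant and randomly generated parameters, assuming the block matrix of Remark 3.3 is invertible); at each step it solves the linear system for \((F_{t}^{P*},F_{t}^{E*})\) and then marches both Riccati recursions backward:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k, T = 3, 2, 2, 5

# Illustrative time-invariant problem data with positive (semi-)definite costs.
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
BP, BE = rng.normal(size=(n, m)), rng.normal(size=(n, k))
QP, QE = np.eye(n), 2 * np.eye(n)
RP, RE = np.eye(m), np.eye(k)

UP, UE = QP.copy(), QE.copy()            # terminal conditions U_T^{i*} = Q_T^i
gains = [None] * T
for t in reversed(range(T)):
    # Block system Phi_t [F^P; F^E] = -[BP' UP A; BE' UE A], equivalent to (3.7)-(3.8).
    Phi = np.block([[RP + BP.T @ UP @ BP, BP.T @ UP @ BE],
                    [BE.T @ UE @ BP,      RE + BE.T @ UE @ BE]])
    rhs = -np.vstack([BP.T @ UP @ A, BE.T @ UE @ A])
    sol = np.linalg.solve(Phi, rhs)
    FP, FE = sol[:m, :], sol[m:, :]
    gains[t] = (FP, FE)
    # Riccati recursions (3.9)-(3.10) with the closed-loop matrix A + BP FP + BE FE.
    Acl = A + BP @ FP + BE @ FE
    UP = QP + FP.T @ RP @ FP + Acl.T @ UP @ Acl
    UE = QE + FE.T @ RE @ FE + Acl.T @ UE @ Acl

FP0, FE0 = gains[0]
print("F_0^{P*} =\n", FP0, "\nF_0^{E*} =\n", FE0)
```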
**Theorem 4.2** (Sufficient Statistics in Two-player Games).: _Assume that both players are applying linear strategies in that \(u_{t}^{P}=F_{t}^{P,(1)}\,\mathbb{E}[x_{t}^{(1)}|\mathcal{H}_{t}^{P}]+F_{t}^{P, (2)}\,x_{t}^{(2)}\) and \(u_{t}^{E}=F_{t}^{E,(1)}\,\mathbb{E}[x_{t}^{(1)}|\mathcal{H}_{t}^{E}]+F_{t}^{E,(2)}\,x_{t}^{(2)}\) for some matrices \(F_{t}^{P,(1)}\in\mathbb{R}^{m\times n_{1}}\) of rank \(\min(m,n_{1})\), \(F_{t}^{P,(2)}\in\mathbb{R}^{m\times n_{2}}\), \(F_{t}^{E,(1)}\in\mathbb{R}^{k\times n_{1}}\) of rank \(\min(k,n_{1})\), and \(F_{t}^{E,(2)}\in\mathbb{R}^{k\times n_{2}}\). The sufficient statistic of player \(i\) for \(i=P,E\) at decision time \(t=0\) is \(x_{0}\sim N(\widehat{x}_{0}^{i},W_{0}^{i})\). For time \(1\leq t\leq T-1\), the distribution of \(x_{t}^{(1)}\) as calculated by player \(i\), conditioning on the private information available to him at time \(t\), is given by_ \[x_{t}^{(1)}\sim\mathcal{N}(\widehat{x}_{t}^{i,(1)},\widehat{\Sigma}_{t}^{i}), \tag{4.6}\] _where, for \(j\neq i\),_ \[J_{t-1}^{i} =\Big{(}\widehat{\Sigma}_{t-1}^{i}-\widetilde{\Sigma}_{t-1}^{(i,j )}\Big{)}\widehat{\Sigma}_{t-1}^{(i,j)}(Y_{t-1}^{j,(1)})^{\top}\Big{(}Y_{t-1}^ {j,(1)}\widehat{\Sigma}_{t-1}^{(i,j)}\widehat{\Sigma}_{t-1}^{(i,j)}(Y_{t-1}^{j,(1)})^{\top}\Big{)}^{-1}, \tag{4.7a}\] \[(\widehat{x}_{t-1}^{i,(1)})^{+} =\widehat{x}_{t-1}^{i,(1)}+J_{t-1}^{i}\Big{(}y_{t-1}^{j}-Y_{t-1}^{j,(1)}\widehat{x}_{t-1}^{i,(1)}\Big{)},\] (4.7b) \[(\widehat{\Sigma}_{t-1}^{i})^{+} =\widehat{\Sigma}_{t-1}^{i}-\Big{(}\widehat{\Sigma}_{t-1}^{i}- \widetilde{\Sigma}_{t-1}^{(i,j)}\Big{)}(\widehat{\Sigma}_{t-1}^{(i,j)})^{-1} \Big{(}\widehat{\Sigma}_{t-1}^{i}-\widetilde{\Sigma}_{t-1}^{(i,j)}\Big{)}^{ \top},\] (4.7c) \[\big{(}\widehat{x}_{t}^{i,(1)}\big{)}^{-} =A_{t-1}^{(1,1)}(\widehat{x}_{t-1}^{i,(1)})^{+}+A_{t-1}^{(1,2)}x_{t -1}^{(2)}+B_{t-1}^{P,(1)}u_{t-1}^{P}+B_{t-1}^{E,(1)}u_{t-1}^{E},\] (4.7d) \[\big{(}\widehat{\Sigma}_{t}^{i}\big{)}^{-} =A_{t-1}^{(1,1)}(\widehat{\Sigma}_{t-1}^{i})^{+}(A_{t-1}^{(1,1)})^{ \top}+\Gamma_{t-1}^{(1)}W(\Gamma_{t-1}^{(1)})^{\top},\] (4.7e) \[K_{t}^{i} =\big{(}\widehat{\Sigma}_{t}^{i}\big{)}^{-}(H_{t}^{i})^{\top}\left[ H_{t}^{i}\big{(}\widehat{\Sigma}_{t}^{i}\big{)}^{-}(H_{t}^{i})^{\top}+G^{i}\right]^{-1},\] (4.7f) \[\widehat{x}_{t}^{i,(1)} =\big{(}\widehat{x}_{t}^{i,(1)}\big{)}^{-}+K_{t}^{i}\left[z_{t}^{i }-H_{t}^{i}\big{(}\widehat{x}_{t}^{i,(1)}\big{)}^{-}\right],\] (4.7g) \[\widehat{\Sigma}_{t}^{i} =\big{(}I-K_{t}^{i}H_{t}^{i}\big{)}\big{(}\widehat{\Sigma}_{t}^{i} \big{)}^{-},\] (4.7h) \[\widetilde{\Sigma}_{t}^{(i,j)} =\big{(}I-K_{t}^{i}H_{t}^{i}\big{)}\left(A_{t-1}^{(1,1)}\Delta_{t -1}^{(i,j)}(A_{t-1}^{(1,1)})^{\top}+\Gamma_{t-1}^{(1)}W(\Gamma_{t-1}^{(1)})^{ \top}\right)\left(I-K_{t}^{j}H_{t}^{j}\right)^{\top},\] (4.7i) \[\Delta_{t-1}^{(i,j)} =(\widehat{\Sigma}_{t-1}^{i}-\widetilde{\Sigma}_{t-1}^{(i,j)})( \widehat{\Sigma}_{t-1}^{(i,j)})^{-1}(\widehat{\Sigma}_{t-1}^{j}-\widetilde{ \Sigma}_{t-1}^{(j,i)})^{\top}+\widetilde{\Sigma}_{t-1}^{(i,j)},\] (4.7j) \[\widehat{\Sigma}_{t}^{(i,j)} =\widehat{\Sigma}_{t}^{i}+\widehat{\Sigma}_{t}^{j}-\widetilde{ \Sigma}_{t}^{(i,j)}-\left(\widetilde{\Sigma}_{t}^{(i,j)}\right)^{\top}, \tag{4.7k}\] _where \(\widehat{\Sigma}^{(i,j)}_{t-1}\) is positive definite. The values of \(Y^{P,(1)}_{t}\in\mathbb{R}^{m\times n_{1}}\), \(Y^{E,(1)}_{t}\in\mathbb{R}^{k\times n_{1}}\) and \(y^{P}_{t}\), \(y^{E}_{t}\) depend on the ranks of \(F^{P,(1)}_{t}\) and \(F^{E,(1)}_{t}\) as follows:_ 1. 
_The pair_ \[(Y^{P,(1)}_{t},y^{P}_{t})=\left\{\begin{array}{ll}\left(F^{P,(1)}_{t},\,u^{P }_{t}-F^{P,(2)}_{t-1}x^{(2)}_{t-1}\right)&\mbox{if $F^{P,(1)}_{t}$ has rank $m<n_{1}$,}\\ (I_{n},\widehat{x}^{P,(1)}_{t})&\mbox{if $F^{P}_{t}$ has rank $n_{1}\leq m$.}\end{array}\right.\] 2. _The pair_ \[(Y^{E,(1)}_{t},y^{E}_{t})=\left\{\begin{array}{ll}\left(F^{E,(1)}_{t},\,u^{ E}_{t}-F^{E,(2)}_{t-1}x^{(2)}_{t-1}\right)&\mbox{if $F^{E,(1)}_{t}$ has rank $k<n_{1}$,}\\ (I_{n},\widehat{x}^{E,(1)}_{t})&\mbox{if $F^{E,(1)}_{t}$ has rank $n_{1}\leq k$.}\end{array}\right.\] _Finally, the initial conditions are \(\widehat{\Sigma}^{i}_{0}=W^{i}_{0}\), \(\widetilde{\Sigma}^{(i,j)}_{0}=0\), and \(\widehat{\Sigma}^{(i,j)}_{0}=\widehat{\Sigma}^{i}_{0}+\widehat{\Sigma}^{j}_{0}\)._ **Theorem 4.3** (Nash Equilibrium).: _Suppose Assumption 4.1 holds and there exists a unique solution \(\{F^{P*}_{t}\}_{t=0}^{T-1}\) and \(\{F^{E*}_{t}\}_{t=0}^{T-1}\) to (3.7)-(3.8) with \(F^{P*,(1)}_{t}\) of rank \(\min(m,n_{1})\) and \(F^{E*,(1)}_{t}\) of rank \(\min(k,n_{1})\). Further assume that both players apply linear policies. Then the unique Nash equilibrium policy is_ \[u^{i*}_{t}=F^{i*}_{t}y^{i}_{t},\quad\mbox{with}\quad y^{i}_{t}=(\widehat{x}^{ i,(1)}_{t},x^{(2)}_{t})^{\top},\quad i=P,E. \tag{4.8}\] _The corresponding optimal value functions are quadratic \((0\leq t\leq T)\):_ \[V^{P}_{t}(y^{P}_{t};F^{E*})=(y^{P}_{t})^{\top}U^{P*}_{t}y^{P}_{t}+\widehat{c} ^{P*}_{t},\quad V^{E}_{t}(y^{E}_{t};F^{P*})=(y^{E}_{t})^{\top}U^{E*}_{t}y^{E} _{t}+\widetilde{c}^{E*}_{t}, \tag{4.9}\] _with matrices \(U^{P*}_{t},U^{E*}_{t}\in\mathbb{R}^{n\times n}\) given in (3.9) and (3.10). Here for \(i,j=P,E\) and \(j\neq i\), the scalar \(c^{i*}_{t}\in\mathbb{R}\) is given by_ \[\widetilde{c}^{i*}_{t} = \widetilde{c}^{i*}_{t+1}+\mathrm{Tr}(Q^{i,(1,1)}_{t}\widehat{ \Sigma}^{i}_{t})+\mathrm{Tr}\left((\overline{L}^{i,(1)}_{t})^{\top}U^{i}_{t+1 }\overline{L}^{i,(1)}_{t}\widehat{\Sigma}^{i}_{t}\right)+\mathrm{Tr}\left(( \overline{L}^{i,(2)}_{t})^{\top}U^{i}_{t+1}\overline{L}^{i,(2)}_{t}\widehat{ \Sigma}^{j}_{t}\right)\] \[+2\,\mathrm{Tr}\left((\overline{L}^{i,(2)}_{t})^{\top}U^{i}_{t+1 }\overline{L}^{i,(1)}_{t}\widetilde{\Sigma}^{(i,j)}_{t}\right)+\mathrm{Tr} \left((K^{i}_{t+1})^{\top}U^{i,(1,1)}_{t+1}K^{i}_{t+1}G^{i}\right)\] \[+\,\mathrm{Tr}\left(\left[\left(K^{i}_{t+1}H^{i}_{t+1}\Gamma^{(1 )}_{t}\right)^{\top}\quad(\Gamma^{(2)}_{t})^{\top}\right]U^{i}_{t+1}\left[ \begin{matrix}K^{i}_{t+1}H^{i}_{t+1}\Gamma^{(1)}_{t}\\ \Gamma^{(2)}_{t}\end{matrix}\right]W\right),\] _where \(\overline{L}^{i,(1)}_{t}=(\widetilde{L}^{i,(1)}_{t},-(A^{(2,1)}_{t}+B^{j,(2)}_{ t}F^{j*,(1)}_{t}))^{\top}\), and \(\overline{L}^{i,(2)}_{t}=(\widetilde{L}^{i,(2)}_{t},B^{j,(2)}_{t}F^{j*,(1)}_{t}) ^{\top}\) with_ \[\widetilde{L}^{i,(1)}_{t} = -A^{(1,1)}_{t}\Pi^{i}_{t}-B^{j,(1)}_{t}F^{j*,(1)}_{t}-K^{i}_{t+1} H^{i}_{t+1}A^{(1,1)}_{t}(I-\Pi^{i}_{t}),\] \[\widetilde{L}^{i,(2)}_{t} = A^{(1,1)}_{t}\Pi^{i}_{t}+B^{j,(1)}_{t}F^{j*,(1)}_{t}-K^{i}_{t+1} H^{i}_{t+1}A^{(1,1)}_{t}\Pi^{i}_{t},\] _where \(\Pi^{i}_{t}:=\big{(}\widehat{\Sigma}^{i}_{t}-\widetilde{\Sigma}^{(i,j)}_{t} \big{)}\big{(}\widehat{\Sigma}^{(i,j)}_{t}\big{)}^{-1}\). The terminal conditions are \(\widetilde{c}^{i*}_{T}=\mathrm{Tr}(Q^{i,(1,1)}_{T}\widehat{\Sigma}^{i}_{T})\) for \(i=P,E\)._ ## 5 Numerical Experiment: the Bargaining Game In this section, we perform some numerical experiments on a bargaining game example which can be cast into the framework introduced in Section 4. 
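Before setting up the bargaining game, note that the core of the recursion in Theorem 4.2 is a standard Kalman predict/correct cycle on the partially observable block. The sketch below implements only steps (4.7d)-(4.7h) for a single player; the opponent-action correction (4.7a)-(4.7c) and the cross-covariance bookkeeping (4.7i)-(4.7k) are omitted for brevity, and the scalar numbers merely echo the magnitudes used in the experiments rather than reproduce them:

```python
import numpy as np

def predict_correct(x_hat, Sigma, x2, uP, uE, z, par):
    """One cycle of (4.7d)-(4.7h) for one player on the partially observable block x^(1)."""
    A11, A12, BP1, BE1, Gam1, W, H, G = par
    # Prediction (4.7d)-(4.7e).
    x_pred = A11 @ x_hat + A12 @ x2 + BP1 @ uP + BE1 @ uE
    S_pred = A11 @ Sigma @ A11.T + Gam1 @ W @ Gam1.T
    # Kalman gain and measurement correction (4.7f)-(4.7h).
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + G)
    x_new = x_pred + K @ (z - H @ x_pred)
    S_new = (np.eye(len(x_hat)) - K @ H) @ S_pred
    return x_new, S_new

# Illustrative scalar example (n1 = 1, as in the bargaining game below).
par = tuple(np.atleast_2d(v) for v in (1.0, 0.0, 0.0, 0.0, 1.0, 9.0, 1.0, 1.0))
x_hat, Sigma = np.array([50.0]), np.array([[100.0]])
z = np.array([52.0])                                   # a hypothetical noisy signal
x_hat, Sigma = predict_correct(x_hat, Sigma, np.zeros(1), np.zeros(1), np.zeros(1), z, par)
print("posterior mean and variance:", x_hat, Sigma)
```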
Consider a two-player bargaining or negotiation game where a buyer and a seller must agree on the value of a good. Each party has a target price, that is the price they want to achieve by agreement at the deadline. The target price depends on their view of the project's true value. The buyer (resp. seller) does not know the target price of the seller (resp. buyer) or the true value of the good. The challenge is to establish a model for the bargaining situation and find the optimal bidding strategies when both parties have partial information about their counterparties, under some uncertainties (e.g. market fluctuations). In this section we focus on the case \(n_{1}=1\) when the opponent's state estimate can be inferred. The case where \(n_{1}=2\), where this is not the case, has similar results and these are deferred to Appendix B. ### Mathematical Set-up We now cast this bargaining game into the mathematical framework introduced in Section 4. Assume we have two players, a buyer \(B\) and a seller \(S\), who aim to reach an agreement on the value (or the price) of a good. The negotiation takes place over a finite period of time \(T\). At each timestamp \(t\), the buyer and seller simultaneously offer prices. We let \(x_{t}^{B}\in\mathbb{R}\) be the price offered by the buyer and \(x_{t}^{S}\in\mathbb{R}\) be the price offered by the seller. The dynamics of the offers follow \[x_{t+1}^{B}=x_{t}^{B}+u_{t}^{B}+\epsilon_{t}^{B},\quad x_{t+1}^{S}=x_{t}^{S}+u_ {t}^{S}+\epsilon_{t}^{S},\text{ with initial values }x_{0}^{B},x_{0}^{S}, \tag{5.1}\] Here \(u_{t}^{B}\in\mathbb{R}\) is the change in the buyer's offer and \(u_{t}^{S}\in\mathbb{R}\) is the change in the seller's offer at time \(t\). The random variables \(\epsilon_{t}^{B}\) and \(\epsilon_{t}^{S}\) are IID, representing the noise in both parties offers with \(\epsilon_{t}^{B}\sim\mathcal{N}(0,\overline{W}^{B})\) and \(\epsilon_{t}^{S}\sim\mathcal{N}(0,\overline{W}^{S})\), respectively. We note that \(\epsilon_{t}^{B}\) and \(\epsilon_{t}^{S}\) serve as regularization terms to guarantee the non-degeneracy of the state noise. Another way of thinking about this is to consider \(u_{t}^{B}\) and \(u_{t}^{S}\) as the _intended_ change of their offers when players can not completely control the differences between their offers (for example, due to some external restrictions). Both players can observe each other's exact offers. Thus \((x_{t}^{B},x_{t}^{S})^{\top}\) corresponds to the fully observable part \(x_{t}^{(2)}\) in Section 4. We assume the value of the good \(p_{t}\in\mathbb{R}\) is not available to both players and its dynamics follow: \[p_{t+1}=p_{t}+w_{t}, \tag{5.2}\] where \(\left\{w_{t}\right\}_{t=0}^{T-1}\) is a sequence of IID Gaussian random variables with zero mean and covariance \(\overline{W}\in\mathbb{R}\). Both the buyer and the seller do not have access to the true value of the good. Instead, they observe a noisy version of the value using their private information. At time \(t=0\), player \(i\) (\(i=B,S\)) believes that the initial value \(p_{0}\sim\mathcal{N}(\widetilde{p_{0}^{i}},W_{0}^{i})\), and after that player \(i\) observes the following noisy signal: \[z_{t+1}^{i}=p_{t+1}+\,w_{t+1}^{i},\quad w_{t+1}^{i}\sim\mathcal{N}(0,G^{i}), \quad t=0,1,\cdots,T-1, \tag{5.3}\] where \(\left\{w_{t}^{i}\right\}_{t=1}^{T-1}\) is a sequence of IID random variables, and \(\left\{w_{t}^{B}\right\}_{t=1}^{T-1}\) and \(\left\{w_{t}^{S}\right\}_{t=1}^{T-1}\) are independent of each other. 
Thus \(p_{t}\) corresponds to the partially observable part \(x_{t}^{(1)}\) in (4.1) of Section 4, with \(n_{1}=1\). We formulate player \(i\)'s (\(i=B,S\)) objective in the game as \[\min_{\left\{u_{t}^{i}\right\}_{t=0}^{T-1}}\mathbb{E}\left[\alpha_{i}\left(x_ {T}^{B}-x_{T}^{S}\right)^{2}+\beta_{i}\left(x_{T}^{i}-(1+\delta_{i})p_{T} \right)^{2}+\sum_{t=0}^{T-1}R_{t}^{i}(u_{t}^{i})^{2}\,\Bigg{|}\,\,\mathcal{H} _{0}^{i}\right], \tag{5.4}\] where \(\delta_{B}\in(-1,0)\) and \(\delta_{S}\in(0,1)\) are the scalars that determine the buyer's and the seller's target price at terminal time \(T\). The constants \(\alpha_{B}>0\) and \(\alpha_{S}>0\) are the penalties for not reaching an agreement, and \(\beta_{B}>0\) and \(\beta_{S}>0\) are the penalties for deviating from their target prices. The quadratic terms \(\alpha_{S}\left(x_{T}^{B}-x_{T}^{S}\right)^{2}\) and \(\alpha_{B}\left(x_{T}^{B}-x_{T}^{S}\right)^{2}\) can be viewed as a relaxation of the hard constraint \(x_{T}^{B}=x_{T}^{S}\). The parameters \(R_{t}^{B}>0\) and \(R_{t}^{S}>0\) measure the cost of adjusting the offer price at each time step, thus the final terms represent the penalty for making concessions. The filtrations \(\mathcal{H}_{0}^{B}:=\{\widetilde{\xi}_{0}^{B},W_{0}^{B},W_{0}^{S}\}\) and \(\mathcal{H}_{0}^{S}:=\{\widetilde{\xi}_{0}^{S},W_{0}^{B},W_{0}^{S}\}\) represent the information available at time \(0\). Both players have the incentive to reach an agreement at terminal time \(T\). The desire to reach this agreement is characterized by the value of \(\alpha_{B}\) and \(\alpha_{S}\), which may be different for the buyer and the seller. The hard constraint \(x_{T}^{B}=x_{T}^{S}\) can be recovered by letting \(\alpha_{B}\) and \(\alpha_{S}\) tend to infinity. The seller wants to sell the good at a price that is higher than (his estimate of) the true price and thus \(\delta_{S}>0\). Similarly \(\delta_{B}<0\) as the buyer has the incentive to buy at a price lower than his estimated true price. ### Experiments In this section, we present some numerical experiments and discuss the effect of observation noise and our information corrections for the bargaining game introduced in Section 5.1. We focus on the case \(n_{1}=1\), where the dynamics of the value of the good and the players' noisy observations are defined in (5.2)-(5.3). The bargaining model considered in this section satisfies the conditions for the special case described in point 5. of Remark 2.5, where each player can fully recover the opponent's state estimate in the previous step. Experimental Set-up.In the bargaining game (5.1)-(5.4), the model parameters are, \[A_{t}=I,\ B_{t}^{B}=\begin{bmatrix}0\\ 1\\ 0\end{bmatrix},\quad B_{t}^{S}=\begin{bmatrix}0\\ 0\\ 1\end{bmatrix},\quad W=\begin{bmatrix}\overline{W}&0&0\\ 0&\overline{W}^{B}&0\\ 0&0&\overline{W}^{S}\end{bmatrix},\quad Q_{t}^{B}=Q_{t}^{S}=0,\ \text{and}\] \[Q_{T}^{B}=\begin{bmatrix}\beta_{B}(1+\delta_{B})^{2}&-\beta_{B}(1+\delta_{B}) &0\\ -\beta_{B}(1+\delta_{B})&\alpha_{B}+\beta_{B}&-\alpha_{B}\\ 0&-\alpha_{B}&\alpha_{B}\end{bmatrix},\quad Q_{T}^{S}=\begin{bmatrix}\beta_{S }(1+\delta_{S})^{2}&0&-\beta_{S}(1+\delta_{S})\\ 0&\alpha_{S}&-\alpha_{S}\\ -\beta_{S}(1+\delta_{S})&-\alpha_{S}&\alpha_{S}+\beta_{S}\end{bmatrix},\] for \(t=0,1,\ldots,T-1\). Also we have \(H_{t}^{i}=I\) for \(i=S,B\). In the experiments we let \(\alpha_{B}=\alpha_{S}=50\), \(\beta_{B}=\beta_{S}=30\), \(\delta_{B}=-0.05\), \(\delta_{S}=0.05\), and \(T=10\), so the players care more about reaching an agreement with each other. 
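As a consistency check on this embedding (a standalone sketch using the \(\alpha_{B}\), \(\beta_{B}\), \(\delta_{B}\) values above; it is not the authors' code), one can verify numerically that the quadratic form \(s^{\top}Q_{T}^{B}s\) with \(s=(p_{T},x_{T}^{B},x_{T}^{S})^{\top}\) reproduces the buyer's terminal cost \(\alpha_{B}(x_{T}^{B}-x_{T}^{S})^{2}+\beta_{B}(x_{T}^{B}-(1+\delta_{B})p_{T})^{2}\):

```python
import numpy as np

alpha_B, beta_B, delta_B = 50.0, 30.0, -0.05   # values from the experimental set-up

Q_T_B = np.array([
    [beta_B * (1 + delta_B) ** 2, -beta_B * (1 + delta_B),  0.0],
    [-beta_B * (1 + delta_B),      alpha_B + beta_B,        -alpha_B],
    [0.0,                          -alpha_B,                 alpha_B],
])

rng = np.random.default_rng(3)
for _ in range(1000):
    p, xB, xS = rng.normal(size=3) * 100          # random terminal values
    s = np.array([p, xB, xS])
    direct = alpha_B * (xB - xS) ** 2 + beta_B * (xB - (1 + delta_B) * p) ** 2
    assert np.isclose(s @ Q_T_B @ s, direct)
print("Q_T^B matches the buyer's terminal cost for all sampled states.")
```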
We set the penalty function to be \(R_{t}^{i}=\rho_{i}\exp(-\gamma_{i}t)\) for \(i=B,S\) with \(\rho_{B}=\rho_{S}=15\) and \(\gamma_{B}=\gamma_{S}=0.1\). The penalty function decays over time which allows players to be more flexible near the deadline to reach an agreement. For the initial state we set \(p_{0}=50\), \(x_{0}^{B}=10\), \(x_{0}^{S}=90\). We also set \(\overline{W}=9\) for the noise in the dynamics of the true value of the good, and \(\overline{W}^{B}=\overline{W}^{S}=10^{-12}\). The reason for adding the small noise to \(x_{t}^{B}\) and \(x_{t}^{S}\) is to guarantee the well-definedness of the problem. In practice we can set \(\overline{W}^{B}=\overline{W}^{S}=0\), and the numerical experiments will still work. To see the effect of the observation noise, we let the buyer have a much more noisy observation of the true price (\(G^{B}=100\) and \(G^{S}=1\)). We also set \(\widehat{x}_{0}^{B}=40\) with \(W_{0}^{B}=100\) for the buyer and \(\widehat{x}_{0}^{S}=51\) with \(W_{0}^{S}=1\) for the seller, thus the buyer has a far more inaccurate guess of the initial state. In the figures and tables we will write IC for information corrections. Effect of Observation Noise.Since the buyer receives relatively noisy signals of the true price, their price estimate (indicated in orange) will be more inaccurate than the seller's (indicated in blue) in the example shown in Figure 1. The behaviour of both players is similar to that in the full information case, since the buyer utilizes the seller's accurate information to improve their own state estimate. Effect of Information Corrections.A key contribution of our work is information corrections, where players correct their estimate of the state after observing their opponent's actions. We demonstrate the power of the information corrections in Figure 2. When the buyer skips steps (4.7a)-(4.7c), their state estimate will rely purely on their own observations and thus can be very inaccurate. However, with information corrections, they can obtain a better estimate which is less affected by the noisy observations. Hence they are more likely to reach an agreement with the seller at a reasonable price. Figure 1: Comparison between the full observation (right) and partial observation (left) cases. We now show some statistics for the buyer's estimation error in Table 1, where both the average mean squared error and average mean absolute error of 500 experiments (each experiment consists of 10 rounds of bargaining) are smaller when information corrections are used. We also show the effect of using information corrections on the outcomes of the bargaining game in Table 2. If the difference between the players' final offers is less than 3, we consider them to have reached an agreement. We can see that with information corrections the players are more likely to reach an agreement. The difference is shown in Figure 2. We can see that with information correction the final offers of the buyer and the seller are closer to each other and they have made the deal, while without information correction they are not able to reach an agreement in the end. Furthermore, the above setting is an "asymmetric" case, where the buyer has a more inaccurate estimate of the initial state and receives noisier signals during the negotiation. We now compare the number of agreements obtained in this case (Table 2) with that obtained in two symmetric cases, which can be considered as benchmarks. 
In the symmetric case where both players have “accurate” information with small observation noise, we let \(G^{B}=G^{S}=1\) for both players, and set \(\widehat{x}_{0}^{B}=49\) and \(\widehat{x}_{0}^{S}=51\) with \(W_{0}^{B}=W_{0}^{S}=1\); in the symmetric case where they both have inaccurate information, we set \(G^{B}=G^{S}=100\), \(\widehat{x}_{0}^{B}=40\), and \(\widehat{x}_{0}^{S}=60\) with \(W_{0}^{B}=W_{0}^{S}=100\). We run 500 experiments in each case, and the number of agreements when they both have “accurate” information is the same regardless of whether they utilize the observed actions from their opponent. However, when both players receive very noisy signals, having information corrections significantly improves the number of agreements. This further demonstrates the need for incorporating information corrections, especially when players have different levels of observation noise, as is often the case in practice since players may have a variety of different information sources. We also note that the number of agreements reached in the asymmetric case is closer to that in the inaccurate symmetric case. Although the players have improved their state estimate at the previous time step, the offers they make are still based on their current noisy observation. We can also compare the players' costs in the asymmetric and symmetric cases (see Table 3). The costs are calculated empirically based on (5.4). In the asymmetric case and in the symmetric case where both players have very noisy observations, both players achieve significantly lower costs when using information corrections. \begin{table} \begin{tabular}{l c c} & Mean squared error & Mean absolute error \\ \hline \hline With information correction & 17.43 & 3.10 \\ \hline Without information correction & 35.91 & 4.83 \\ \end{tabular} \end{table} Table 1: Effect of IC on the buyer’s estimation error (average over 500 experiments). \begin{table} \begin{tabular}{l c c c} & Asymmetric & Symmetric (“accurate”) & Symmetric (inaccurate) \\ \hline \hline With IC & 371 & 470 & 367 \\ \hline Without IC & 253 & 470 & 247 \\ \end{tabular} \end{table} Table 2: Number of agreements achieved in 500 experiments with and without IC. Figure 2: The buyer’s price estimate with IC (right) and without IC (left). Aiming at Beneficial Prices. In the above experiments we focused on the case where both players strive to reach an agreement with their opponent (\(\alpha_{i}>\beta_{i}\) for \(i=B,S\)). However, in some situations players may be more keen on achieving their target price, or a more beneficial price, in negotiations and not be so concerned about whether an agreement is reached. We now let the buyer focus more on pursuing their target price and let the seller mainly seek an agreement with the buyer. We let \(\alpha_{B}=20\), \(\alpha_{S}=50\), \(\beta_{B}=40\), \(\beta_{S}=30\), \(\rho_{B}=\rho_{S}=10\), and \(\overline{W}=1\). We also let the buyer have far more inaccurate information than the seller by setting \(G^{B}=100\), \(G^{S}=1\), \(\widehat{x}_{0}^{B}=70\), \(W_{0}^{B}=100\), \(\widehat{x}_{0}^{S}=53\), and \(W_{0}^{S}=1\). Other parameters are set to be the same as in the previous experiments. In Table 4, we can see that the information corrections significantly improve the number of agreements achieved. Here we set the agreement price achieved to be the average of \(x_{T}^{B}\) and \(x_{T}^{S}\). For a fair comparison we only consider situations where an agreement is achieved in both cases (with and without information corrections).
We observe that there is a gap between the confidence intervals of the agreement prices, which illustrates that the buyer can obtain a better deal by using the information corrections to more effectively exploit the seller's willingness to sacrifice their target price in their desire to reach an agreement. \begin{table} \begin{tabular}{l c c c} \multicolumn{2}{c}{Number of agreements} & Mean of APs & 95\% confidence interval of APs \\ \hline \hline With IC & 442 & 48.58 & (48.31, 48.84) \\ \hline Without IC & 342 & 49.61 & (49.34, 49.88) \\ \end{tabular} \end{table} Table 4: Bargaining outcomes and agreement prices (APs) with and without IC in 500 experiments. \begin{table} \begin{tabular}{l c c c} & Asymmetric & Symmetric (“accurate”) & Symmetric (inaccurate) \\ \hline \hline With IC (Buyer/Seller) & 2250/2235 & 1923/2053 & 2505/2715 \\ \hline Without IC (Buyer/Seller) & 3068/2819 & 1924/2053 & 3244/3359 \\ \end{tabular} \end{table} Table 3: Players’ average costs with and without IC in 500 experiments.
2303.09744
Inferring Occluded Agent Behavior in Dynamic Games from Noise Corrupted Observations
In mobile robotics and autonomous driving, it is natural to model agent interactions as the Nash equilibrium of a noncooperative, dynamic game. These methods inherently rely on observations from sensors such as lidars and cameras to identify agents participating in the game and, therefore, have difficulty when some agents are occluded. To address this limitation, this paper presents an occlusion-aware game-theoretic inference method to estimate the locations of potentially occluded agents, and simultaneously infer the intentions of both visible and occluded agents, which best accounts for the observations of visible agents. Additionally, we propose a receding horizon planning strategy based on an occlusion-aware contingency game designed to navigate in scenarios with potentially occluded agents. Monte Carlo simulations validate our approach, demonstrating that it accurately estimates the game model and trajectories for both visible and occluded agents using noisy observations of visible agents. Our planning pipeline significantly enhances navigation safety when compared to occlusion-ignorant baseline as well.
Tianyu Qiu, David Fridovich-Keil
2023-03-17T02:50:32Z
http://arxiv.org/abs/2303.09744v3
# Identifying Occluded Agents in Dynamic Games ###### Abstract To provide safe and efficient services, robots must rely on observations from sensors (lidar, camera, etc.) to have a clear knowledge of the environment. In multi-agent scenarios, robots must further reason about the intrinsic motivation underlying the behavior of other agents in order to make inferences about their future behavior. Occlusions, which often occur in robot operating scenarios, make the decision-making of robots even more challenging. In scenarios without occlusions, dynamic game theory provides a solid theoretical framework for predicting the behavior of agents with different objectives interacting with each other over time. Prior work proposed an inverse dynamic game method to recover the game model that best explains observed behavior. However, an apparent shortcoming is that it does not account for agents that may be occluded. Neglecting these agents may result in risky navigation decisions. To address this problem, we propose a novel inverse dynamic game technique to infer the behavior of occluded, unobserved agents that best explains the observation of visible agents' behavior, and simultaneously to predict the agents' future behavior based on the recovered game model. We demonstrate our method in several simulated scenarios. Results reveal that our method robustly estimates agents' objectives and predicts trajectories for both visible and occluded agents from a short sequence of noise corrupted trajectory observation of only the visible agents. ## I Introduction Robots depend on sensor observations to be alert to the presence of static and dynamic obstacles. Yet, sensors are fundamentally limited due to occlusion or sensing range. In practice, humans can use their prior experience to make inferences about occluded agents and avoid potential risks. For example, in Figure 1, the blue pedestrian is running across the road while the green and red vehicle are driving along the road in the same direction. However, the blue pedestrian is occluded by the red vehicle, and thus is invisible to the green vehicle. The green vehicle will likely collide with the blue pedestrian if the driver maintains its current speed. However, he notices that the red vehicle beside him brakes. Thus, he infers that someone is moving in the occluded area, forcing the red vehicle's deceleration. Consequently, he can actively brake to avoid the potential collision before the occluded agent finally comes into view. This example reveals the fact that the behaviors of occluded agents can be inferred from those of visible agents since both are affected by their interaction. The green vehicle is then capable of acting based on the knowledge of the invisible agents. Prior works [1, 2] have investigated multi-agent interaction problems with dynamic game techniques. Fridovich-Keil _et al._[2] proposed an iterative linear quadratic game algorithm to compute optimal trajectories in multi-agent noncooperative scenarios. Based on this work, Peters _et al._[1] solved the inverse problem to learn the game model from noisy observations, estimated agents' trajectories from the recovered model, and achieved high-quality estimation performance in occlusion-free scenarios. Inspired by these works [1, 2], we propose a novel technique that identifies the unknown parameters of each agent's cost function and estimates the trajectories of _occluded_ agents in a Nash game based on realistic sensor measurements of only visible agents. 
We evaluate our method in various scenarios at different levels of observation noise. Results reveal that our method is noise robust and provides accurate trajectory prediction for both visible and invisible agents. Fig. 1: Traffic scenario, in which the green vehicle’s view is occluded by the red vehicle so it is not able to observe the occluded blue pedestrian. Our method utilizes only observations of visible agents’ behavior, solves an inverse dynamic game to recover a game model that best explains the observed behavior, and indirectly identifies the behavior of occluded agents. In this traffic scenario, the driver in the green vehicle notices the deceleration of the red vehicle next to him. Our method infers that the blue pedestrian is running across the road within the occluded area. As a consequence, the driver in the green vehicle can actively brake to avoid a potential collision. ## II Related Work ### _Social Occlusion Inference_ Many works have explored methods to make inferences for occluded areas based on observation of visible agents' behavior in social settings. These works utilize the fundamental fact that one's behavior is affected by the surrounding environment to a great extent. Representative works often apply computer vision techniques [3, 4] and occupancy grid map generation techniques [5, 6] to make inferences about the occluded area. Hara _et al._[3] proposed a CNN that identifies the existence of a person in blind spots with artificial occlusions in a volleyball game dataset. Subsequent work [4] further extended this idea to traffic scenarios and proposed a spatio-temporal D-CNN to predict whether a vehicle would soon come into view based on the tracking of visible pedestrians from first-person view video input. Afolabi _et al._[5] proposed a mapping framework that incorporates people as sensors to generate an occupancy grid map for the occluded area. Itkina _et al._[6] further trained a conditional variational autoencoder to generate an occupancy grid map from the observation of driver trajectories. However, despite these encouraging achievements in social occlusion inference, these works only address whether the occluded area is occupied, or whether an agent is about to emerge from it. They cannot infer the actual behavior or intent of agents in occluded areas, which prevents robots in the scene from making optimal decisions. Other techniques must therefore be utilized to model the behavior of agents in occluded areas more precisely from the observation of only visible agents' social behavior. ### _Planning-based Agent Behavior Modeling_ Before we move towards the ultimate goal of precise inference about the occluded agents' behavior, we first introduce several planning-based methods of agent behavior modeling which are closely related to our work. Planning-based methods assume that humans make rational decisions about their trajectories while walking or driving; that is, their trajectories explicitly or implicitly follow certain rules, and these rules can be applied to predict their future behaviors. This assumption casts the behavior inference problem as one of (inverse) optimal control, and the goal becomes to reconstruct such rules from the observation of agents' behavior.
#### Ii-B1 Single agent behavior modeling With the fundamental investigation of inverse optimal control (IOC) in [7], researchers extensively applied IOC techniques in modelling single agent behaviors [8, 9, 10], where an explicit optimal control model was proposed and parameters were identified to best fit the observations of human behaviors. The advance of inverse reinforcement learning (IRL) introduced in [11] also gave rise to applications [12, 13, 14, 15] towards single agent behavior modeling. Representative work [12] described human behavior based on a Markov decision process with the principle of maximum entropy. Results were utilized to improve a robot's ability to navigate in the presence of pedestrians. #### Ii-B2 Multi-agent behavior modeling Aside from the interaction between agents and the environment, each agent's behavior affects that of other agents. A typical framework is multi-agent IRL generalized from IRL for single agent, under which human social behaviors are investigated [16, 17]. Subsequent works further make applications in robot social navigation tasks [18, 19] to improve a robot's ability to navigate in the presence of pedestrians, and autonomous driving scenarios [20, 21] to guarantee safe driving performance. ### _Dynamic Games and Inverse Dynamic Games_ Our work builds upon dynamic game theory, which generalizes single agent optimal control cases to multi-agent scenarios and provides a solid framework to model multi-agent behaviors. Fridovich-Keil _et al._[2] proposed an forward iterative linear quadratic game to depict interaction between agents in traffic scenarios and computed feedback Nash strategies. To model human behavior more accurately, more recent works [1, 22, 23, 24] optimized the model by solving inverse dynamic games. Le Cleac'h _et al._[22] iteratively solved the inverse dynamic game problem to update a Bayesian estimate of agents' cost function parameters by recasting it in a recursive parameter-estimation framework. Peters _et al._[1, 23] jointly optimized player objectives and trajectory estimates by coupling them through Nash equilibrium constraints based on noisy, partial state observations. We emphasize that these works share the same limitation, which is that observations of all agents are required. To deal with scenarios where agents are occluded, inspired by [1], we propose a novel inverse dynamic game technique to identify the unknown weighting parameters in each agent's cost function, and simultaneously estimate both visible and invisible agents' trajectories that best explain the observations of visible agents' trajectories. ## III Discrete Time Open-loop Nash Games A discrete time open-loop Nash game with \(M\) agents is characterized by the state \({x_{t}^{i}\in\mathbb{R}^{n}}\) and control inputs \({u_{t}^{i}\in\mathbb{R}^{m}},\;\forall i\in[M]\), at time step \(t\). The dynamics \({\mathbf{x}_{t+1}=f({\mathbf{x}_{t},\mathbf{u}_{t}})}\) describe how the game evolves with each agent's control input, at each time step \({t\in[T]}\). \({J^{i}:=\sum_{t=1}^{T}g_{t}^{i}({\mathbf{x}_{t},\mathbf{u}_{t}})}\) for each agent evaluates their cost in the game. The game is thus fully characterized by the tuple of all agents' cost functions, the initial condition \(\mathbf{x}_{1}\), and the dynamics, which is denoted by \(\Gamma:=(\{J^{i}\}_{i=1}^{M},\mathbf{x}_{1},f)\). In a Nash game, each agent aims to minimize his cost function, subject to dynamic feasibility constraints, i.e. 
\[\min_{\mathbf{x},\mathbf{u}^{i}} J^{i}(\mathbf{u};\mathbf{x}_{1}), \forall i\in[M],\] (1a) s.t. \[\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t}), \forall t\in[T-1]. \tag{1b}\] Given that each agent decides his control input rationally at equilibrium, the following inequalities are satisfied: \[J^{i}(\mathbf{u}^{*};\mathbf{x}_{1})\leq J^{i}(\mathbf{u}^{i},\mathbf{u}^{-i*} ;\mathbf{x}_{1}),\forall i\in[M], \tag{2}\] and \(\mathbf{u}^{*}:=(\mathbf{u}^{1*},\cdots,\mathbf{u}^{M*})\) is called a Nash strategy. Equation (2) reveals the fact that no agents will decrease cost by unilaterally deviating from Nash strategy \(\mathbf{u}^{i*}\)[25]. To make these concepts concrete, we introduce the following running example. Consider \(M=2\) pedestrians walking toward their goal, avoiding each other. \(\mathbf{x}_{t}:=(x_{t}^{1},x_{t}^{2})\) denotes the position of both pedestrians and they follow single-integrator dynamics at time discretization \(\Delta t\): \[\begin{split} x_{t+1}^{i}&=\begin{cases}p_{x,t+1}^{ i}=p_{x,t}^{i}+v_{x,t}^{i}\Delta t,\\ p_{y,t+1}^{i}=p_{y,t}^{i}+v_{y,t}^{i}\Delta t,\end{cases}\\ &=x_{t}^{i}+u_{t}^{i}\Delta t,\quad t\in[T-1],i\in\{1,2\},\end{split}\] (3) \[\mathbf{x}_{1}\] is known. \(u_{t}^{i}=[v_{x,t}^{i},v_{y,t}^{i}]^{\top}\) denotes the velocity of each pedestrian and \(\mathbf{x}_{1}\) is the initial state of two pedestrians. Each pedestrian's objective is the sum of the running cost \(g_{t}^{i}\) over time, where \(g_{t}^{i}\) is the combination of different features weighted by non-negative weighting parameters \(\theta^{i}\): \[g_{t}^{i}=\sum_{j=1}^{n}\theta_{j}^{i}g_{t,t}^{i}\begin{cases}g_{1,t}^{i}=\|x_ {t}^{i}-x_{d}^{i}\|_{2}^{2}\\ g_{2,t}^{i}=-\log(\|x_{t}^{i}-x_{t}^{-i}\|_{2}^{2})\;,\\ g_{3,t}^{i}=\|u_{t}^{i}\|_{2}^{2}\end{cases} \tag{4}\] where \(\|\cdot\|_{2}^{2}\) denotes the squared Euclidean norm. The pedestrian aims to get closer to his destination (\(g_{1,t}^{i}\)), and to keep far away from the other agents (\(g_{2,t}^{i}\)) without great energy consumption (\(g_{3,t}^{i}\)). In practice, \(g_{t}^{i}\) can be readily modified to accommodate different scenarios. ## IV Our Approach In this work, we introduce two roles in the dynamic game: the participants and the observer. The participants compete with each other in the game, while the observer is outside the game, observing the interaction between the agents. All participants are observable to each other. However, only some of the participants are visible to the observer, while others are occluded, and hence invisible. No matter whether the agents are visible or not, their interaction remains the same. In practice, the observer could be a robot wishing to navigating in human-rich environments. To do so, he must estimate both visible and invisible agents' objectives and trajectories from the observation of only visible agents, and thereby improve his navigation performance. For clarity, we seek to estimate the value of all weighting parameters \(\theta\) in the game model as well as the trajectories \(\mathbf{x}\) of all agents that maximize the likelihood of a given sequence of state observations. \(\mathbf{y}^{\mathcal{V}}:=(\mathbf{y}^{i}),i\in\mathcal{V}\): \[\max_{\theta,\mathbf{x},\mathbf{u}} p(\mathbf{y}^{\mathcal{V}}|\mathbf{x},\mathbf{u}),\] (5a) s.t. \[(\mathbf{x},\mathbf{u})\text{ is an OLNE of }\Gamma(\theta), \tag{5b}\] \[(\mathbf{x},\mathbf{u})\text{ is dynamically feasible under }f, \tag{5c}\] where \(\theta\) is the tuple of weighting parameters over all agents, i.e. 
\(\theta:=(\theta^{1},\cdots,\theta^{M})\), and \(p(\mathbf{y}^{\mathcal{V}}|\mathbf{x},\mathbf{u})\) denotes a likelihood model based on observation \(\mathbf{y}^{\mathcal{V}}\). Note that this formulation extends that of [1] for our problem. In particular, we have constructed a new objective (5a) and kept the constraints (5b), (5c) the same since the visibility to the observer does not interfere with the participants' interaction. To solve (5), we need to first solve the OLNE in the forward dynamic game (2). Since the constraints are the same, akin to the method in [1], we construct each player's Lagrangian with additional Lagrange multipliers \(\lambda_{t}^{i}\): \[\mathcal{L}^{i}=J^{i}+\sum_{t=1}^{T-1}{\lambda_{t}^{i}}^{\top}\left(x_{t+1}^{i }-f(x_{t}^{i},u_{t}^{i})\right).\] The first-order necessary conditions for the optimal solution are given by the KKT conditions for each agent: \[\mathbf{G}(\mathbf{x}^{i},\mathbf{u}^{i},\boldsymbol{\lambda}^{i}):=\begin{bmatrix} \nabla_{\mathbf{x}^{i}}\mathcal{L}^{i}\\ \nabla_{\mathbf{u}^{i}}\mathcal{L}^{i}\\ x_{t+1}^{i}-f(x_{t}^{i},u_{t}^{i}),t\in[T-1]\end{bmatrix}=\mathbf{0}, \tag{6}\] \[\forall i\in[M].\] Thus (5b) and (5c) are replaced by (6) and the inverse dynamic game model for occluded agents evolves into \[\max_{\theta,\mathbf{x},\mathbf{u},\boldsymbol{\lambda}} p(\mathbf{y}^{\mathcal{V}}|\mathbf{x},\mathbf{u})\] (7a) s.t. \[\mathbf{G}(\mathbf{x}^{i},\mathbf{u}^{i},\boldsymbol{\lambda}^{i})= \mathbf{0},\forall i\in[M]. \tag{7b}\] The objective of (7) is to estimate the unknown weighting parameters and all agents' states and actions from the observation of only visible agents' trajectories under noise corruption. To deal with the observation noise, we assume that observations take white Gaussian noise, i.e. \(\mathbf{n}_{t}\sim\mathcal{N}(\mathbf{0},\Sigma)\), and the observation of visible agents \(\mathbf{y}_{t}^{\mathcal{V}}:=\mathbf{x}_{t}^{\mathcal{V}}+\mathbf{n}_{t}\) in this work. We then solve (7) by substituting the likelihood maximization problem of (7a) with the negative log-likelihood minimization problem with objective \(\sum_{t\in[T]}\sum_{i\in\mathcal{V}}\|y_{t}^{i}-x_{t}^{i}\|_{2}^{2}\). In the next section, we conduct simulation experiments to evaluate the estimation performance of our method and how it functions under different levels of observation noise. ## V Simulation Experiments We implement our proposed approach in YALMIP [26], a MATLAB interface for mathematical programming. We use the open-source COIN-OR IPOPT algorithm [27] as a low-level solver. We conduct Monte Carlo studies to analyze the performance of our proposed method in several simulated scenarios. ### _Experiment Setup_ To demonstrate the performance and robustness of our method, we perform a sequence of Monte Carlo studies. For each scenario, we fix the weighting parameters \(\theta\) for each agent and find the corresponding OLNE trajectories. We hide the invisible agents' trajectories and then corrupt the visible agents' trajectories with white Gaussian noise. To evaluate the performance of our method at different levels of noise corruption, we generate 24 sets of random observation sequences at 21 different levels of noise. For each of the resulting 504 observation sequences, we run our method to recover estimates of weights for each agent as well as the trajectory for invisible agents. 
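For concreteness, the ingredients of the running example are easy to write down in code. The sketch below (NumPy; the goal positions, weights, horizon, and straight-to-goal placeholder controls are illustrative assumptions, not the values used in the experiments) rolls out the single-integrator dynamics (3), accumulates the weighted stage cost (4), and corrupts the visible agent's trajectory with white Gaussian noise to form observations \(\mathbf{y}_{t}^{\mathcal{V}}=\mathbf{x}_{t}^{\mathcal{V}}+\mathbf{n}_{t}\):

```python
import numpy as np

dt, T = 0.1, 20
x_goal = {1: np.array([5.0, 0.0]), 2: np.array([0.0, 5.0])}   # illustrative destinations
theta  = {1: np.array([1.0, 0.5, 0.1]), 2: np.array([1.0, 0.5, 0.1])}

def stage_cost(i, x_i, x_other, u_i):
    """Weighted stage cost (4): goal attraction, collision avoidance, control effort."""
    g1 = np.sum((x_i - x_goal[i]) ** 2)
    g2 = -np.log(np.sum((x_i - x_other) ** 2))
    g3 = np.sum(u_i ** 2)
    return theta[i] @ np.array([g1, g2, g3])

rng = np.random.default_rng(4)
x = {1: np.array([0.0, 0.0]), 2: np.array([5.0, 5.0])}
J = {1: 0.0, 2: 0.0}
traj1 = [x[1].copy()]
for t in range(T):
    # Placeholder controls: each pedestrian walks straight toward its goal.
    u = {i: 0.5 * (x_goal[i] - x[i]) / (np.linalg.norm(x_goal[i] - x[i]) + 1e-9)
         for i in (1, 2)}
    for i, j in ((1, 2), (2, 1)):
        J[i] += stage_cost(i, x[i], x[j], u[i])
    for i in (1, 2):
        x[i] = x[i] + u[i] * dt                       # single-integrator dynamics (3)
    traj1.append(x[1].copy())

# Noisy observations of the visible agent only (agent 1 here), y_t = x_t + n_t.
Sigma = 0.05 * np.eye(2)
y1 = [pos + rng.multivariate_normal(np.zeros(2), Sigma) for pos in traj1]
print("agent 1 accumulated cost:", J[1], " first noisy observation:", y1[0])
```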
### _Evaluation Metrics_ To evaluate the performance of weighting parameter recovery, we first follow [1] and measure the cosine dissimilarity between the unobserved true weighting parameters \(\theta_{\text{true}}\) and the estimated parameters \(\theta_{\text{est}}\): \[D(\theta_{\text{true}},\theta_{\text{est}}):=1-\frac{1}{M}\sum_{i\in[M]}\frac {{\theta_{\text{true}}^{i}}^{\top}\theta_{\text{est}}^{i}}{\|\theta_{\text{ true}}^{i}\|_{2}\|\theta_{\text{est}}^{i}\|_{2}}, \tag{8}\] which quantifies the degree of dissimilarity between \(\theta_{\text{true}}\) and \(\theta_{\text{est}}\). To evaluate the performance of trajectory estimation for all agents, we measure the error between the unobserved true trajectory (based on the true weighting parameters) and the estimated trajectory (based on the recovered weighting parameters) for both visible and invisible agents with the average displacement error (ADE) metric: \[\begin{split} ADE_{\text{visible}}&:=\frac{1}{|\mathcal{V}|\cdot T}\sum_{i\in\mathcal{V}}\sum_{t\in[T]}\|x_{\text{GT},t}^{i}-x_{\text{est},t}^{i}\|_{2},\\ ADE_{\text{invisible}}&:=\frac{1}{|\mathcal{O}|\cdot T}\sum_{i\in\mathcal{O}}\sum_{t\in[T]}\|x_{\text{GT},t}^{i}-x_{\text{est},t}^{i}\|_{2},\end{split} \tag{9}\] where \(x_{\text{GT},t}^{i}\) and \(x_{\text{est},t}^{i}\) denote the ground-truth and estimated positions of the \(i^{\text{th}}\) agent at time step \(t\), respectively. Note that we measure the trajectory estimation error separately for visible and invisible agents, rather than measuring the total error as in [1]. Our results reveal a large difference in estimation performance between the two categories, and hence the necessity of this separation. ### _Simulation Experiments_ #### V-C1 Identifying an agent with constant velocity We evaluate our method in a Monte Carlo study of the running example, in which a visible agent avoids collision with an occluded agent moving at constant velocity. This example represents a simplified scenario where the occluded agent does not react to other agents and keeps his velocity constant. In this case, the problem reduces from solving a dynamic game to solving the following forward optimal control problem \[\begin{split}\min_{\mathbf{x},\mathbf{u}^{1}}\quad& J^{1}(\mathbf{u}^{1};\mathbf{x}_{1})\\ \text{s.t.}\quad& x_{t+1}^{1}=x_{t}^{1}+u_{t}^{1}\Delta t,\\ & x_{t+1}^{2}=x_{t}^{2}+v\Delta t,\quad\forall t\in[T-1],\end{split} \tag{10}\] where \(v\) takes an unknown constant value, and the inverse optimal control problem \[\begin{split}\max_{\theta,\mathbf{x},\mathbf{u},v}\quad& p(\mathbf{y}^{1}|\mathbf{x},\mathbf{u},v)\\ \text{s.t.}\quad&(\mathbf{x}^{1},\mathbf{u}^{1})\text{ is optimal under }(10),\end{split} \tag{11}\]

example's added complexity. It is reasonable to infer that the estimation performance decreases with the number of agents. #### V-C3 Identifying an occluded pedestrian in traffic Next, we consider a more realistic road crossing scenario, depicted in Figure 1. In this scenario, each agent not only tries to keep a safe distance from other agents but also seeks to move forward while keeping himself in the lane according to traffic rules. Figure 1 demonstrates that the blue pedestrian and the driver in the red vehicle are observable to each other. However, the blue pedestrian is occluded by the red vehicle and hence invisible to the driver in the green vehicle. Both the blue pedestrian and the red vehicle apply open-loop Nash strategies: the red vehicle brakes and the blue pedestrian slightly turns left to avoid collisions. 
Although the movement of the blue pedestrian is invisible to the driver in the green vehicle, he can make the inference that someone is running across the road and predict that agent's trajectory based on the observed deceleration of the red vehicle next to him. He can then also brake in advance to avoid a potential collision with the blue pedestrian. Figure 2(c) and Figure 2(f) display the estimation performance for the weighting parameters and the trajectory for both agents at different levels of observation noise. Our method still achieves accurate and noise-robust estimation of the unknown weighting parameters as well as the visible agent's trajectory. Compared with the simpler example in Section V-C2, a more complicated game model contributes to a greater estimation error of the invisible agent's trajectory. Note that in the traffic scenario, the visible agent's motion is highly restricted by the traffic rules, and we only have access to part of the trajectory observation (first 5 steps), therefore the estimation error of the visible agent's trajectory is lower than that of the two agent example in Section V-C2. ## VI Conclusion & Future Work In this work, we have proposed a novel method based on the inverse dynamic game technique to identify behaviors of invisible agents in occluded areas from noise-corrupted observations of only visible agents. Our proposed method recovers unobserved weighting parameters in the game model that best explain the observed trajectories, and simultaneously computes open-loop Nash trajectories for both visible and invisible agents. The computed results can be further utilized in agents' trajectory estimation and prediction. Numerical results in simulation experiments show that our method is robust to observation noise and provides an accurate estimation for both weighting parameters and invisible agents' trajectories. Currently, we evaluate our method in simulation experiments. Future work should investigate techniques to incorporate our method with first-person view sensing results from real sensors (lidar, camera, etc.) so that our proposed method can be utilized in real urban traffic scenarios.
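As a closing illustration, the recovery and trajectory metrics of (8) and (9) amount to a few lines of array arithmetic; the sketch below is an illustrative NumPy rendering under assumed data layouts, not the code used for the reported experiments.

```python
import numpy as np

def cosine_dissimilarity(theta_true, theta_est):
    """Average cosine dissimilarity over agents, as in Eq. (8).
    theta_true, theta_est: lists of 1-D weight arrays, one per agent."""
    sims = [
        t @ e / (np.linalg.norm(t) * np.linalg.norm(e))
        for t, e in zip(theta_true, theta_est)
    ]
    return 1.0 - float(np.mean(sims))

def ade(x_gt, x_est, agent_ids):
    """Average displacement error over the given agent set, as in Eq. (9).
    x_gt, x_est: dicts mapping agent id -> (T, 2) position arrays."""
    errs = [np.linalg.norm(x_gt[i] - x_est[i], axis=1).mean() for i in agent_ids]
    return float(np.mean(errs))
```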
2303.06477
Reproduction Report for SV-COMP 2023
The Competition on Software Verification (SV-COMP) is a large computational experiment benchmarking many different software verification tools on a vast collection of C and Java benchmarks. Such experimental research should be reproducible by researchers independent from the team that performed the original experiments. In this reproduction report, we present our recent attempt at reproducing SV-COMP 2023: We chose a meaningful subset of the competition and re-ran it on the competition organiser's infrastructure, using the scripts and tools provided in the competition's archived artifacts. We see minor differences in tool scores that appear explainable by the interaction of small runtime fluctuations with the competition's scoring rules, and successfully reproduce the overall ranking within our chosen subset. Overall, we consider SV-COMP 2023 to be reproducible.
Marcus Gerhold, Arnd Hartmanns
2023-03-11T18:28:35Z
http://arxiv.org/abs/2303.06477v2
# Reproduction Report for SV-COMP 2023+ ###### Abstract The Competition on Software Verification (SV-COMP) is a large computational experiment benchmarking many different software verification tools on a vast collection of C and Java benchmarks. Such experimental research should be reproducible by researchers independent from the team that performed the original experiments. In this reproduction report, we present our recent attempt at reproducing SV-COMP 2023: We chose a meaningful subset of the competition and re-ran it on the competition organiser's infrastructure, using the scripts and tools provided in the competition's archived artifacts. We see minor differences in tool scores that appear explainable by the interaction of small runtime fluctuations with the competition's scoring rules, and successfully reproduce the overall ranking within our chosen subset. Overall, we consider SV-COMP 2023 to be reproducible. ## 1 Introduction The International Competition on Software Verification (SV-COMP) compares software verification tools on a very large amount of benchmark verification tasks. Associated to the TACAS conference, its first edition took place in 2012 [2]. This report is about SV-COMP 2023, the competition's 12th edition. SV-COMP 2023 is described in a competition report [4] and on its website [3]. The competition report provides a summary of the competition setup and presents the overall tool results and winners in different categories. The website provides the full details about the competition, including its benchmark set and scoring scheme, and detailed plots and tables of the tools' results. 52 different verification tools participated in SV-COMP 2023, which consists of 24,391 different benchmark problems (of which 23,805 are in C and 586 are in Java) in nine categories. SV-COMP is an example of a large computer science experiment that evaluates many tools on many benchmark instances. The outcomes of this experiment are used to rank tools in terms of their ability to solve problems correctly, and in terms of their performance concerning runtime and energy usage. For the outcomes of experimental research to be trustworthy, the experiment needs to be repeatable and reproducible. Here, following the ACM terminology [1], _repeatability_ means that the same researchers that performed the original experiment can repeat it, i.e. run the same benchmarks again on the same system obtaining the same results (up to stated precision limits, which is particularly relevant for randomised/statistical experiments that _will_ show different results on each repetition). This is a very basic requirement; arguably, non-repeatable experiments should not be published in the first place. _Reproducibility_, on the other hand, means that _The measurement can be obtained with stated precision by a different team using the same measurement procedure, the same measuring system, under the same operating conditions, in the same or a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using the author's own artifacts. [1]_ This requires a separate set of researchers, but not necessarily a separate system to run the experiments on. 
The EAPLS similarly defines results to have been reproduced so that a "Results Reproduced" artifact badge can be awarded if _The main results reported in the paper have been obtained in a subsequent study by a person or team other than the author(s), using (in part) artifacts provided by the author(s). [5]_ Of note here is that this definition only requires "the main results" reported in a paper describing an experimental study to be obtained once more. Reproducibility is a sign of quality for experimental research; in areas of computer science related to SV-COMP in particular, artifact evaluations associated to conferences such as TACAS (since 2018) encourage and reward reproducible results. In this report, we summarise our recent attempt at reproducing SV-COMP 2023 and its outcomes. ## 2 Reproducing SV-COMP To the best of our knowledge, our work is the first documented attempt to reproduce a tool competition of the scale of SV-COMP. We thus took a practical approach to find out if we can, with limited effort, reproduce enough of SV-COMP 2023 in a sufficiently independent manner to consider the competition as a whole _likely reproducible_. SV-COMP provides artifacts [4, Table 3] that include the participating tools (in binary or source code form), the benchmark instances that they are executed on, and the scripts that were used to run the competition. Owing to SV-COMP's large size--running SV-COMP 2023 in its entirety once required 1,114 days of CPU time to execute its 490,858 verification runs--the benchmarking is performed on a large cluster of 168 machines administrated by the competition organiser's group at LMU Munich. Inspecting these artifacts, we find that SV-COMP should clearly be repeatable. Without comparable resources, however, any attempt at reproducing anything but a very small fraction of SV-COMP within a reasonable amount of time is infeasible; and even if one had enough time to spare, the competition's scripts are closely geared towards its specific execution environment, requiring additional work to adapt them to different settings. Yet, following the ACM definition, reproducibility does not require the experiments to be in a different setting using a different setup: as long as the team performing the measurements is different, they may use "the same measurement procedure [and] the same measuring system." This is the first point where we apply our practical-with-limited-effort methodology: The organiser of SV-COMP 2023 granted us access to their cluster, so that we could re-use the same "measurement procedure" (scripts and setup) on the same "measuring system" (the cluster with the same machines). Still, running SV-COMP on this cluster originally took the organiser about 8 days (of wall-clock time). Given that our intention was for this reproduction report to be available together with the competition report, and our unfamiliarity with the infrastructure surely resulting in additional delays, a full reproduction remained infeasible for us despite access to the cluster. Instead, as the second practical-with-limited-effort point, we selected a subset of the competition to reproduce, as a "spot check". If our reproduction results are close to the original results, and if we made a representative selection, we should be allowed to generalise our reproduction study's outcome to the whole competition. 
### Results to Reproduce We consider the main result presented for SV-COMP 2023 in its competition report and on its website to be the scoring and ranking of the regularly participating verification tools [4, Table 8]. For each benchmark instance, SV-COMP uses a soft timeout of 900 s after which a tool's result no longer counts for scoring, and a hard timeout of around 960 s after which the tool is forcibly terminated. For every result produced within the soft timeout, the tool receives between \(-32\) (for incorrectly reporting "true") and \(+2\) (for correctly reporting "true") points [4, Table 1]. In this way, the actual tool runtime--although collected and presented in the detailed results tables--does not matter as long as it is within the timeout and except in case of a tie in score. Similar to the timeout, the memory available to tools was limited to 15 GB. Our reproduction attempt thus seeks to check whether we obtain the same _ranking_, checking the scores to see how much (if any) deviation we observe. In particular, given its evaluation scheme, the main results of SV-COMP should be resilient to small changes in runtime, unless the time a tool needs for a certain benchmark instance is right on the timeout boundary. However, SV-COMP would not be resilient to highly nondeterministic tools that have vast runtime or memory usage differences between executions with the same inputs, and to tools that are significantly nondeterministic in their results (which would arguably be tools of limited usefulness, thus tool authors are expected to avoid such behaviour--so that it would indicate tool bugs). ### Selected Subset of Experiments The subset we selected to reproduce SV-COMP's main results has two parts. #### 2.2.1 Ranking check. First, we spot-check the regular verification tools ranking presented in the competition report (explicitly listed for the top three tools [4, Table 10] and implicitly given via the total scores otherwise [4, Table 8]): * Re-run category _ConcurrencySafety_ for ranking places 2-4 (tools _UAutomizer_, _UGemCutter_, and _UTaipan_, respectively) because the scores are close (being 2717, 2710, and 2612, respectively). * Re-run category _SoftwareSystems_ for ranking places 1-2 (tools _Symbiotic_ and _Bubaak_, respectively), again since they are close in scores (of 1604 and 1589, respectively), and because they produce one incorrect result each. * Re-run category _JavaOverall_ for ranking places 1-3 (tools _JBMC_, _GDart_, and _MLB_, respectively) to check that the Java-based part of the competition (which is much smaller--just one category--than its C-based part) is in order. #### 2.2.2 Tools check. Second, we spot-check specific tools in specific categories that showed different interesting behaviour or characteristics: * Re-run tool _VeriFuzz_ in categories _NoOverflows_ and _Termination_ because it has one quite negative score in one category (\(-500\) in _NoOverflows_) while winning a gold medal (first place) in the other (with score 2305). * Re-run tool _Symbiotic_ in categories _MemSafety_ and _Termination_ because it is a "portfolio" tool, where the use, selection, or ordering of the multiple algorithms could lead to nondeterministic behaviour. * Re-run tools _LF-checker_ and _Deagle_ in category _ConcurrencySafety_ because they participate in only this category, and one of them (_Deagle_) wins the gold medal there. All scores we mention above are normalised category scores as described at sv-comp.sosy-lab.org/2023/rules.php. 
They are computed from the raw scores of several sub-categories. In Section 3.2, we report raw scores instead, as they are the ones shown on the detailed per-tool tables; as long as the relationship between raw scores remains the same, the overall ranking will not change. However, smaller differences in raw scores will have a larger impact on the normalised category scores in smaller categories. ## 3 Reproduction Results After having identified the main results we want to check, and the subset of experiments to use for this purpose, we started our reproduction attempt. The artifacts for SV-COMP contain a readme file describing the organisation of the included data (such as per-tool benchmark results), which covers all the data generated in SV-COMP 2023 and processed to produce the main results. However, they do not contain detailed instructions for reproducing the competition. In addition to granting us access to their cluster, the competition's organiser thus also provided such instructions to us. In the subsequent reproduction process, we made observations about the reproduction process, and about the reproduction of the competition's main results. ### Reproduction Process In the process of re-executing our selected subset, we encountered several small problems in following the instructions. For example, * a few required Python dependencies were not installed, and installing them was not part of the first version of our instructions; * our instructions were intended for reproducing one sub-category at a time, but not an entire category in one go--we then received expanded instructions for how to assemble the necessary parameters for entire categories; and * creating tables according to our instructions at first listed results only as "correct" or "incorrect", but not also as "correct-unconfirmed" as in the official result tables, leading to significantly different scores--this was because the organiser had assumed we would not want to run the results validation procedure, for which we then received expanded instructions. Overall, this was a learning process both for us as well as for the competition organiser: We increasingly understood the competition's setup, and the organiser gained an understanding of what level of documentation and tooling is necessary to support a smooth reproduction. In particular, all information and data was in principle available from the start: We could have studied the shell scripts, tools, folder structure, etc. that were part of the artifacts in detail and thereby reverse-engineered the entire process. This however would not have been feasible given the limited time we had, and in general would make independent reproduction hard and unlikely to happen. Overall, though, the problems we encountered during the reproduction process actually increased our confidence in the soundness of the competition setup and its artifacts: By running scripts in unintended ways, deviating from the original instructions, and creating our own variants of e.g. the table templates, we tested the flexibility of the artifacts and ensured that they are reusable. In particular, it would be very unlikely for artifacts exercised in this way to merely "simulate" running the competition and delivering the desired results--it rather looks like we indeed performed the experiments that SV-COMP claims to have performed once again! 
### Reproduction of the Main Results The result tables for our selected subset of (as described in Section 2.2) are available at arnd.hartmanns.name/sv-comp-2023-repro. Overall, the results we obtained are in line with those of SV-COMP 2023, with small deviations in scores throughout but no change in ranking. **Ranking check.** We first look at our spot-check of the ranking in 3 categories. In category _ConcurrencySafety_, we find a small increase in scores compared to the original SV-COMP results for _UAutomizer_ (from 2725 to 2733) and _UTaipan_ (from 2607 to 2613). With _UGemCutter_ reproducibly at 2714 and the gold-medal winner _Deagle_ at 4754 in SV-COMP, the ranking remains unchanged. However, especially _UAutomizer_ and _UGemCutter_ are very close (score difference of 11), and the changes in scores (of +8 and +6) are about on the same order of magnitude as the differences between the tools' scores here. The same happens in category _JavaOverall_, though with smaller absolute differences given the smaller category (with _JBMC_ going from 669 to 667 and _MLB_ from 495 to 496). In _SoftwareSystems_, scores and correct/incorrect result numbers match exactly. **Tools check.** For the individual tools, we confirmed the negative result of _VeriFuzz_ in _NoOverflows_, albeit with a small improvement (from \(-87\) to \(-80\)). In the _Termination_ category, something went wrong in our reproduction: We obtained the same number of "correct true" results, but not a single "correct false" result; and also the distinction between "correct" and "correct-unconfirmed" is missing in our results table, despite the validation clearly having worked for _VeriFuzz_ in the _NoOverflows_ category. These differences look more like a bug in the scripts or an error on our side than a failure in reproducibility related to the tool or its execution; we are currently investigating what the root problem is. For _Symbiotic_ and _Deagle_, we obtained exactly matching scores, and small differences only for _LF-checker_. In terms of the secondary characteristics like runtime, we only saw small changes throughout all our experiments. We looked into the raw results data (in the corresponding.csv files) for some of the cases of slightly different scores. We found various types of differences that appear reasonable overall. For example, in the case of _UAutomizer_, * several benchmark instances changed from timeout to out-of-memory and vice-versa--which is reasonable for difficult instances where the tool needs or tries to use all available resources; and * some changed between a timeout with a result and a pure timeout--which means that the tool once ran into the soft and once into the hard timeout, showing executions at the boundary of the runtime budget that make it or not due to small fluctuations. ## 4 Conclusion We, as researchers independent of the organiser of SV-COMP 2023, were able to re-run a manually but carefully selected subset of SV-COMP 2023 using the organiser's setup and infrastructure. Our reproduction results show small differences in scores, which are mostly well-explained due to occurring for benchmark instances that are barely (not) feasible. These differences do not change the ranking of tools, which we consider the main result of SV-COMP 2023. Thus: Based on a spot-check of a subset of its experiments, using the same experimental setup and environment, we consider **the main results of SV-COMP 2023 to be reproducible**. 
However, the fluctuations we see combined with the closeness of some of the tools' scores in some categories should act as a warning to the SV-COMP organiser to consider the fairness of the competition's ranking in such close calls. We also found that SV-COMP is currently not set up for "easy" reproduction: While all the material is available, we needed to obtain instructions from the competition organiser, which we had to get updates for in an iterative process whenever we found a bug in the instructions or encountered an unforeseen situation. This however increased our confidence in the "honesty" of the SV-COMP artifacts, and provided valuable insights to the organiser for easing the reproducibility of future editions of SV-COMP. Finally, an important consideration is whether a spot-check-based approach like ours is useful or sufficient to establish whether an extensive experiment like SV-COMP is reproducible, or has successfully been reproduced. Given the extent of SV-COMP, a full reproduction is a significant time investment with access to the competition's cluster infrastructure, and likely not feasible without. Although we put thought into our selection of the subset, we could naturally have missed a highly nondeterministic tool that just happened to win a medal by chance. We stipulate that an approach using a _randomly sampled_ subset of experiments could result in a more formal, albeit statistical, guarantee. Data availability.The tables of results that we reproduced as described in this report are available at arnd.hartmanns.name/sv-comp-2023-repro [6].
2307.14584
$ \mathrm{Sr}_{4}\mathrm{Al}_{2}\mathrm{O}_{7}$: A New Sacrificial Layer with High Water Dissolution Rate for the Synthesis of Freestanding Oxide Membranes
Freestanding perovskite oxide membranes have drawn great attention recently since they offer exceptional structural tunability and stacking ability, providing new opportunities in fundamental research and potential device applications in silicon-based semiconductor technology. Among different types of sacrificial layers, the $ \mathrm{(Ca, Sr, Ba)}_{3}\mathrm{Al}_{2}\mathrm{O}_{6}$ compounds are most widely used since they can be dissolved in water and prepare high-quality perovskite oxide membranes with clean and sharp surfaces and interfaces. However, the typical transfer process takes a long time (up to hours) in obtaining millimeter-size freestanding membranes, let alone realize wafer-scale samples with high yield. Here, we introduce a new member of the $ \mathrm{SrO-}\mathrm{Al}_{2}\mathrm{O}_{3}$ family,$ \mathrm{Sr}_{4}\mathrm{Al}_{2}\mathrm{O}_{7},$, and demonstrate its high dissolution rate, about 10 times higher than that of $ \mathrm{Sr}_{3}\mathrm{Al}_{2}\mathrm{O}_{6}$. The high-dissolution-rate of $ \mathrm{Sr}_{4}\mathrm{Al}_{2}\mathrm{O}_{7}$ is most likely related to the more discrete Al-O networks and higher concentration of water-soluble Sr-O species in this compound. Our work significantly facilitates the preparation of freestanding membranes and sheds light on the integration of multifunctional perovskite oxides in practical electronic devices.
Leyan Nian, Haoying Sun, Zhichao Wang, Duo Xu, Hao Bo, Shengjun, Yan, Yueying Li, Jian Zhou, Yu Deng, Yufeng Hao, Yuefeng Nie
2023-07-27T02:06:18Z
http://arxiv.org/abs/2307.14584v1
# Sr\({}_{4}\)Al\({}_{2}\)O\({}_{7}\): A New Sacrificial Layer with High Water Dissolution Rate for the Synthesis of Freestanding Oxide Membranes ###### Abstract Freestanding perovskite oxide membranes have drawn great attention recently since they offer exceptional structural tunability and stacking ability, providing new opportunities in fundamental research and potential device applications in silicon-based semiconductor technology. Among different types of sacrificial layers, the (Ca, Sr, Ba)\({}_{3}\)Al\({}_{2}\)O\({}_{6}\) compounds are most widely used since they can be dissolved in water and prepare high-quality perovskite oxide membranes with clean and sharp surfaces and interfaces. However, the typical transfer process takes a long time (up to hours) in obtaining millimeter-size freestanding membranes, let alone realize wafer-scale samples with high yield. Here, we introduce a new member of the SrO-Al\({}_{2}\)O\({}_{3}\) family, Sr\({}_{4}\)Al\({}_{2}\)O\({}_{7}\), and demonstrate its high dissolution rate, about 10 times higher than that of Sr\({}_{3}\)Al\({}_{2}\)O\({}_{6}\). The high dissolution rate of Sr\({}_{4}\)Al\({}_{2}\)O\({}_{7}\) is most likely related to the more discrete Al-O networks and higher concentration of water-soluble Sr-O species in this compound. Our work significantly facilitates the preparation of freestanding membranes and sheds light on the integration of multifunctional perovskite oxides in practical electronic devices. + Footnote †: These authors contributed equally to this work. *Electronic address: [email protected] *Electronic address: [email protected] **Keywords**: sacrificial layer, Sr\({}_{4}\)Al\({}_{2}\)O\({}_{7}\), freestanding oxide membranes, molecular beam epitaxy **Introduction**: The unique properties of freestanding perovskite oxide membranes, such as extraordinary strain tunability,[1, 2, 3] stacking ability and declamping effect,[4, 5, 6, 7] facilitate the ferroelectric and ferromagnetic phase engineering,[1, 2, 3, 8] super-elasticity[9, 10, 11, 12] and functionality integration of perovskite oxides on silicon wafer,[6, 13, 14, 15, 16, 17] _etc._ The recent surge of research interest in these freestanding membranes is driven by the advances of
2302.07440
Road Redesign Technique Achieving Enhanced Road Safety by Inpainting with a Diffusion Model
Road infrastructure can affect the occurrence of road accidents. Therefore, identifying roadway features with high accident probability is crucial. Here, we introduce image inpainting that can assist authorities in achieving safe roadway design with minimal intervention in the current roadway structure. Image inpainting is based on inpainting safe roadway elements in a roadway image, replacing accident-prone (AP) features by using a diffusion model. After object-level segmentation, the AP features identified by the properties of accident hotspots are masked by a human operator and safe roadway elements are inpainted. With only an average time of 2 min for image inpainting, the likelihood of an image being classified as an accident hotspot drops by an average of 11.85%. In addition, safe urban spaces can be designed considering human factors of commuters such as gaze saliency. Considering this, we introduce saliency enhancement that suggests chrominance alteration for a safe road view.
Sumit Mishra, Medhavi Mishra, Taeyoung Kim, Dongsoo Har
2023-02-15T03:08:53Z
http://arxiv.org/abs/2302.07440v1
# Road Redesign Technique Achieving Enhanced Road Safety by Inpainting with a Diffusion Model ###### Abstract Road infrastructure can affect the occurrence of road accidents. Therefore, identifying roadway features with high accident probability is crucial. Here, we introduce image inpainting that can assist authorities in achieving safe roadway design with minimal intervention in the current roadway structure. Image inpainting is based on inpainting safe roadway elements in a roadway image, replacing accident-prone (AP) features by using a diffusion model. After object-level segmentation, the AP features identified by the properties of accident hotspots are masked by a human operator and safe roadway elements are inpainted. With only an average time of 2 min for image inpainting, the likelihood of an image being classified as an accident hotspot drops by an average of 11.85%. In addition, safe urban spaces can be designed considering human factors of commuters such as gaze saliency. Considering this, we introduce saliency enhancement that suggests chrominance alteration for a safe road view. Traffic safety, Safe road design, Road intervention, Traffic calming, Road saliency ## I Introduction According to a report from the United Nations, road accidents are responsible for 1.3 million deaths and 50 million injuries annually worldwide [1]. The UN General Assembly has proclaimed a "Decade of Action for Road Safety 2021-2030" with the ambitious target of preventing at least 50% of road traffic deaths and injuries by 2030. A holistic approach to road safety includes, among others, road safety policy awareness drives, accident hotspot identification, placement of road warning signs, use of advanced driver assistance systems (ADAS), and changes of infrastructural road design. The existing approaches for accident prediction are based on features extracted from raw data [2 - 4]. For proactive measures, more targeted information is required to increase public awareness of the dangerous road features of existing accident hotspots in cities. Each dangerous road feature is highly related to accident occurrence and is referred to as an accident-prone (AP) feature. Effectiveness of public awareness drives is limited due to a lack of targeted approaches toward human behavior and social psychology [5]. The ADAS aims to reduce human error which is the fundamental cause of almost all road accidents. ADAS applications related to safety include pedestrian detection/ avoidance, lane departure warning/ correction, and blind spot detection. However, the ADAS may affect a driver's risk perception ability and behavior in near-crash scenarios, and can even be detrimental for skilled drivers [6]. A driver's reaction time to accidents varies due to personal behavior, driving capability, age, etc. Significant variations are observed among driver reaction times: from 0.6s for a professional driver to 0.8 - 1s for an "average" driver, and up to 1.5 - 2s for some elderly drivers. The most recent state-of-the-art ADAS notification system claims to have a time-to-collision of up to 2.5s for a recall rate of 0.9 [8 -10]. Hence, reliability of ADAS for accident prevention is not certain. Image processing with street view images can be utilized for safe road design. For structural design changes, reducing the image hazard score by identifying a similar street view via a greedy heuristics search is presented in [11]; however, finding a similar street view image to introduce safety features is a cumbersome task. 
On one hand, it requires collecting a massive dataset which may be unviable for deploying suggested changes based on the similarity of street view images in some locations. The works in [12, 13] present a model to beautiy urban images using a generative adversarial network (GAN) efficient at producing high-quality synthetic data [45] by adding/removing street elements according to specific metrics. The search for new images similar to the synthetic images is reliable, yet the result is different than the original due to contextual loss. During beautification, the GAN model considers the full image context; thus, the original street image may undergo multiple changes to achieve the given beauty standard. This indicates the need for large changes which are difficult or unviable to deploy. Heavy modifications in road design and subsequent re-construction work is a hassle for commuters and residents in terms of construction and demolition waste hazards and restricted traffic movement. Additionally, from the authority's viewpoint, there are practical issues in implementation such as a limited budget and time. Considering this, an efficient methodology of generating masks under which a specific region of the image is eligible for modifications can be helpful. The specific region of the image and the required changes are identified based on AP features the of road view along with human-in-the-loop. This approach acts as an additional layer with other services such as traffic cameras [14], smart signaling [20], and safe routing [25] in making a city smart and safe to drive. The image inpainting technique fills visual information to present complete, high-quality, and highly detailed images that can be used for accident prevention. For inpainting, a mask is used to create new visual information to replace damaged or undesirable parts of a given image. The mask of an image is a binary image consisting of zero and non-zero values: zero at undesirable parts and non-zero at the remaining parts of the image. A new class of deep generative models called diffusion models have been used to inpaint missing or damaged elements in the image. The key point of using a diffusion model is that the deep learning model learns the systematic decay of information due to noise, and enables it to reverse the process to recover the original information. In [15], a latent variable based deep generative model that maps to latent space using a fixed Markov chain is used. This model generates high-resolution images. However, for training these models from scratch, a huge dataset of design features such as chicanes, chokers, street plazas, raised medians, etc. are required to achieve road safety. Therefore a pre-trained diffusion model with generic data is fine-tuned with a limited and available dataset of design features. In [16], GazeShiftNet is presented, which is a model to enhance the saliency of important safe design features in images for a given (saliency) mask while preserving image fidelity. GazeShiftNet can be used to enhance the saliency by chrominance alteration for changing color(s) of AP features as well as marking signs and traffic signals with bright color(s). When the bright colors obtained from the use of the model are actually used, they can redirect the drivers' attention for improved driving safety. In [17], a visual notification system is presented using class activation maps (CAMs) to activate only the important AP features for the classification of accident hotspot images. 
The image processing pipeline presented in [17] highlights the AP features. Our method uses these AP features as a guide for generating masks for inpainting with human supervision. Using the mask of road view images made by a human operator for accident hotspot locations, images are inpainted by a diffusion model to enhance the safety of the scene. The overall road redesign process of our novel method is shown in Fig. 1.

Fig. 1: Road redesign process in our methodology.

Firstly, using the dataset of actual accident events, both hotspots and non-hotspots are collected. A hotspot is identified by clustering the locations of accident events, and road view images of hotspots and non-hotspots are then collected using these locations. In this context, a hotspot is an area rather than a single location. A deep learning classifier is trained to detect hotspot images, and then various types of CAM [17] such as GradCAM, GradCAM++, and ScoreCAM can be leveraged to inspect which AP features lead to the classification of a hotspot. From these AP features, masks, which will be inpainted, are generated with a human-in-the-loop. The AP feature masks can be combined with other masks of road markings, traffic signs, and traffic signals to make a saliency mask. Safety-critical elements such as road markings, which might not be detected by CAM due to their common presence in both hotspots and non-hotspots, should also be considered to achieve road safety. A small dataset of safe road design features is collected and used to fine-tune the diffusion model with a text prompt, such as a class prompt, along with a subject word. For this work, the seven safe road elements listed in TABLE II are used for our diffusion model. The base diffusion model used for this work is Hugging Face stable-diffusion-v1-5, and fine-tuning is executed with DreamBooth [23] and Textual inversion [24]. For the saliency mask designated by the human operator, safe road elements also chosen by a human operator are inpainted with the fine-tuned diffusion model. Because the color of the generated safe road elements can be close to that of adjoining roadway parts, chrominance alteration might be necessary. To this end, a saliency model such as GazeShiftNet can be used. The features of this article can be listed as follows:

* An image inpainting technique that can assist authorities in achieving safe roadway design with minimal intervention in the current structure of a roadway is introduced.
* Demarcation of AP features in street view images of accident hotspots is presented.
* With the fine-tuning of a diffusion model, a methodology for safe road design ensuring minimal intervention in the current road design is introduced.
* For redirecting a driver's attention towards AP features and other accident-critical elements in road view scenes, visual saliency enhancement by chrominance alteration is presented.

Each feature is fully explained in Section IV. ## II Related Works ### _AP features_ AP features are the unsafe features of a road in the visual area of the driver's view and are significantly related to accident occurrence. These unsafe features in the road view have been investigated in previously published works. Some works use a manual inspection process while others use an automatic process by leveraging machine learning techniques. In [18], a cognitive work analysis was used for measuring the effectiveness of road design features based on safety, positive subjective experience, and compliance by drivers. 
Similarly, in [19], the impact of design features such as zebra crossings, speed bumps, etc., which make roads safe and self-explanatory, were studied. Machine learning techniques like regression trees have been used to classify road intersections associated with vehicle-to-pedestrian collisions. In [22], convolutional neural networks (CNNs) are used to analyze satellite images of intersections after extracting high-level features by using an autoencoder. These features are clustered in an unsupervised way based on accident events. This AP feature detection method provides a reliable and objective assessment of road design features. However, the unsupervised method still can not adequately discriminate AP features, but points out the features for being accident-prone. Therefore, as a straightforward methodology, supervised detection of AP features can be used. In our work, a historical accident dataset is used for identifying accident hotspots. Street view images of those hotspot locations are collected to train a binary classification model. For a more effective and direct search of AP features, a CAM-based method is leveraged to inspect why a particular street view image was chosen as a hotspot image. ### _B. Safe road design_ Road design engineering focuses on factors that mitigate the risk of accidents due to speeding, blind curves, vehicle-pedestrian collision, etc. [18]. A safe structural design of pathways reduces accident proneness. Road design features such as chicane, choker, and roundouts help in creating safer and more efficient environments by encouraging drivers to lower driving speed. Chicanes are a series of alternating mid-block curb extensions that narrow the roadway and require vehicles to follow a curving (S-shaped) path, whereas chokers are curb extensions that narrow a street by widening the side-walks or planting strips, effectively creating a pinch point along the street. Roundabouts in large intersections help in both reducing speed and organizing traffic. Raised medians are barriers in the center portion of a street or roadway, helping in speed reduction as well as providing refuge for pedestrians crossing the road. In addition to limiting vehicle speed, to reduce the vehicle-pedestrian collision rate [21], treatment of curb extensions, medians and street plazas is instrumental. Street plazas are semi-enclosed pedestrian-friendly zones adjoining a sidewalk or transit stop while curb extensions widen a sidewalk for a short distance creating safer crossings for pedestrians. ### _Fine-tuning technique of diffusion model_ Recent advances in the latent variable based deep generative model such as the diffusion model and growing interest in personalizing the final output image have led to the study of fine-tuning with a small set of personalized data. DreamBooth, an approach to fine-tuning diffusion models, is presented in [23]. In this approach, a new subject is embedded in the output domain of the model using a new loss function named as autogenous class-specific prior preservation loss. Therefore, the newly added subject can be contextualized in different scenes to generate photorealistic images. Textual inversion [24] is another approach that uses new 'words' (guiding personalized creation) in the embedding space of pre-trained text-to-image models. Pre-trained models enhance the picture quality manifold as compared to from-scratch techniques, generating high resolution and aesthetically appealing realistic results. 
The developer community tested and compared these two methods [26], and a combination of both has shown improved results [27]. Therefore, we employ both fine-tuning techniques for the diffusion model to introduce design modifications for safer streets and roadways. Firstly, the diffusion model is fine-tuned using the DreamBooth technique and then Textual inversion is used with the fine-tuned model. Inpainting with the diffusion model requires an image, mask, and text as inputs. The mask is created manually while being guided by AP features as well as object segmentation. ## III Data Construction For AP feature detection, two datasets are needed: (a) a dataset of real accident events and (b) image data of accident hotspots. For (a), we used real accident event data provided by New York City [17]. The DBSCAN algorithm, widely used for clustering, was chosen for the identification of accident hotspots because of its proven efficacy in [29], [30], and [31]. The average latitude and longitude location of all the accidents of a given cluster is used to find the center of the hotspot. The algorithm in [28] detects hotspots based on raw vehicle data such as vehicle braking, accelerating, and frequency of accidents. For (b), using Google street view, we capture images for the center location of hotspots to cover an approximately 240 degree field of view. Accordingly, 5,088 images belonging to hotspots were collected. Similarly, 4,908 street view images of non-hotspots were randomly collected to make a balanced dataset for classifier training. As per previous studies on roadway design engineering for safe streets [32], [33] and minimizing accident risks [34], [18], [21], safe road structure plays a major role; therefore, for this work, seven major safe road designs were finalized: chicanes, chokers, curb extensions, raised medians, roundabouts, street plazas, and road markings on big intersections. Six to seven street image samples of each safe road design were curated to the pre-train diffusion models. ## IV Proposed Mechanism ### _Selection of AP features_ In our model, the AP features are extracted by post-hoc methods such as the CAM-based method. The CAM-based method provides information by a deep learning classifier specifically about the region of the image that contributes most to classifying that image as a hotspot. For the deep learning classifier, CNN backbones, as shown in Fig. 2, are leveraged. Along with a CNN backbone, a fully connected (FC) layer with two outputs acting as a classifier is used with the Softmax normalizing function to obtain the class probability. In [17], an attention based module (ABM) to improve the inherent interpretability of the CNN architecture and select more contextual AP features is proposed. The ABM generates the attention maps and attention vectors in the training process to appropriately weigh the spatial, channel, and point features from the CNN backbone. This attention based activation can characterize the target area with improved context. For AP feature detection, CNN backbones like Squeezenet, Resnet, VGG, and Densenet can be used for classification. TABLE I shows the metrics for classification for 30% of randomly selected test images out of the 9,996 road view images. Accuracy is the indicator of the ratio of correct prediction to the total number of input samples. Precision indicates the ratio of a total number of correct prediction results to the positive results as predicted by the classification model. 
Recall indicates the ratio of a correct positive result by the classification model to the total number of positive samples. The F1 score is the indicator of precision as well as the robustness of the model and is the mathematical harmonic mean of the precision and recall. The accuracy metric with the ABM block is better for most CNN backbones. Densenet, as a backbone with an ABM module, provides the most accurate results. To detect AP features, different types of CAM such as GradCAM, GradCAM++, and ScoreCAM are used to inspect features leading to the classification of a hotspot. GradCAM, when applied to Squeezenet-ABM for hotspot classification, gives best AP features having more context, as stated in [17]. ### _Mask generation for inpainting by human-in-the-loop_ The introduction of safe road design with minimal intervention in the current structure of a roadway requires street view images and corresponding masks as input for inpainting. However, generating a mask for inpainting a safe design with minimal intervention is complex. For street images, making masks using only AP features may be vague, because the features may not enclose quantifiable objects. For example, masks for sidewalks can have a broader proportionate area than the actual sidewalk, including sections of the adjoining road area. The masks for greenery on the roadside should be touching the ground and can be of some height alongside the road for inpainting bushes or trees accordingly. For simplicity, masks can be generated by a deep learning based object segmentation model with human interaction such as clicks [35], scribbles [36], bounding boxes [37], [38], [39], or extreme point selections [40]. Figure 3 (a) presents object segmentation and Fig.3 (b) provides AP features detected by the CAM-based method. A human operator generates masks using the green marking, as shown in Fig. 3 (c). Object segmentation can be used while considering AP features so that, if possible, the meaningful objects are considered while creating masks [35]. Scribbles is best suited [41] for complex images such as street view images. For our purpose, a human operator can add strokes for mask making while considering road redesign construction factors such as the required time and type of construction, funds, ease of deployment, traffic safety, etc. After mask creation, for the generation of inpainted images based on the provided mask and street view image, the fine-tuned diffusion model can be used. Fine-tuning of the diffusion model is done on seven road design components: chicanes, chokers, curb extensions, raised median, roundabouts, street plazas, and road markings on large intersections. ### _Training and inferencing_ After detection, we then generate a safe road design to replace the AP features. The region considered for safe road design generation is marked by a mask, as described in the previous sub-section. For this purpose, a diffusion model which is already trained on generic data is further trained or fine-tuned to generate a specific safe road design. Fine-tuning of the diffusion model is done using DreamBooth and Textual inversion. The base diffusion model used is Hugging Face stable-diffusion-v1-5. For DreamBooth, class prompts describing the property of safe road designs are used to generate some example images for generalization while training. Class prompts describe a new design, but in generic English language that has been used for base model training. 
For this purpose, respective texts representing the class prompts for each road design, as shown in TABLE II, are used. To generalize, 50 images for each class are generated.

Fig. 2: Generic layer-wise architecture of the deep learning model for classification of hotspot and non-hotspot images. The large box represents the CNN backbone.

Fig. 3: Guidance for mask generation for inpainting: (a) object segmentation; (b) heatmap of AP features; (c) marking of driver relevant AP features in green.

As the text-based inpainting model is fine-tuned to create new safe road designs, new textual names for those safe road designs are needed during training and inferencing. These texts are called subject words. In the class prompt text, as a new subject, a random word not in the English language is preferably used as an identifier word to bind a unique identifier with that specific safe road design. For DreamBooth, a new subject word in the class prompt after the words "photo of..." was included. Training was done for 2000 epochs with a learning rate of \(1\times 10^{-6}\). The original hyperparameters, as used in [23], are adopted. The training script, suitable for Google Colab, is available in [42]. For Textual inversion, a model trained by DreamBooth is used. The same subject word that has been used for DreamBooth is used to create the embedding for a particular safe road design. To create the embedding, 8 tokens per word are selected. The class prompts are used and added in the text file for each respective image for input in the training instance, as provided in the Automatic1111 UI training tab [43]. When training the embeddings of Textual inversion, some text describing the image along with the class prompt and new subject is provided for each image in a text file template. This text prompt is used to create similar images for generalization. Training is performed for 2000 epochs with an embedding learning rate of 0.005. Other hyperparameters are similar to those of the original setting in [24] and are pre-set in the Automatic1111 UI training session. Following training, inferencing for new safe road designs is conducted. For efficient inferencing, Automatic1111 is widely used. Figure 4 shows the user interface (UI) of Automatic1111 along with different inputs and hyperparameters. The trained DreamBooth model or the fine-tuned model is loaded in the Automatic1111 UI. For inpainting, a mask is required, which can be drawn directly in Automatic1111 or can be a separate input. Text input is also required to guide the generation of a new inpainted image. For the text input (prompt), the class prompt along with a new subject word, as used in DreamBooth training, is used. The text should be given according to the position of the mask and the requirement of creating a particular safe road design. For inpainting, various other hyperparameters can be calibrated with the UI. For example, the sampling method can be chosen from a range of available options. The classifier-free guidance scale (CFG Scale), responsible for the weight dependency of the text prompt on the inpainted image, can be adjusted in a range of 0 to 30. Similarly, Denoising strength provides the weight dependency of the inpainted region on the original image. This can also be adjusted by a slider in the UI and varies from 0 to 1. After generating a few samples, a human operator can select the best one. 
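For readers who prefer scripting over the UI, the same inpainting step can also be reproduced with the Hugging Face diffusers library; the following is only an illustrative sketch, in which the checkpoint path, the file names, and the subject word "sks" are placeholder assumptions rather than the exact artifacts used in this work.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a (DreamBooth/Textual-inversion) fine-tuned inpainting checkpoint.
# "./sd15-safe-road-inpaint" is a placeholder path for the fine-tuned model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "./sd15-safe-road-inpaint", torch_dtype=torch.float16
).to("cuda")

street = Image.open("street_view.png").convert("RGB")
mask = Image.open("ap_feature_mask.png").convert("RGB")  # white = region to inpaint

# Class prompt plus the unique subject word bound during fine-tuning
# ("sks" is a placeholder identifier).
result = pipe(
    prompt="photo of sks raised median on a city street",
    image=street,
    mask_image=mask,
    guidance_scale=12.0,     # CFG Scale
    strength=0.7,            # denoising strength
    num_inference_steps=50,
).images[0]
result.save("street_view_inpainted.png")
```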
From our empirical experience, a range of 7 to 18 for the CFG Scale and a range of 0.65 to 0.75 for Denoising strength are best for producing photorealistic results.

Fig. 4: Automatic1111 user interface dashboard shown for inferencing along with different inputs.

### _Visual Saliency for AP features_ To find the visual complexity, [11] illustrates the concept of Scene Disorder (SD), directly related to the number of object categories present in a given scene. A higher SD value implies a more complex image, resulting in reduced attention towards objects that are relevant to accident risk. However, SD might not be directly linked to driver attention. As a more direct metric, the saliency drawing a driver's attention in complex scenes can be compared with the AP features of the scene. If the saliency of the AP features is low, indicating that the scene is complex, then the driver may divert their attention away from the features, making them more prone to accidents. As a metric, saliency is defined as the percentage ratio of the salient AP feature area to the whole AP feature area. The salient area of the original image detected by GazeShiftNet [16] is calculated, and its intersection with an AP feature detected by the CAM-based method is defined as the salient AP feature and calculated for the given images. Then, the percentage ratio of the intersected area with respect to the AP feature area is calculated. The average of the percentage ratio is taken over all images and listed, as shown in Table III. Here, we studied different architectures along with different CAMs. As a result, we found that the combination of Squeezenet-ABM with GradCAM gives the lowest value of visual saliency. This shows that the best AP features have the least saliency, and to make the road view safe, some intervention in the form of chrominance alteration is required for design improvement. This will increase the saliency of AP features by introducing contrastive bright colors to make them more noticeable. ### _Saliency enhancement_ The physical saliency of items that attract attention is a prominent factor affecting a driver's behavior [19]. Therefore, saliency enhancement of AP features, road markings, signs, and traffic signals in the road view scene of drivers can increase safety. For this, masks are generated based on object segmentation and the AP feature maps created by the use of a CAM-based method. Object segmentation helps identify different categories of objects present in the scene; for example, traffic signals and signs are detected. For road markings, we leverage the Inter-Region Affinity KD method presented in [44]. After combining all the masks of the AP feature maps, road markings, traffic signs, and signals, the saliency mask is generated. For a given image and saliency mask, the saliency model is used to change the saliency in the masked region [51]. This results in a generated image that redirects the attention of a driver to the AP features. The new inpainted image of safe structures and the final mask generated by using the original road view image are used as the two inputs into the deep saliency model. ## V Observations and Experiments ### _Analysis of road design intervention_ As mentioned in the inferencing stage in Subsection IV-C, a human operator is present to decide the mask and text prompt for inpainting. The operator can also adjust the input hyperparameters and select the best visually inpainted result. 
Using the ABM-based deep learning classifier, as used for the selection of AP features, we selected 50 images classified as hotspot images. The mask, text prompt, hyperparameter setting, and selection of the best result are performed by a human operator to obtain a new safe road element with minimal intervention for each of the 50 images. Samples of the safe road elements are shown in Fig. 5. A human operator spent approximately 2 min, on average, on each image.
Fig. 5: Road view inpainted with safe road elements. Left column represents the original images, middle column shows the inpainted images, and right column presents the names of the newly added safe road elements.
From the various results, as shown for the case in Fig. 6, cherry picking of the results can be done according to the time and type of construction required, funds, ease of deployment, traffic safety, etc. The qualitative results show that with minimalistic changes, a safe road design can be introduced to enhance the safety of the road view. Furthermore, as a quantitative study, we note the average probability with which the deep learning classifier labels the 50 images as hotspot images. Different CNN backbones along with the ABM module were tested. The results of the quantitative study for the 50 images are shown in TABLE IV. The hotspot classification probability of the best model, Squeezenet-ABM, dropped by 11.85% on average with the new safe and intervened images. Even though the percentage drop by the Densenet CNN backbone is larger, it is not considered a suitable model, as its average classification probability for the original 50 images is just 0.70. ### _Discussion on enhancing Visual saliency for new road design intervention_ As stated, saliency enhancement by chrominance alteration can mitigate accident risk by introducing bright contrastive colors in the saliency mask area. Therefore, we performed experiments to increase the saliency of AP features, road signs, road markings, and traffic signals in road view images by using the saliency mask. Observing the new road view scenes with enhanced saliency, frequent road management practices such as timely paint coating of traffic signals, road markings, and road signs are suggested. As per safe road designs [46], authorities can undertake measures such as raised crossings, speed bumps, etc. to increase the visibility of roads wherever necessary. Moreover, other road design interventions for enhancing visual saliency can use contrast color to highlight potential conflict zones or intersecting areas of lanes [47]. For this, the use of photoluminescent paint for pavements to increase night-time visibility [48], colored asphalt pavements durable in dry and wet weather conditions [49], and retroreflective material-based road signs and markings [50] should be encouraged. ## VI Conclusion In this article, a methodology for minimal intervention in current road design under human supervision to mitigate accident risk is proposed. This road redesign methodology is introduced considering two levels: safe road design and the human factor of road users. With our methodology, street images are first classified as accident hotspots by using a deep neural network. The CNN backbone Densenet along with ABM shows 92% accuracy and 94% precision. For a more precise contextual selection of AP features, we also inspected SqueezeNet-ABM using GradCAM. This classification helps determine the AP features that are the major cause for the classification of a given image as an accident hotspot.
Road design elements that are easy to deploy as well as impactful in preventing accidents are explored. Using this, we introduce a methodology for safe road design using fine-tuning techniques for a diffusion model, namely DreamBooth and Textual inversion. Under human supervision, inpainted street images with safe design features such as a raised median, roundabout, etc. are generated. The image inpainted with a safe road design reduces the chance of hotspot classification by approximately 11.85% with SqueezeNet-ABM. Additionally, to understand scene complexity, we discussed the impact of the overlap between the saliency area and the AP features mask. Our assessment shows that, in complex street view scenes, the saliency of AP features is subdued. Thus, visual saliency enhancement, i.e., rendering AP features in bright and contrastive colors, is required as an intervention to attract a driver's attention. For this, chrominance alteration through contrasted painting of pavements and speed bumps, the use of retroreflective material for road markings and signage, etc. are suggested. Chrominance alteration is likely to redirect a driver's attention to AP features, road markings, signs and traffic signals, thereby aiding in accident prevention. ## VII Acknowledgment This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korea Government (MSIT) (Development of Artificial Intelligence Technology that Continuously Improves Itself as the Situation Changes in the Real World) under Grant 2020-000440.
2301.11772
Electromagnetic memory in arbitrary curved space-times
The gravitational memory effect and its electromagnetic (EM) analog are potential probes in the strong gravity regime. In the literature, this effect is derived for static observers at asymptotic infinity. While this is a physically consistent approach, it restricts the space-time geometries for which one can obtain the EM memory effect. To circumvent this, we evaluate the EM memory effect for comoving observers (defined by the 4-velocity $u_{\mu}$) in arbitrary curved space-times. Using the covariant approach, we split Maxwell's equations into two parts -- projected parallel to the 4-velocity $u_{\mu}$ and into the 3-space orthogonal to $u_{\mu}$. Further splitting the equations into $1+1+2$-form, we obtain \emph{master equation} for the EM memory in an arbitrary curved space-time. We provide a geometrical understanding of the contributions to the memory effect. We then obtain EM memory for specific space-time geometries and discuss the salient features.
Susmita Jana, S. Shankaranarayanan
2023-01-27T15:15:12Z
http://arxiv.org/abs/2301.11772v3
# Electromagnetic memory in arbitrary curved space-times ###### Abstract The gravitational memory effect and its electromagnetic (EM) analog are potential probes in the strong gravity regime. In the literature, this effect is derived for static observers at asymptotic infinity. While this is a physically consistent approach, it restricts the space-time geometries for which one can obtain the EM memory effect. To circumvent this, we evaluate the EM memory effect for comoving observers (defined by the 4-velocity \(u_{\mu}\)) in arbitrary curved space-times. Using the covariant approach, we split Maxwell's equations into two parts -- projected parallel to the 4-velocity \(u_{\mu}\) and into the 3-space orthogonal to \(u_{\mu}\). Further splitting the equations into \(1+1+2\)-form, we obtain _master equation_ for the EM memory in an arbitrary curved space-time. We provide a geometrical understanding of the contributions to the memory effect. We then obtain EM memory for specific space-time geometries and discuss the salient features. Introduction LIGO-VIRGO-KAGRA has detected close to 100 gravitational wave (GW) sources. GW signals emanating from a black hole or neutron star binaries have opened many new research avenues in astronomy, cosmology, and fundamental physics [1; 2; 3; 4]. GWs provide a unique way to test gravity's most extreme, non-linear regime in novel ways. The planned third-generation ground-based detector (Cosmic Explorer and the Einstein Telescope) will allow us to peer far deeper, and LISA will open a new observational window at low frequencies. With more sensitive detectors shortly, the focus has been to understand the physical effects of GWs. _Gravitational wave memory_ is one such effect [5; 6; 7; 8; 9; 10; 11; 12; 13]. GW memory effects -- physically observable phenomena that modify the state of gravitational-wave detectors a little bit from their original undisturbed state -- are one of the key predictions of general relativity [6; 7; 9; 14]. GW memory effects can be divided into two types [12; 13]: _null memory_ that occurs when radiation or massless particles escape from a system to null infinity, and _ordinary memory_ that occurs when the detector recoils relative to its initial center of mass frame. The GW memory is characterized as a gravitational wave signal approaching a nonzero finite value. This aspect of the GW signal is yet to be observed, although LISA is predicted to observe it [15]. Recently, it has been realized that the memory effect can be thought of as a vacuum transition between two different states related by an asymptotic transformation [16; 17]. Since such asymptotic transformations also occur for other gauge theories, there has been an intense activity to obtain analogous memory effects in other gauge theories [18; 19; 20; 21; 22]. Since electromagnetic (EM) theory is the simplest of all gauge theories and can be a potential probe, _electromagnetic memory_ has received much attention [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Like in GW memory, an EM wave generates a permanent change in the relative velocity of test-charged particles attached to a detector in the 2-D surface perpendicular to the direction of propagation of the wave while passing through the detector [cf. (Fig. 1)]. In other words, EM waves directly displace test particles by giving them a momentum (kick), resulting in a relative velocity change. This is different from GW memory as the GW does not displace test particles. 
Instead, GW distorts the space-time geometry itself, which causes a change in separation between two test particles. Bieri and Garfinkle were the first to propose the memory effect due to electromagnetic waves [18]. Like in GW memory, they showed that EM waves produce two types of momentum kicks. In Ref. [19], Winicour showed the absence of memory effect generated by the electromagnetic field coming from distant sources for a bound charge distribution and the non-existence of memory effect due to the magnetic field. In the case of GW memory, gravitational radiation must reach the detector. Likewise, EM radiation also has to reach null infinity to generate _null kick_ memory. Hence, to calculate EM memory, one needs to know the properties of the electric field and radiation at null infinity [18]. More specifically, the original approach by Bieri and Garfinkle requires prior knowledge about the behavior of the fields in asymptotic limits. It can be extended to conformally flat space-times [32; 34]. Also, the analysis does not provide any physical understanding of why the EM memory has such a form in flat and conformally flat space-times. This leads us to the following questions: Can we derive a master equation for _EM memory_ in a generic curved space-time? What role does curved geometry play in EM memory? Can we have a physical understanding of the various contributions to EM memory? This work addresses these three questions using \(1+3\) covariant formalism [35; 36; 37; 38; 39; 40]. There are two reasons why the covariant formalism is better suited to studying EM memory. First, as mentioned earlier, when the EM wave propagates in a given spatial direction, the net momentum experienced by the particle lies in the 2-D surface orthogonal to the direction of propagation of the EM wave (for a pictorial representation, see Fig. 1).
Figure 1: Electromagnetic memory effect that lies in the 2-D surface orthogonal to the direction of the coming wave.
In other words, the EM memory affects the test particle lying on the 2-D surface. Hence, it is more natural to have a formalism that identifies such a dynamical 2-D surface and evaluates EM memory. Second, like in fluid mechanics, we can observe the flow of EM radiation in two ways. First, as in Refs. [18; 19], an asymptotic stationary observer monitors changes in the electric and magnetic fields of the incoming EM radiation. Second, a comoving observer monitors changes in the electric and magnetic fields. In fluid mechanics, these are referred to as the Lagrangian and Eulerian descriptions of flow, respectively. It is well-known that the Eulerian description is better suited for fluids and in cosmology [37; 38; 40]. In this work, we evaluate the memory effect using the \(1+1+2\) covariant formalism [41; 42; 37; 43; 44]. The \(1+1+2\) decomposition of space-time is a natural extension of the \(1+3\) formalism in which the three-space is further decomposed with respect to a given spatial direction. This approach is also referred to as _semi-tetrad formalism_ [45; 46; 47; 48; 49]. The principal advantage is that we can evaluate the net momentum (kick) vector on the 2-D surface for arbitrary space-time. Since this affects all the test particles on the 2-D surface, we refer to this as the _memory vector_. This can also be understood using the fact that the electric and magnetic fields are transverse to the direction of propagation of the EM wave. Using the \(1+1+2\) covariant formalism, we obtain the master equation for the EM memory in arbitrary space-time.
We provide a geometrical understanding of the various contributions to the memory effect. We then obtain the EM memory for specific space-times. The rest of this work is organized as follows: In Sec. II, we provide an overview of the two -- \(1+3\) and \(1+1+2\) -- covariant formalisms and obtain the key geometrical quantities. Then, in Sec. III, we rewrite Maxwell's equations in \(1+3\) and \(1+1+2\) covariant formalisms in arbitrary space-time. Next, in Sec. IV, we obtain the master equation for the EM memory in arbitrary space-time and discuss the key features. In Sec. V, we then obtain EM memory for specific space-times and compare them with the known results in the literature. Finally, in Sec. VI, we summarise our results and discuss possible future directions. In this work, we use the \((-,+,+,+)\) metric signature and set \(c=1/(4\pi\epsilon_{0})=1\). A dot denotes a derivative with respect to the proper time \(\tau\), and a prime denotes a derivative w.r.t the space-like vector \(n^{\mu}\). For easy comparison, we follow the notations of Ref. [40]. ## II Overview of covariant formalism A covariant theory like general relativity does not favor any particular coordinates. However, splitting tensors in time and space is typically required to extract their physical meaning. The splitting achieves this by rewriting Einstein's equations as a set of constraint and evolution equations in a three-dimensional framework. This allows for an intuitive evaluation of the relevant physical system. A choice of coordinates defines a threading of space-time into lines and a slicing into hypersurfaces [50]. Thus, the splitting procedure can be carried out in two distinct ways: first, by employing the so-called \((3+1)-\) formalism, or slicing of space-time [51]; second, by employing the \((1+3)-\) formalism, or threading of space-time [37; 38; 40]. In the \((3+1)-\) decomposition, time is a label of space-like slices \(\Sigma_{t}\) with space coordinates \(x_{i}\). In contrast, in the \((1+3)-\) splitting, the time-like world lines have coordinate \(\tau\) and are labeled by \(x^{\mu}\). In the \((3+1)-\) formulation, the construction only requires space-like hypersurfaces and does not demand causality of the time curves. In the \((1+3)-\) approach, every tensor is split into the directions parallel and orthogonal to a time-like vector (curves), and no condition is imposed on the causality of the spatial distances. Though the two approaches provide different points of view, it has been shown that they are equivalent for space-times with symmetries [50]. We use the covariant \(1+3\) formalism in this work to obtain EM memory. As mentioned in the introduction, the covariant formalism provides a physical understanding of the origin of EM memory in arbitrary space-time. ### Covariant 1+3 Formalism Heckmann, Schucking, and Raychaudhuri developed the covariant approach to General relativity in the 1950s [35; 36], and it was later used in different gravitational and cosmological models [37; 38; 39; 40]. To decompose the 4-D space-time in the \((1+3)-\) formalism, we introduce a family of observers with worldlines tangent to a timelike 4-velocity vector \(u^{\mu}\) satisfying: \[u^{\mu}=\frac{dx^{\mu}}{d\tau};\quad u^{\mu}u_{\mu}=-1\,, \tag{1}\] where \(\tau\) is the proper time measured along the fundamental world line. See Fig. 2.
Figure 2: Visualisation of \(1+3\) formalism.
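As a simple illustration of the normalization condition in Eq. (1) (a worked special case, assuming a metric with \(g_{0i}=0\)): for an observer held at fixed spatial coordinates only \(u^{0}\) is non-zero, and \(u^{\mu}u_{\mu}=-1\) fixes it completely, \[u^{\mu}=\left(\frac{1}{\sqrt{-g_{00}}},\,0,\,0,\,0\right),\qquad u_{\mu}=g_{\mu\nu}u^{\nu}=\left(-\sqrt{-g_{00}},\,0,\,0,\,0\right),\qquad u^{\mu}u_{\mu}=-1\,.\] The comoving FLRW observer used in Sec. V is of this form, with \(g_{00}=-a^{2}(\eta)\) and hence \(u^{\mu}=(1,\,0,\,0,\,0)/a(\eta)\).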
Using the 4-velocity (\(u^{\mu}\)) we can define the following projection tensors [38; 40]: \[U^{\mu}\,_{\nu}=-u^{\mu}u_{\nu};\quad U^{\mu}\,_{\nu}\,U^{\nu}\,_{\gamma}=U^{\mu}\,_{\gamma};\quad U^{\mu}\,_{\mu}=1 \tag{2a}\] \[h_{\mu\nu}=g_{\mu\nu}+u_{\mu}u_{\nu};\quad h^{\mu}\,_{\nu}\,h^{\nu}\,_{\gamma}=h^{\mu}\,_{\gamma};\quad h^{\mu}\,_{\mu}=3;\quad h_{\mu\nu}\,u^{\nu}=0 \tag{2b}\] \(u^{\mu}\), and hence \(U^{\mu}\,_{\nu}\), projects physical quantities parallel to the 4-velocity of the observer, and \(h_{\mu\nu}\) projects quantities into the 3-space orthogonal to \(u^{\mu}\). In the absence of rotation or vorticity, the tensor \(h_{\mu\nu}\) also provides the metric properties of the instantaneous 3-space. In this formalism, the projection of a vector (\(V^{\nu}\)) orthogonal to \(u^{\mu}\) is denoted by \(V_{<\mu>}\). Similarly, the trace-less part of a rank-2 tensor (\(S^{\alpha\beta}\)) projected into the space orthogonal to \(u^{\mu}\) is denoted by \(S_{<\mu\nu>}\). Mathematically, these are given by: \[V_{<\mu>}:=h_{\mu\nu}\,V^{\nu};\ \ \ \ S_{<\mu\nu>}:=\left(h_{\mu\alpha}h_{\nu\beta}-\frac{1}{3}h_{\mu\nu}h_{\alpha\beta}\right)\,S^{\alpha\beta} \tag{3}\] The projections of the time derivative and of the orthogonal spatial derivative of any vector (\(V^{\nu}\)) and tensor (\(S^{\alpha\beta}\)) are defined as: \[\dot{V}^{<\mu>}:=h^{\mu}\,_{\alpha}u^{\nu}\nabla_{\nu}\,V^{\alpha};\ \ \ \ D_{\alpha}\,S^{\beta\gamma}:=h^{\mu}\,_{\alpha}\,h^{\beta}\,_{\nu}\,h^{\gamma}\,_{\rho}\,\nabla_{\mu}\,S^{\nu\rho} \tag{4}\] The covariant derivative of \(u_{\mu}\) can be split into two parts: 1) the directional derivative along the tangent to the world line, and 2) the spatial derivative in the 3-space orthogonal to \(u^{\nu}\). The latter can further be split into trace, traceless symmetric, and antisymmetric parts: \[\nabla_{\nu}u_{\mu}=\frac{\Theta}{3}h_{\mu\nu}+\sigma_{\mu\nu}+\omega_{\mu\nu}-\dot{u}_{\mu}u_{\nu}\,. \tag{5}\] In the above equation, \(\sigma_{\mu\nu}\) is the symmetric trace-free shear tensor that describes the distortion in the matter flow, \(\Theta\) corresponds to the expansion rate of the matter w.r.t the observer, and \(\omega_{\mu\nu}\) is the antisymmetric vorticity tensor describing the rotation of the matter w.r.t a non-rotating frame. The last term contains the relativistic acceleration vector (the directional derivative) \(\dot{u}_{\mu}=u^{\nu}\nabla_{\nu}u_{\mu}\), which corresponds to the degree to which the matter moves under forces other than gravity plus inertia. Further, using the vorticity tensor, we can define the following quantity called the vorticity vector: \[\omega^{\nu}=-\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}\omega_{\alpha\beta}\,u_{\mu} \tag{6}\] where \(\epsilon^{\mu\nu\rho\sigma}=\frac{1}{\sqrt{-g}}\eta^{\mu\nu\rho\sigma}\) is the fully antisymmetric tensor, \(\eta^{\mu\nu\rho\sigma}\) is the Levi-Civita symbol whose values are \(\pm 1\), and we set \(\eta^{0123}=1=-\eta_{0123}\) [52]. The Levi-Civita 3-tensor is defined as: \[\epsilon_{\mu\nu\alpha}\equiv\epsilon_{\mu\nu\alpha\beta}u^{\beta}\,, \tag{7}\] and satisfies the following relations: \(\epsilon_{\mu\nu\alpha}u^{\nu}=0\) and \(\epsilon^{\mu\nu\alpha\beta}=2\left(\,u^{[\mu}\epsilon^{\nu]\alpha\beta}-\epsilon^{\mu\nu[\alpha}u^{\beta]}\,\right)\). The square bracket on the indices denotes antisymmetrization. ### 1+1+2 covariant formalism The \(1+3\)-_covariant_ formalism is well-suited for relativistic cosmology because, at the largest observable scales, the universe is homogeneous and isotropic [38].
These symmetries allow the slicing or threading of the 4-D space-time manifold into a one-parameter family of spacelike hypersurfaces corresponding to cosmic time. Interestingly, it is easy to show that in the Friedmann-Lemaitre-Robertson-Walker (FLRW) background, all physical quantities except for the volume expansion \(\Theta\) and the energy density vanish. Using the Stewart-Walker lemma, in this formalism, it was possible to construct gauge invariant quantities up to second order in cosmological perturbations [53; 54]. However, the \(1+3\)-formalism is not suited if the space-time is inhomogeneous, like spherical symmetry or space-times with local rotational symmetry (LRS) [41]. In such cases, splitting the 3-space orthogonal to the time-like congruence into one spacelike direction and a 2-space is apt [37]. Thus, the \(1+1+2\) decomposition of space-time is a natural extension of the \(1+3\) formalism in which the three-space is further decomposed to a given spatial direction. This approach is called semi-tetrad formalism [45; 46; 47; 48; 49]. As mentioned in the Introduction, our interest is to evaluate the net momentum experienced by a test particle after the electromagnetic wave passes through the space-time point. In the covariant \(1+3\) formalism, the test particle is the fundamental time-like observer. As depicted in (Fig. 1), when the EM wave propagates in a given spatial direction, the net momentum experienced by the particle lies in the 2-D surface orthogonal to the direction of propagation of the EM wave. In other words, the net momentum (kick) vector lies in the 2-D surface. Thus, the net memory effect of the test particle will lie on the 2-D surface; hence, we will refer to this as the _memory vector_. This can also be understood using the fact that the electric and magnetic fields are transverse to the direction of propagation of the EM wave. Thus, it is cogent to further split the 3-space to \(1+2\)-space. More specifically, choosing a generic space-like vector (\(n^{\mu}\)), we split the 3-space into 1 + 2-space [41; 42; 43; 44]. The space-like vector (\(n^{\mu}\)) satisfies the following conditions: \[n^{\mu}n_{\mu}=1,\quad n^{\mu}u_{\mu}=0\,.\] Like in the \(1+3\)-formalism, we project the vectors and tensors defined in 3-space along the space-like direction (\(n^{\mu}\)) and into the 2-space that is orthogonal to \(n^{\mu}\). Here again, the projection tensor (\(\tilde{h}_{\mu\nu}\)) need to be defined: \[\tilde{h}_{\mu\nu}=h_{\mu\nu}-n_{\mu}n_{\nu};\quad\tilde{h}^{\mu} \,_{\nu}\,\tilde{h}^{\nu}\,_{\gamma}=\tilde{h}^{\mu}\,_{\gamma};\quad\tilde{ h}^{\mu}\,_{\mu}=2;\quad\tilde{h}_{\mu\nu}\,u^{\nu}=0;\quad\tilde{h}_{\mu\nu}\,n^{ \nu}=0\,. \tag{8}\] All the vectors and tensors defined in the 3-space in the \(1+3\)-formalism can be split into \(1+2\) form. For instance, an arbitrary space-like vector \(V^{\mu}\) (defined in the 3-space) can be written as: \[V^{\mu}=\mathscr{V}n^{\mu}+\mathscr{V}^{\mu} \tag{9}\] where, \(\mathscr{V}=V^{\mu}n_{\mu}\) and \(\mathscr{V}^{\mu}=\tilde{h}^{\mu}{}_{\nu}V^{\nu}\). Similarly an arbitrary tensor \(v_{\mu\nu}\) on the 3-space can be split as: \[v_{\mu\nu}=V\left(n_{\mu}n_{\nu}-\frac{1}{2}\tilde{h}_{\mu\nu} \right)+2V_{(\mu}n_{\nu)}+V_{\mu\nu}\,, \tag{10}\] where \(V_{(\mu}n_{\nu)}=(V_{\mu}n_{\nu}+n_{\nu}V_{\mu})/2\). 
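For instance, the components \(\mathscr{V}\) and \(\mathscr{V}^{\mu}\) quoted after Eq. (9) follow by contracting the decomposition with \(n_{\mu}\) and with \(\tilde{h}^{\alpha}{}_{\mu}\), using \(n^{\mu}n_{\mu}=1\) and \(\tilde{h}_{\mu\nu}n^{\nu}=0\) from Eq. (8): \[n_{\mu}V^{\mu}=\mathscr{V}\,(n_{\mu}n^{\mu})+n_{\mu}\mathscr{V}^{\mu}=\mathscr{V}\,,\qquad \tilde{h}^{\alpha}{}_{\mu}V^{\mu}=\mathscr{V}\,\tilde{h}^{\alpha}{}_{\mu}n^{\mu}+\tilde{h}^{\alpha}{}_{\mu}\mathscr{V}^{\mu}=\mathscr{V}^{\alpha}\,,\] since \(\mathscr{V}^{\mu}\) lies entirely in the 2-space. The analogous contractions of Eq. (10), with \(n^{\mu}n^{\nu}\) and with \(\tilde{h}^{\alpha\mu}n^{\nu}\), isolate \(V\) and \(V^{\alpha}\), respectively.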
Similarly, the relative acceleration of the time-like observer and other geometrical quantities defined in 3-space can be written in \(1+2\) space as: \[\dot{u}^{\mu} =\mathscr{A}n^{\mu}+\mathscr{A}^{\mu} \tag{11}\] \[\dot{n}^{\mu} =\mathscr{A}u^{\mu}+\alpha^{\mu}\] (12) \[\omega^{\mu} =\Omega n^{\mu}+\Omega^{\mu}\] (13) \[\sigma_{\mu\nu} =\Sigma\left(n_{\mu}n_{\nu}-\frac{1}{2}\tilde{h}_{\mu\nu}\right) +2\Sigma_{(\mu}n_{\nu)}+\Sigma_{\mu\nu} \tag{14}\] where \(\dot{n}^{\mu}:=u_{\nu}\nabla_{\nu}\)\(n^{\mu}\) is the relative acceleration of the space-like vector along the time-like observer. Here, \(\mathscr{A}^{\mu},\alpha^{\mu},\Sigma_{\mu\nu}\), \(\Omega^{\mu}\) are orthogonal to \(n^{\mu}\) as well as \(u^{\mu}\). Also, \(\mathscr{A}^{\mu},\Omega^{\mu}(\Sigma_{\mu\nu})\) are the vectors (tensor) projected on the 2-space. In this formalism, we define the alternating Levi-Civita 2-tensor \[\epsilon_{\mu\nu}\equiv\epsilon_{\mu\nu\alpha}n^{\alpha} \tag{15}\] which is orthogonal to \(n^{\mu}\) and has components only in the 2-space. Given an arbitrary vector \(V^{\mu}\) in the 2-space, we can construct another vector \(\epsilon_{\mu\nu}V^{\nu}\) that is orthogonal to \(V^{\mu}\) which is in the 2-space and has the same length. The \(1+2\) splitting of the 3-space leads to a new directional derivative along the space-like vector \(n^{\mu}\): \[v^{\prime}_{\mu\nu}\equiv n^{\alpha}D_{\alpha}v_{\mu\nu} \tag{16}\] \[\tilde{D}_{\alpha}v_{\mu\nu}\equiv\tilde{h}_{\alpha}{}^{\beta} \tilde{h}_{\mu}{}^{\rho}\tilde{h}_{\nu}{}^{\sigma}D_{\beta}v_{\rho\sigma}\,. \tag{17}\] The derivative in Eq. (16) physically correspond to the variation of the physical quantities on the 2-space along the space-like vector \(n^{\mu}\). The derivative (\(\tilde{D}\)) in Eq. (17) corresponds to the variation of the physical quantities that lie in the 2-space. These will contribute to the memory vector. As we split the covariant derivative of \(u_{\mu}\) in Eq. (5), similarly we can split the covariant derivative of \(n_{\mu}\) as: \[D_{\nu}n_{\mu}=\tilde{D}_{\nu}n_{\mu}+n_{\mu}n^{\prime}_{\nu}=\tilde{\sigma}_{ \mu\nu}+\tilde{\omega}_{\mu\nu}+\frac{1}{2}\tilde{\Theta}\tilde{h}_{\mu\nu}+n_ {\mu}n^{\prime}_{\nu} \tag{18}\] where, \(\tilde{\sigma}_{\mu\nu}\equiv\tilde{D}_{<\nu}n_{\mu>}\), \(\tilde{\omega}_{\mu\nu}\equiv\tilde{D}_{(\nu}n_{\mu)}\) and \(\tilde{\Theta}=\tilde{D}^{\mu}n_{\mu}\) are shear, vorticity and the surface expansion-contraction scalar respectively and \(n^{{}^{\prime}}_{\mu}\) is the spatial derivative along \(n^{\mu}\). Thus, \(\tilde{D}_{\nu}n_{\mu}\) describes the kinematic properties or the relative motion of the space-like curves in the 2-surface orthogonal to \(n^{\mu}\). We can obtain the relation between the kinematic quantities derived from the motion of time-like vector \(u_{\mu}\) and kinematic quantities in 2-space derived from the space-like vector \(n^{\mu}\). See, for instance, Ref. [44]. ## III Electromagnetic theory in covariant formalism The covariant formalism has been extensively employed in studying the evolution of electromagnetic fields in curved space-time [43]. In the covariant formulation, the dynamics and kinematics are constricted by the Bianchi and Ricci identities. The \((1+3)-\) covariant formulation permits the classification of cosmological models, a fluid description of the matter field in FLRW universes. However, as mentioned earlier, the \(1+3\)-formalism is not suited if the space-time is inhomogeneous, like spherical symmetry or space-times with LRS [41]. 
In such cases, the \(1+1+2\)-_covariant_ or semi-triad formalism are better suited. Since we aim to derive EM memory for arbitrary space-times, we use \(1+1+2\)-covariant formalism. We obtain a generic form of the EM memory effect by evaluating the change in the velocity vector \(\Delta u^{\mu}\) that lie in the 2-space. In order to do so, we fix the space-like direction to be the direction of the propagation of the wave. In the case of spherically symmetric space-time, this naturally translates to the radial direction. One key advantage is that the electromagnetic theory in the \(1+1+2\) formalism helps to understand the evolution and dynamics of the EM fields along the space-like direction and in the 2-space normal to \(n^{\mu}\) and \(u^{\mu}\). Our approach makes geometrical contributions to the memory effect more transparent. In the next subsection, we rewrite Maxwell's equations in \(1+3\) formalism in an arbitrary space-time. Later, we formulate the evolution equations of the EM fields in the 2-space and two constraint equations of the same along \(u^{\mu}\) and \(n^{\mu}\)[44]. The key advantage is that we can obtain the memory vector from the projected acceleration vector onto the 2-space. ### In 1+3 formalism The fundamental objects are the Maxwell electromagnetic field tensor \(F^{\mu\nu}\). The \((1+3)\) covariant formalism of Maxwell's electromagnetic theory provides a way to study the interaction of EM fields with different components of general space-time geometry [43]. With the \((1+3)\) decomposition, it is possible to split \(F^{\mu\nu}\) into the electric and magnetic fields. Note that the local coordinates are mathematical parameters that label the points of the space-time manifold \(M\); therefore, the electric and magnetic fields may not have a direct physical meaning. In order to make measurements, an observer brings in an additional structure on \(M\) by introducing the orthonormal coframe field. This gives rise to the split of Maxwell's tensor \(F\) into the physical electric and magnetic fields. Specifically, formalism allows us to split the equations of motion of the fields and currents into two parts: 1. projected parallel to the 4-velocity \(u^{\mu}\) of the fundamental observer 2. projected into the 3-space orthogonal to \(u^{\mu}\). To keep the calculations tractable, we perform all the calculations in source-free and lossless regions. However, the EM memory analysis can be straightforwardly extended to these regions. In the source-free regions, Maxwell's equations are: \[\nabla_{\nu}F^{\mu\nu}=0 \tag{19}\] \[\nabla_{[\gamma}F_{\mu\nu]}=0;\quad\text{or}\quad\nabla_{\nu}{F^ {*}}^{\mu\nu}=0\,, \tag{20}\] where \({F^{*}}^{\mu\nu}\) is the dual to \(F^{\mu\nu}\) and is defined as \({F^{*}}^{\mu\nu}=(1/2)\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}\). In the \(1+3\) formalism, by projecting \(F^{\mu\nu}\) and \({F^{*}}^{\mu\nu}\) along the time-like 4-velocity vector, we can decompose them into electric and magnetic parts. The electric (\(E^{\mu}\)) and magnetic (\(B^{\mu}\)) 4-vectors are defined as: \[E^{\mu}:=F^{\mu\nu}u_{\nu} \tag{21}\] \[B^{\mu}:={F^{*}}^{\mu\nu}u_{\nu} \tag{22}\] From the above definitions, we infer: \[E^{\mu}u_{\mu}=0;\quad B^{\mu}u_{\mu}=0 \tag{23}\] which implies \(E^{\mu}\) and \(B^{\mu}\) have only spatial components. 
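The orthogonality relations (23) are an immediate consequence of the antisymmetry of \(F^{\mu\nu}\) and \({F^{*}}^{\mu\nu}\): contracting an antisymmetric tensor with the symmetric combination \(u_{\mu}u_{\nu}\) gives \[E^{\mu}u_{\mu}=F^{\mu\nu}u_{\nu}u_{\mu}=-F^{\nu\mu}u_{\nu}u_{\mu}=-E^{\mu}u_{\mu}\quad\Longrightarrow\quad E^{\mu}u_{\mu}=0\,,\] and, in the same way, \(B^{\mu}u_{\mu}={F^{*}}^{\mu\nu}u_{\nu}u_{\mu}=0\).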
Given this, we can rewrite \(F_{\mu\nu}\) and \({F^{*}}^{\mu\nu}\) as: \[F_{\mu\nu} =u_{\mu}E_{\nu}-u_{\nu}E_{\mu}+\epsilon_{\mu\nu\alpha\beta}B^{ \alpha}u^{\beta} \tag{24}\] \[\tilde{F}^{\alpha\beta} =\epsilon^{\alpha\beta\mu\nu}u_{\mu}E_{\nu}+\left(\,u^{\alpha}B^{ \beta}-u^{\beta}B^{\alpha}\,\right)\,. \tag{25}\] From the above expressions, we see that the simultaneous transformations \(E^{\mu}\rightarrow-B^{\mu}\), \(B^{\mu}\to E^{\mu}\) leads to \({F^{*}}^{\mu\nu}\to F^{\mu\nu}\). This implies that we can obtain the second Maxwell's equation (20) from the first Maxwell's equation (19) or vice-versa. More specifically, if we obtain the time-like part and space-like part of Maxwell's equations (20), we can write the time-like part and space-like part of the other Maxwell's equations (19) by substituting \(E^{\mu}\rightarrow-B^{\mu}\), \(B^{\mu}\to E^{\mu}\). In the rest of this subsection, we obtain Maxwell's equations by projecting along \(u_{\mu}\) (time-like part) and \(h_{\mu\nu}\) (space-like part) [55]. We first obtain the time-like part of Eq. (20) by multiplying it with \(u_{\mu}\): \[u_{\alpha}\left(\,\nabla_{\beta}\tilde{F}^{\alpha\beta}\,\right)=0 \tag{26}\] Using the decomposition in Eq. (25), the above expression becomes: \[\nabla_{\beta}B^{\beta}-B^{\beta}\dot{u}_{\beta}+\left(\nabla_{ \beta}u_{\alpha}\right)\,\epsilon^{\alpha\beta\mu\nu}u_{\mu}E_{\nu}=0 \tag{27}\] We simplify the above equation using the following steps: First, we combine the first two terms in the LHS. From Eq. (26), we have \(B^{\beta}\dot{u}_{\beta}=-u_{\beta}\dot{B}^{\beta}=-u_{\beta}u^{\alpha}\nabla _{\alpha}B^{\beta}\). Substituting in the second term of the above expression, we have \(\delta^{\alpha}_{\beta}\,\nabla_{\alpha}B^{\beta}+u_{\beta}u^{\alpha}\nabla_{ \alpha}B^{\beta}=h^{\alpha}_{\beta}\left(\nabla_{\alpha}B^{\beta}\right)\). Substituting \(\nabla_{\beta}u_{\alpha}\) from Eq. (5) and using the definition of vorticity vector in Eq. (6), the third term in the LHS of the above expression simplifies to \(-2\omega^{\beta}E_{\beta}\). Thus, the time-like part of Eq. (20) reduces to: \[D_{\beta}B^{\beta}=2\omega^{\beta}E_{\beta}\,. \tag{28}\] The space-like part of Eq. (20) can be obtained by multiplying it with \({h_{\mu}}^{\nu}\), \[{h_{\alpha}}^{\rho}\left(\,\nabla_{\beta}\tilde{F}^{\alpha\beta} \,\right)=0 \tag{29}\] Using a series of steps, the above expression can be rewritten as: \[\dot{B}^{<\rho>}=\left[\sigma^{\rho}\,_{\beta}+\omega^{\rho}\,_{\beta}-\frac{2 \Theta}{3}\,h^{\rho}\,_{\beta}\right]B^{\beta}-\epsilon^{\rho\mu\nu}\,\dot{u}_{ \mu}E_{\nu}-\epsilon^{\rho\mu\nu}\,\nabla_{\mu}E_{\nu}\,. \tag{30}\] where, \(\epsilon^{\mu\nu\alpha}\) is defined in Eq. (7). The above equation provides the dynamical evolution of the magnetic field, while Eq. (28) is the constraint equation. As mentioned above, performing simultaneous transformation \(E^{\mu}\to-B^{\mu}\) and \(B^{\mu}\to E^{\mu}\) in Eqs. (31) and (32), we obtain the time-like and space-like parts of the first Maxwell's equation (19): \[D_{\beta}E^{\beta}=-2\omega^{\nu}B_{\nu} \tag{31}\] \[\dot{E}^{<\rho>}=\left[\sigma^{\rho}\,_{\beta}+\omega^{\rho}\,_{ \beta}-\frac{2\Theta}{3}\,h^{\rho}\,_{\beta}\right]E^{\beta}+\epsilon^{\rho \mu\nu}\,\dot{u}_{\mu}B_{\nu}+\epsilon^{\rho\mu\nu}\,D_{\mu}B_{\nu}\,. \tag{32}\] Similarly, the above equation provides the dynamical evolution of the electric field, while Eq. (31) is the constraint equation. ### In 1+1+2 formalism We aim to calculate the memory effect of EM fields. 
As the memory vector resides in the 2-surface orthogonal to the direction of propagation of the in-coming wave, we need to decompose the 3-space to \(1+2\)-space w.r.t a given spatial direction. In this subsection, we rewrite Maxwell's equations (19, 20) using the space-like vector \(n^{\nu}\) and the projection tensor (8) in \(1+1+2\) formalism. To do this, we first express the EM fields and currents in 3-space into \(1+2\) form: \[E^{\mu}=\mathscr{E}n^{\mu}+\mathscr{E}^{\mu} \tag{33}\] \[B^{\mu}=\mathscr{B}n^{\mu}+\mathscr{B}^{\mu}\,. \tag{34}\] where, \(\mathscr{E}\equiv E^{\mu}n_{\mu}\), \(\mathscr{E}^{\mu}\equiv\tilde{h}^{\mu}\,_{\nu}E^{\nu}\), \(\mathscr{B}\equiv B^{\mu}n_{\mu}\), and \(\mathscr{B}^{\mu}\equiv\tilde{h}^{\mu}\,_{\nu}B^{\nu}\). Following the discussion in Sec. (II.2), it follows that \(\epsilon_{\mu\nu}\mathscr{E}^{\nu}\) is orthogonal to \(\mathscr{E}^{\mu}\) and, similarly, \(\epsilon_{\mu\nu}\mathscr{B}^{\nu}\) is orthogonal to \(\mathscr{B}^{\mu}\). If electric and magnetic fields are orthogonal to each other in 2 space, then we have \[\mathscr{E}^{\nu}=\epsilon_{\mu\nu}\mathscr{B}^{\nu}\quad\mathscr{B}^{\nu}=- \,\epsilon_{\mu\nu}\mathscr{E}^{\nu}\,. \tag{35}\] These relations will play an important role in Sec. (IV) to derive the memory effect. The second step is to split the evolution equations (30, 32) interms of \(\mathscr{E},\mathscr{E}^{\mu},\mathscr{B},\mathscr{B}^{\mu}\). To do that, we project Eq. (32) along spacelike direction \(n^{\mu}\) and multiply Eq. (32) with projection tensor (8). After a long calculation, we obtain the following evolution equations for \(\mathscr{E}\) (along \(n^{\mu}\)) and \(\mathscr{E}^{\mu}\) (in the orthogonal 2-space): \[\dot{\mathscr{E}}+\Theta\mathscr{E} =\alpha^{\mu}\mathscr{E}_{\mu}-2\tilde{\omega}\mathscr{B}+ \epsilon_{\mu\rho}\tilde{D}^{\mu}\mathscr{B}^{\rho} \tag{36}\] \[\dot{\mathscr{E}}_{\tilde{\mu}}+\frac{\Theta}{2}\mathscr{E}_{\mu}= -\left(\alpha_{\mu}+2\epsilon_{\mu\rho}\Omega^{\rho}\right) \mathscr{E}+\left(\Sigma_{\mu\rho}+\Omega\epsilon_{\mu\rho}\right)\mathscr{E }^{\rho}+\epsilon_{\mu\rho}\left(\mathscr{A}^{\rho}-n^{\prime\rho}+\tilde{D}^ {\rho}\right)\mathscr{B}\] \[-\epsilon_{\mu\rho}\left(\mathscr{A}\mathscr{B}^{\rho}+\mathscr{ B}^{\prime\rho}-\left(\tilde{D}^{\rho}\mathscr{B}_{\nu}\right)n^{\nu} \right)\,, \tag{37}\] where, \(\tilde{\omega}=\tilde{\omega}_{\mu\nu}\,\epsilon^{\mu\nu}\), \(\Theta\) is the expansion factor defined in Eq. (5), \(\mathscr{A}^{\mu}\) is the relative acceleration vector in 2-space defined in Eq. (11), \(\tilde{\omega}\) is the vorticity defined in Eq. (18). \(\Omega^{\mu}\), \(\Omega\) is defined in Eq. (13) and \(\Sigma_{\mu\nu}\) is in Eq. (14). The 2-space component of \(\dot{n}^{\mu}\) is \(\alpha^{\mu}\) which is defined in Eq. (12), whereas \(\mathscr{A}=n^{\mu}\dot{u}_{\mu}=-u^{\mu}\dot{n}_{\mu}\) mentioned in Eq. (11), (12). We want to highlight the following points regarding the above expressions: First, the above equations generalize Ampere's law for arbitrary space-time. For example, in Eq. 36, the first term in the LHS corresponds to the time derivative of the electric field along spacelike direction \(n^{\mu}\) and the last term in RHS is the curl of the magnetic field in 2-space. Similarly, the LHS of Eq. (37) is the time derivative of the electric field in 2-space, and in the last term in the RHS is the curl of \(\mathscr{B}^{\rho}\). 
Second, in the flat space-time, the expansion factor (\(\Theta\)), the relative acceleration vector (\(\alpha^{\mu}\)), and vorticity (\(\tilde{\omega}\)) vanish, and the above expression lead to Ampere's law in flat space-time. Thus, background space-time introduces new couplings between the electric and magnetic field components. Lastly, we showed that the simultaneous transformation \(E^{\mu}\to-B^{\mu}\), \(B^{\mu}\to E^{\mu}\) leads to \(F^{*\mu\nu}\to F^{\mu\nu}\). Substituting \(\mathscr{E}\to\mathscr{B}\); \(\mathscr{E}^{\mu}\to\mathscr{B}^{\mu}\) and \(\mathscr{B}\to-\mathscr{E}\); \(\mathscr{B}^{\mu}\to-\mathscr{E}^{\mu}\) in Eqs. (36, 37), we have: \[\dot{\mathscr{B}}+\Theta\mathscr{B}= \mathscr{B}^{\mu}\alpha_{\mu}+2\tilde{\omega}\mathscr{E}-\epsilon _{\mu\rho}\tilde{D}^{\mu}\mathscr{E}^{\rho} \tag{38}\] \[\dot{\mathscr{B}}_{\tilde{\mu}}+\frac{1}{2}\Theta\mathscr{B}_{\mu}= -\left(\alpha_{\mu}+2\epsilon_{\mu\rho}\Omega^{\rho}\right) \mathscr{B}+\left(\Sigma_{\mu\rho}+\Omega\epsilon_{\mu\rho}\right)\mathscr{B} ^{\rho}-\epsilon_{\mu\rho}\left(\mathscr{A}^{\rho}+\tilde{D}^{\rho}-n^{\prime \rho}\right)\mathscr{E}\] \[+\epsilon_{\mu\rho}\left(\mathscr{A}\mathscr{E}^{\rho}+\epsilon _{\mu\rho}\mathscr{E}^{\prime\rho}-\left(\tilde{D}^{\rho}\mathscr{E}_{\nu} \right)n^{\nu}\right) \tag{39}\] Note that we obtain the above equations by projecting Eq. (30) along spacelike direction \(n^{\mu}\) and multiply Eq. (30) with projection tensor (8). Again, the above equations generalize Faraday's law for arbitrary space-time. The last step is to split the constraint equations (31, 28) interms of \(\mathscr{E},\mathscr{E}^{\mu},\mathscr{B},\mathscr{B}^{\mu}\). Substituting (33, 34) and the kinematic quantities (11-14), we get: \[\tilde{D}^{\mu}\mathscr{E}_{\mu}+n^{\mu}\mathscr{E}_{\mu}^{\prime} +\mathscr{E}^{\prime}+\tilde{\Theta}\mathscr{E}+2\left(\Omega\mathscr{B}+ \Omega^{\mu}\mathscr{B}_{\mu}\right) =0 \tag{40}\] \[\tilde{D}^{\mu}\mathscr{B}_{\mu}-n^{\prime\mu}\mathscr{B}_{\mu}+ \mathscr{B}^{\prime}+\tilde{\Theta}\mathscr{B}-2\left(\Omega\mathscr{E}+ \Omega^{\mu}\mathscr{E}_{\mu}\right) =0 \tag{41}\] where \(\tilde{\Theta}\) is the expansion along the space-like vector defined in Eq. (18). The above equations are generalizations of Gauss law. Here again, in the flat space-time, the expansion factor (\(\tilde{\Theta}\)), the relative acceleration vector (\(\alpha^{\mu}\)), vorticity (\(\Omega\)) vanish, and the above expressions lead to Gauss law in flat space-time. ### Energy-momentum tensor of the electromagnetic field As we will show in the next section, the electromagnetic stress tensor plays a crucial role in understanding the memory effect. This subsection evaluates the electromagnetic stress tensor in \(1+1+2\) formalism for an arbitrary space-time. The EM action in an arbitrary background is: \[S=-\frac{1}{4}\int d^{4}x\;\sqrt{-g}\;F_{\mu\nu}F_{\rho\sigma}g^{\mu\rho}g^{ \nu\sigma}\,. \tag{42}\] Varying the above action w.r.t the metric (\(g^{\mu\nu}\)) leads to the following energy-momentum tensor: \[T_{\mu\nu}=\frac{1}{2}g^{\rho\sigma}F_{\mu\rho}F_{\nu\sigma}-\frac{1}{8}g_{ \mu\nu}g^{\rho\sigma}g^{\alpha\beta}F_{\rho\alpha}F_{\sigma\beta}\,. 
\tag{43}\] In \(1+3\)-formalism, the stress-tensor of matter field (\(T_{\mu\nu}\)) can written as: \[T_{\mu\nu}=\rho\,u_{\mu}u_{\nu}+2\,S_{(\mu}\,u_{\nu)}+W_{\mu\nu}\,, \tag{44}\] where, the energy-density \(\rho\), the energy flux \(S^{\alpha}\) and stress-tensor \(W^{\alpha\beta}\) as measured in the observer's worldline are given by [56]: \[\rho=\text{T}^{\mu\nu}u_{\mu}u_{\nu},\quad S^{\alpha}=-h_{\mu}^{\alpha}\,T^{ \mu\nu}u_{\nu},\quad W^{\alpha\beta}=h_{\mu}^{\alpha}\,T^{\mu\nu}h_{\nu}^{\beta} \tag{45}\] For the electromagnetic fields in \(1+3\)-formalism, \(\rho\), \(S_{\mu}\) and \(W_{\mu\nu}\) are: \[\rho\equiv\frac{1}{2}\left(E^{\mu}E_{\mu}+B^{\mu}B_{\mu}\right); \quad S_{\mu}\equiv\epsilon_{\mu\nu\rho}E^{\nu}B^{\rho} \tag{46}\] \[W_{\mu\nu}\equiv\frac{1}{2}\left(E^{\mu}E_{\mu}+B^{\mu}B_{\mu} \right)h_{\mu\nu}-E_{\mu}E_{\nu}-B_{\mu}B_{\nu} \tag{47}\] Rewriting \(\rho\) interms of the variables \((\mathscr{E},\mathscr{E}^{\mu},\mathscr{B},\mathscr{B}^{\mu})\) in \(1+1+2\) formalism, we have: \[\rho=\frac{1}{2}\left(\mathscr{E}^{2}+\mathscr{B}^{2}\right)+\frac{1}{2}\left( \mathscr{E}^{\mu}\mathscr{E}_{\mu}^{\mu}+\mathscr{B}^{\mu}\mathscr{B}_{\mu} \right)=\rho_{(n)}+\rho_{2-\text{space}} \tag{48}\] Thus, \(\rho_{(n)}\) corresponds to the energy of the EM field along \(n_{\mu}\) and \(\rho_{2-\text{space}}\) corresponds to the energy of the EM field in the 2-space. The energy flux \(S_{\mu}\) (a vector in 3-space) can be rewritten in \(1+2\) space as: \[S_{\mu}=\mathscr{S}n_{\mu}+\mathscr{S}_{\mu} \tag{49}\] where \(\mathscr{S}\) is the Poynting vector of the EM field along the space-like vector \(n^{\mu}\) and \(\mathscr{S}_{\mu}\) is the energy flux in the 2-space. These are given by: \[\mathscr{S} =S_{\mu}n^{\mu}=\epsilon_{\mu\nu}\mathscr{E}^{\mu}\mathscr{B}^{\nu} \tag{50}\] \[\mathscr{S}_{\mu} =-\epsilon_{\mu\nu}\left(\mathscr{E}\mathscr{B}^{\nu}-\mathscr{B }\mathscr{E}^{\nu}\right)=-\left(\mathscr{E}\mathscr{E}^{\nu}+\mathscr{B} \mathscr{B}^{\nu}\right) \tag{51}\] In deriving the last expression, we have used the orthogonality condition between the electric and magnetic fields in the 2-space, i. e., \(\mathscr{E}_{\nu}=\epsilon_{\nu\mu}\mathscr{B}^{\mu}\). As we will see in the next section, the memory vector depends on the part of the electromagnetic energy density \(\rho\) and \(\mathscr{S}_{\mu}\). ## IV Memory effect in arbitrary space-time Having written Maxwell's equations in \(1+1+2\) formalism for an arbitrary space-time, we now evaluate the memory effect. Usually, in the literature, one uses the Lorentz force equation to derive EM memory. The equation of motion of a charged body (of mass \(m\) and charge \(e\)) in both gravitational and electromagnetic fields are: \[m\frac{du_{\alpha}}{d\tau}-\frac{m}{2}g_{\beta\gamma,\alpha}u^{\beta}u^{ \gamma}=eF_{\alpha\beta}u^{\beta} \tag{52}\] However, the above expression does not consider the new couplings between the electric and magnetic field components in Eqs. (36) - (39). Hence, we use the complete Maxwell's equations (36) - (41) and explicitly obtain the change in velocity (\(\Delta u^{\mu}\)) of the time-like observer. More specifically, using Eqs. (37, 39), we first calculate the acceleration vector \(\mathscr{A}^{\mu}\) in the 2-space. We can then integrate the expression for the acceleration vector (\(\mathscr{A}^{\mu}\) in the 2-space) with respect to time \(t\) or null time coordinate \(u\equiv(t-r)\) leading to the memory vector. 
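Before carrying this out, note that the split of the energy density in Eq. (48) follows directly from inserting the decompositions (33) and (34) into Eq. (46) and using \(n_{\mu}\mathscr{E}^{\mu}=n_{\mu}\mathscr{B}^{\mu}=0\): \[E^{\mu}E_{\mu}=\left(\mathscr{E}\,n^{\mu}+\mathscr{E}^{\mu}\right)\left(\mathscr{E}\,n_{\mu}+\mathscr{E}_{\mu}\right)=\mathscr{E}^{2}+\mathscr{E}^{\mu}\mathscr{E}_{\mu}\,,\qquad B^{\mu}B_{\mu}=\mathscr{B}^{2}+\mathscr{B}^{\mu}\mathscr{B}_{\mu}\,,\] so that \(2\rho_{(n)}=\mathscr{E}^{2}+\mathscr{B}^{2}\) is the energy density built from the field components along \(n^{\mu}\); it is the gradient of this quantity that drives the first term of the master equation derived below.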
In the rest of this section, we calculate \(\mathscr{A}^{\mu}\) for observers whose tangents are congruent to the space-like geodesics. This implies \(n^{\sigma}D_{\sigma}n^{\rho}=n^{\prime\rho}=0\), i. e., \(n^{\mu}\) is tangent to a congruence of space-like geodesics [44]. Using this condition and substituting \(\dot{\mathscr{E}}_{\bar{\mu}}^{\dot{\circ}}=\tilde{h}_{\mu\nu}\dot{\mathscr{E}} ^{\nu},\,\mathscr{B}^{\prime\,\rho}=n^{\nu}D_{\nu}\mathscr{B}^{\rho}\) in Eqs. (37, 39), we get: \[\tilde{h}_{\mu\nu}\dot{\mathscr{E}}^{\nu}+\epsilon_{\mu\rho}n^{ \nu}D_{\nu}\mathscr{B}^{\rho}= -\frac{1}{2}\Theta\mathscr{E}_{\mu}-\left(\alpha_{\mu}+2\epsilon_{ \mu\rho}\Omega^{\rho}\right)\mathscr{E}+\left(\Sigma_{\mu\rho}+\Omega\epsilon _{\mu\rho}\right)\mathscr{E}^{\rho}\] \[+\left(\epsilon_{\mu\rho}\mathscr{A}^{\rho}+\epsilon_{\mu\nu} \tilde{D}^{\nu}\right)\mathscr{B}-\epsilon_{\mu\nu}\left(\tilde{D}^{\nu}n^{ \rho}\right)\mathscr{B}^{\rho}-\epsilon_{\mu\rho}\mathscr{A}\mathscr{B}^{\rho} \tag{53}\] \[\left(\tilde{h}_{\mu\nu}\dot{\mathscr{B}}^{\nu}-\epsilon_{\mu\rho }n^{\nu}D_{\nu}\mathscr{E}^{\rho}\right) =-\frac{1}{2}\Theta\mathscr{B}_{\mu}-\left(\alpha_{\mu}+2\epsilon_{ \mu\rho}\Omega^{\rho}\right)\mathscr{B}+\left(\Sigma_{\mu\rho}+\Omega\epsilon _{\mu\rho}\right)\mathscr{B}^{\rho}\] \[-\left(\epsilon_{\mu\rho}\mathscr{A}^{\rho}+\epsilon_{\mu\nu} \tilde{D}^{\nu}\right)\mathscr{E}+\epsilon_{\mu\nu}\left(\tilde{D}^{\nu}n^{ \rho}\right)\mathscr{E}_{\rho}+\epsilon_{\mu\rho}\mathscr{A}\mathscr{E}^{\rho} \tag{54}\] Multiplying Eq. (53) with \(\mathscr{B}\), multiplying Eq. (54) with \(\mathscr{E}\) and subtracting the resultant equations leads to: \[\epsilon_{\mu\nu}\mathscr{A}^{\nu} =-\frac{\epsilon_{\mu\nu}}{2}\,\frac{D^{\nu}(\mathscr{E}^{2}+ \mathscr{B}^{2})}{(\mathscr{E}^{2}+\mathscr{B}^{2})}+\left(\Sigma_{\mu\nu}+ \Omega\epsilon_{\mu\nu}-\frac{\Theta}{2}\tilde{h}_{\mu\nu}\right)\,\frac{( \mathscr{E}\mathscr{B}^{\nu}-\mathscr{B}\mathscr{E}^{\nu})}{(\mathscr{E}^{2}+ \mathscr{B}^{2})}\] \[+\epsilon_{\mu\nu}\,\left(\tilde{\sigma}^{\rho\nu}+\tilde{\omega }^{\rho\nu}+\frac{\tilde{\Theta}}{2}\tilde{h}^{\rho\nu}\right)\,\frac{( \mathscr{B}\mathscr{B}_{\rho}+\mathscr{E}\mathscr{E}_{\rho})}{(\mathscr{E}^{ 2}+\mathscr{B}^{2})}+\frac{\epsilon_{\mu\rho}\mathscr{A}\left(\mathscr{E} \mathscr{E}^{\rho}+\mathscr{B}\mathscr{B}^{\rho}\right)}{(\mathscr{E}^{2}+ \mathscr{B}^{2})}\] \[+\frac{\mathscr{B}}{(\mathscr{E}^{2}+\mathscr{B}^{2})}\left( \tilde{h}_{\mu\nu}\dot{\mathscr{E}}^{\nu}+\epsilon_{\mu\rho}n^{\nu}D_{\nu} \mathscr{B}^{\rho}\right)-\frac{\mathscr{E}}{(\mathscr{E}^{2}+\mathscr{B}^{2 })}\left(\tilde{h}_{\mu\nu}\dot{\mathscr{B}}^{\nu}-\epsilon_{\mu\rho}n^{\nu}D_ {\nu}\mathscr{E}^{\rho}\right) \tag{55}\] To have a transparent understanding, we substitute the definitions (48) - (51) in the expression above, resulting in: \[\epsilon_{\mu\nu}\mathscr{A}^{\nu}= -\frac{\epsilon_{\mu\nu}}{2}\,\frac{D^{\nu}\rho_{(n)}}{\rho_{(n)}} -\frac{\epsilon^{\nu\alpha}}{2}\left(\Sigma_{\mu\nu}+\Omega\epsilon_{\mu\nu}- \frac{\Theta}{2}\tilde{h}_{\mu\nu}\right)\,\frac{\mathscr{S}_{\alpha}}{\rho_{( n)}}-\frac{\epsilon_{\mu\nu}}{2}\,\left(\tilde{\sigma}^{\rho\nu}+\tilde{\omega}^{ \rho\nu}+\frac{\tilde{\Theta}}{2}\tilde{h}^{\rho\nu}\right)\,\frac{\mathscr{S} _{\rho}}{\rho_{(n)}}\] \[-\frac{\epsilon_{\mu\rho}\mathscr{S}^{\rho}\mathscr{A}}{2\rho_{( n)}}+\frac{\mathscr{B}}{2\rho_{(n)}}\left(\tilde{h}_{\mu\nu}\dot{\mathscr{E}}^{\nu}+ \epsilon_{\mu\rho}n^{\nu}D_{\nu}\mathscr{B}^{\rho}\right)-\frac{\mathscr{E}}{2 
\rho_{(n)}}\left(\tilde{h}_{\mu\nu}\dot{\mathscr{B}}^{\nu}-\epsilon_{\mu\rho }n^{\nu}D_{\nu}\mathscr{E}^{\rho}\right)\,. \tag{56}\] This is the master equation for the EM memory in arbitrary space-time regarding which we would like to discuss the following points: First, to our understanding, this is a first time the EM memory has been obtained for an arbitrary space-time. In the previous calculations [18; 19], the authors have restricted to asymptotic flat space-times. Second, the last two terms in the RHS of the above expression vanishes in the asymptotic limit. To see this, let us consider a spherically symmetric space-time. Let \(t\) refer to the time coordinate and \(r\) to the radial coordinate and the null coordinate is \(u\equiv t-r\). In the asymptotic limit \(\partial_{u}\sim\partial_{t}\) and \(\partial_{u}\sim-\partial_{r}\). Setting \(u^{\mu}\equiv(1,\,0,\,0,\,0)\) and \(n^{\mu}\equiv(0,\,1,\,0,\,0)\), the penultimate term in the RHS of the above equation simplifies to: \[\tilde{h}_{\mu\nu}\hat{\mathscr{E}}^{\nu}+\epsilon_{\mu\rho}n^{ \nu}D_{\nu}\mathscr{B}^{\rho} \simeq\tilde{h}_{\mu\nu}u^{0}\nabla_{0}\mathscr{E}^{\nu}+\epsilon _{\mu\rho}n^{1}\nabla_{1}\mathscr{B}^{\rho}\simeq\tilde{h}_{\mu\nu}\partial_{u }\mathscr{E}^{\nu}-\epsilon_{\mu\rho}\partial_{u}\mathscr{B}^{\rho}\] \[=f(u)\partial_{u}\left(\tilde{\tilde{h}}_{\mu\nu}\mathscr{E}^{\nu} -\bar{\mathscr{E}}_{\mu\nu}\mathscr{B}^{\nu}\right) \tag{57}\] where, \(\tilde{h}_{\mu\nu}=f(u)\tilde{\tilde{h}}_{\mu\nu}\) and \(\bar{\epsilon}_{\mu\nu}=f(u)\epsilon_{\mu\nu}\). The terms with bar represent their time independent parts. The above expression vanishes if \(\mathscr{E}^{\nu}\) and \(\mathscr{B}^{\nu}\) are orthogonal to each other in the 2-space. As we mentioned earlier (35), in 2-space, the electric and magnetic fields are always orthogonal to each other. Similarly, the last term can also be shown to vanish in the asymptotic limit. Thus, the above expression reduces to: \[\epsilon_{\mu\nu}\mathscr{A}^{\nu}= -\frac{\epsilon_{\mu\nu}}{2}\frac{D^{\nu}\rho_{(n)}}{\rho_{(n)}}- \frac{\epsilon^{\nu\alpha}}{2}\left(\Sigma_{\mu\nu}+\Omega\epsilon_{\mu\nu}- \frac{\Theta}{2}\tilde{h}_{\mu\nu}\right)\frac{\mathscr{S}_{\alpha}}{\rho_{(n)}}\] \[-\frac{\epsilon_{\mu\nu}}{2}\left(\tilde{\sigma}^{\rho\nu}+ \tilde{\omega}^{\rho\nu}+\frac{\tilde{\Theta}}{2}\tilde{h}^{\rho\nu}\right) \frac{\mathscr{S}_{\rho}}{\rho_{(n)}}-\frac{\epsilon_{\mu\rho}}{2\rho_{(n)}} \mathscr{S}^{\rho}\mathscr{A} \tag{58}\] Third, the above expression provides a nice geometrical understanding of the various contributions to memory effect. The first term in the RHS corresponds to the change in the EM field energy (\(\rho_{(n)}\)) along \(n_{\mu}\) in the 2-space. This does not contain any contribution from the kinematical properties of the space-time. In other words, this term will vanish if the EM field energy does not change in the 2-space, like a 2-D flat sheet. However, as we show in the next section, this is non-zero in flat space-time expressed in spherical coordinates. The next two terms in the RHS are proportional to the energy flux (\(\mathscr{S}_{\alpha}\)) in the 2-space. However, both these terms have different kinematical information of the space-time and vanish for flat space-time. The second term in the RHS carries information about shear (\(\Sigma_{\mu\nu}\)), vorticity scalar (\(\Omega\)) related to \(n^{\mu}\) and expansion scalar (\(\Theta\)) corresponding to time-like observer \(u^{\mu}\). 
The third term in the RHS carries information about shear (\(\tilde{\sigma}^{\mu\nu}\)), vorticity tensor (\(\tilde{\omega}^{\mu\nu}\)) and expansion scalar (\(\tilde{\Theta}\)) corresponding to the space-like vector \(n^{\mu}\). Fourth, as mentioned earlier, we have not included external currents or charges in our analysis. Hence, the acceleration vector does not have contribution from the external sources. Hence, the memory vector we obtain is equivalent to the null-kick derived in Refs. [18; 19]. It is also important to note that these authors did not obtain the contributions due to the kinematical properties of the space-time. However, as we will see in the next section, their contribution can be significant. Lastly, to obtain the memory vector, we need to integrate the above expression w.r.t the proper time of the observer -- \(\Delta u^{\mu}\) is the memory vector. It is interesting to note that initially if the observer has non-zero velocity _only_ along the time direction, at a later time, due to the memory effect, there is a non-zero velocity in the 2-space. ## V Application to specific space-times In the previous section, we obtained a master equation for the EM vector for an arbitrary 4-D space-time using \(1+1+2\)-formalism. As we discussed, the memory vector has three distinct contributions. In order to illustrate this fact, we consider specific examples and obtain the memory vector. In this section we obtain memory vector for flat, FLRW, pp-wave and Kerr space-times. ### Minkowski space-time In order to compare the master equation with the existing results [18], we first consider Minkowski space-time in spherical coordinates: \[ds^{2}=-dt^{2}+dr^{2}+r^{2}\,\gamma_{AB} \tag{59}\] where, \[\gamma_{AB}=\left(\begin{array}{cc}1&0\\ 0&\sin^{2}\theta\end{array}\right) \tag{60}\] is the metric describing unit 2-sphere. In Minkowski space-time, the 4-velocity of the time-like congruence observer is \(u^{\mu}\equiv(1,\,0,\,0,\,0)\) and the space-like vector is \(n^{\mu}\equiv(0,\,1,\,0,\,0)\). Since \(\nabla_{\mu}u_{\nu}=0\) and \(\nabla_{\mu}n_{\nu}=0\), the _kinematics_ quantities, defined in Sec. (II.1, II.2) vanish for the Minkowski space-time. Hence only the first term in Eq. (56) will be non-zero, i. e., \[\mathscr{A}^{\nu}_{\rm Flat}=-\,\frac{1}{2}\,\frac{D^{\nu}\rho_{\rm n}}{\rho _{\rm n}}\,. \tag{61}\] As mentioned earlier, the acceleration vector corresponds to acceleration in the 2-Sphere. Hence, it is appropriate to switch to the 2-Sphere index: \[\mathscr{A}^{A}=u^{\mu}\nabla_{\mu}u^{A}=u^{0}\partial_{0}u^{A}\,+\,2u^{0} \Gamma^{A}_{\,0\,B}u^{B}\,.\] Since the 4-velocity \(u^{\mu}\) is zero in the 2-Sphere, we have \(\mathscr{A}^{A}=u^{0}\partial_{0}u^{A}=\partial_{t}u^{A}\). In null coordinate, this becomes \(\mathscr{A}^{A}=\partial_{t}u^{A}\). Substituting the above expression in Eq. (61) and integrating in the null coordinate, we have: \[\Delta u^{A}\equiv\int\,du\,\mathscr{A}^{A}=-\frac{1}{2}\,\int\,du\,\frac{D^{A} \rho_{\rm n}}{\rho_{\rm n}}\,. \tag{62}\] The above expression is velocity kick w.r.t the _Eulerian observers_. To compare this with the net momentum (kick) vector as seen by the asymptotic static observers (_Lagrangian observers_), we need to do a coordinate transformation. Specifically, we need to transform from coordinate basis \(\left(\vec{e}^{\,\theta},\vec{e}^{\,\phi}\right)\) to orthogonal coordinate basis \(\left(\hat{\theta},\hat{\phi}\right)\). 
In terms of \(\left(\hat{\theta},\hat{\phi}\right)\), we have \(\Delta\vec{u}\equiv\Delta u^{\mu}\vec{e}_{\mu}\), where, \(\vec{e}^{\,\theta}=\hat{\theta}/r\,,\,\vec{e}^{\,\phi}=\hat{\phi}/(r\,\sin\theta)\). Thus, the velocity kick w.r.t the asymptotic static observers is given by: \[\Delta\vec{u}_{\rm Flat}=\frac{1}{r}\left(\Delta u^{\theta}\hat{\theta}+\frac {\Delta u^{\phi}}{\sin\theta}\hat{\phi}\right) \tag{63}\] Interestingly, the EM memory vector in Minkowski space-time is inversely proportional to \(r\) and matches with Ref. [18]. This passes the first test that the master equation (56) indeed describes the EM memory vector. In the rest of this section, we obtain the memory vector for non-flat geometries and show the robustness of our approach. ### FLRW space-time The conformally flat FLRW metric in spherical coordinates is: \[ds^{2}=a(\eta)^{2}\,\left(-d\eta^{2}+dr^{2}+r^{2}\,\gamma_{AB}\right) \tag{64}\] where, the conformal time (\(\eta\)) is related to the cosmic time (\(t\)) by \(dt=a(\eta)\,d\eta\). In \(1+3\) formalism, the fundamental observer with time-like \(4-\)velocity in FLRW metric is \(u^{\mu}=dx^{\mu}/dt=dx^{\mu}/(a(\eta))\,d\eta=\left(\,1,\,0,\,0,\,0\,\right)/a (\eta)\). For this choice of observer, the \(3-\)space projection tensor (\(h_{\mu\nu}\)) orthogonal to \(u^{\mu}\) is: \[h_{\mu\nu}=\left(\begin{array}{cc}a^{2}(\eta)&0\\ 0&a^{2}(\eta)\,r^{2}\,\gamma_{AB}\end{array}\right)\,. \tag{65}\] Since the FLRW line-element is homogeneous and isotropic, only the expansion scalar (\(\Theta\)) is non-zero: \[\Theta=3\frac{\mathscr{H}(\eta)}{a(\eta)}\quad\text{where}\quad\mathscr{H}= \frac{a^{\prime}(\eta)}{a(\eta)}\] where \({}^{\prime}\) refers to derivative w.r.t. \(\eta\). Other kinematic quantities vanish, i. e., \(\sigma_{\mu\nu}=\omega_{\mu\nu}=0\). We now split the 3-space into \(1+2\) by choosing the following space-like vector \(n^{\mu}=(0,\,1,\,0,\,0)/a(\eta)\). This satisfies the conditions: \(n^{\mu}n_{\mu}=1\) and \(u^{\mu}n_{\mu}=0\). Repeating the steps discussed in Sec. (II.2) for the line-element (64), we get: \[\tilde{\Theta}=\frac{2}{a(\eta)}\frac{1}{r}\,,\tilde{\sigma}_{\mu\nu}=\tilde{ \omega}_{\mu\nu}=0.\] It is important to note that while \(\Theta\) is a function of \(\eta\) only, \(\tilde{\Theta}\) depends on both \(\eta\) and \(r\). Also, \(\Theta\) depends on the Hubble parameter \(\mathscr{H}\), while \(\tilde{\Theta}\) is inversely proportional of \(r\). Hence, at large distances, \(\tilde{\Theta}\) decays faster compared to \(\Theta\). Substituting the above expressions in Eq. (58), we have: \[\mathscr{A}^{\nu}_{\rm{FLRW}}=-\,\frac{1}{2}\,\frac{D^{\nu}\rho_{\rm{n}}}{ \rho_{\rm{n}}}+\frac{1}{4\,\rho_{\rm{n}}}\,\mathscr{S}^{\nu}(\Theta-\tilde{ \Theta})\,. \tag{66}\] Like Minkowski space-time, \(\mathscr{A}^{\nu}\) will have components only in the 2-Sphere. Using the fact that the fundamental observers have zero velocity in the 2-Sphere and repeating the earlier analysis, we have \[\mathscr{A}^{A}=u^{0}\partial_{0}u^{A}=\frac{1}{a(\eta)}\,\frac{\partial u^{A} }{\partial\eta}\,.\] In terms of the null coordinate \(u(\equiv\eta-r)\), we have: \[\mathscr{A}^{A}=\frac{1}{a(u)}\,\frac{\partial u^{A}}{\partial u}\,.\] Substituting the above expression in Eq. (66), we have: \[\frac{\partial u^{A}}{\partial u}=-\,\frac{a(u)}{2}\,\frac{D^{A}\rho_{\rm{n}} }{\rho_{\rm{n}}}+\frac{a(u)}{4\,\rho_{\rm{n}}}\,\mathscr{S}^{A}(\Theta-\tilde {\Theta})\,. 
\tag{67}\] Integrating the above expression w.r.t \(u\), leads to the following memory vector: \[\Delta u^{A}_{\rm{FLRW}}=-\frac{1}{2}\,\int du\,\frac{a(u)}{\rho_{\rm{n}}}\,D ^{A}\rho_{\rm{n}}+\frac{1}{4}\,\int du\,\frac{a(u)}{\rho_{\rm{n}}}\,\mathscr{ S}^{A}(\Theta-\tilde{\Theta}) \tag{68}\] This is the expression for the memory vector in FLRW space-time regarding which we want to highlight the following points: First, unlike Minkowski space-time, here the fundamental observers are Eulerian, and hence, we do not have to transform the above expression to Lagrangian observers. Second, our results differ from the results of Ref. [34]. In Ref. [34], the authors show that the EM memory effect in FLRW differs from the Minkowski only by the conformal factor \(a(\eta)\) or \(a(u)\). In other words, their analysis did not account for the geometric contribution to the memory effect. As mentioned earlier, the geometric contribution leads to a non-zero energy flux (\(\mathscr{S}^{A}\)) contribution. Also note that the ordinary memory derived in Ref. [34] is not present in Eq. (68) as we have assumed any external charge or current to be zero. Third, we find that \(\rho_{(n)}\) and the energy flux (\(\mathscr{S}^{A}\)) contribute oppositely. It will be interesting to see whether the two contributions nullify the EM memory. ### _pp-wave_ space-times In this subsection, we derive the EM memory for a special kind of plane-fronted wave with parallel rays (pp-waves) called plane-wave metric [57]: \[ds^{2}=-2dudv-\mathscr{F}(u,\,x,\,y)\,du^{2}+dx^{2}+dy^{2} \tag{69}\] where, \(\mathscr{F}(u,\,x,\,y)=A(u)(x^{2}-y^{2})+2B(u)xy\) describes the plane wave and \(A(u),B(u)\) are arbitrary functions such that \(\mathscr{F}>0\). Note that \(u,v\) are not light-cone coordinates. \(u\) is time-like coordinate and \(v\) is a null coordinate. We split the above 4-D space-time into \(1+3\) form and later into \(1+1+2\)-form by considering the following time-like velocity vector (\(u^{\mu}\)) and space-like vector (\(n^{\mu}\)): \[u^{\mu}\equiv\left(\mathscr{F}(u,\,x,\,y)^{(-1/2)},\,0,\,0,\,0\right),\quad n ^{\mu}\equiv\left(\mathscr{F}(u,\,x,\,y)^{(-1/2)},\,-\,\mathscr{F}(u,\,x,\,y )^{(1/2)},\,0,\,0\right)\,.\] For the above choice of time-like vector, the 3-space projection tensor (\(h_{\mu\nu}\)) is: \[h_{\mu\nu}=\begin{bmatrix}0&0&0&0\\ 0&\frac{1}{\mathscr{F}(u,\,x,\,y)}&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix} \tag{70}\] Substituting these in the definitions in Sec. (II), only non-zero quantity is the expansion scalar (\(\Theta\)): \[\Theta=-\frac{(x^{2}-y^{2})\,\,A^{\prime}(u)\,+\,2xy\,B^{\prime}(u)}{2\,\,(2B (u)\,xy\,+\,A(u)\,\,(x^{2}-y^{2})\,)^{3/2}}\,. \tag{71}\] The non-zero projection tensor \(\tilde{h}_{\mu\nu}\) components in the 2-space are \(\tilde{h}_{xx}=1\), \(\tilde{h}_{yy}=1\). Thus, the memory vector for the special kind of pp-wave space-times is: \[\mathscr{A}^{\nu}_{\rm PP}=-\,\frac{1}{2}\,\frac{D^{\nu}\rho_{\rm n}}{\rho_{ \rm n}}+\frac{\Theta}{4\,\rho_{\rm n}}\mathscr{S}^{\nu}\,. \tag{72}\] Here, the acceleration of the time-like observer is confined to the \(x-y\) plane, i. e., \[\mathscr{A}_{\rm PP}^{A}=-\,\frac{1}{2}\,\frac{D^{A}\rho_{\rm n}}{\rho_{\rm n}}+ \frac{\Theta}{4\,\rho_{\rm n}}\,\mathscr{S}^{A}\,, \tag{73}\] where, the index \(A,\,B\) corresponds to \((x,\,y)\). Evaluating the acceleration vector along \(x\) and \(y\), we have: \[\mathscr{A}_{x(y)}^{\rm(PP)}=-\frac{1}{2\,\rho_{\rm n}}\,\partial_{x(y)}\,( \rho_{\rm n})+\frac{\Theta}{4\,\rho_{\rm n}}\,\mathscr{S}_{x(y)}\,. 
\tag{74}\] Integrating the above equation w.r.t \(u\), we have: \[\Delta u_{x(y)}^{\rm PP}=-\frac{1}{2}\int du\ \frac{\partial_{x(y)}\,( \rho_{\rm n})}{\rho_{\rm n}}\,+\,\frac{\Theta}{4}\,\int du\frac{\mathscr{S}_{x (y)}}{\rho_{\rm n}}\,. \tag{75}\] The above expression for the velocity kick is for a generic plane-wave metric. To gain some physical intuition, we consider two specific forms -- Penrose limit of the Schwarzschild and FLRW space-times [57]. For Schwarzschild space-time, we have \[A(u)=\frac{6}{25u^{2}};\quad B(u)=0\] Substituting these in Eq. (71), we have: \[\Theta_{\rm PP,Sch}=\frac{5}{\sqrt{6(x^{2}-y^{2})}}\,.\] It is interesting to note that although the space-time metric does not differentiate between the two spatial coordinates \((x,y)\), in order for \(\Theta\) to be real, the above expression demands that \(x>y\). Thus, velocity kick due to EM wave in PP-wave limit of Schwarzschild space-time can only occur if \(x>y\) and is given by: \[\Delta u_{x(y)}^{\rm PP\,Sch}=-\frac{1}{2}\int du\ \frac{\partial_{x(y)}\,( \rho_{\rm n})}{\rho_{\rm n}}\,+\,\frac{5}{4\sqrt{6(x^{2}-y^{2})}}\,\int du \frac{\mathscr{S}_{x(y)}}{\rho_{\rm n}}\,. \tag{76}\] In the case of Penrose limit of FLRW space-time with power-law scale factor \(a(t)\sim t^{h}\), we have: \[A(u)=-\frac{h}{(1+h)u^{2}},\quad B(u)=0\,.\] Substituting these in Eq. (71), we have: \[\Theta_{\rm PP,FLRW}=\sqrt{\frac{(1+h)}{h(y^{2}-x^{2})}};\quad\Delta u_{x(y)} ^{\rm PP\,FLRW}=-\frac{1}{2}\int du\,\frac{\partial_{x(y)}\,(\rho_{\rm n})}{ \rho_{\rm n}}+\frac{\sqrt{(1+h)}}{4\sqrt{h(y^{2}-x^{2})}}\int du\frac{ \mathscr{S}_{x(y)}}{\rho_{\rm n}}\,. \tag{77}\] Here again, we see that in-order for \(\Theta\) to be real, the above expression demands that \(y>x\). Thus, velocity kick due to EM wave in PP-wave limit of FLRW space-time occurs in a different region of the 2-space compared to Schwarzschild. Thus, EM memory has a distinct signature for different space-times and can potentially be used as a probe. ### Kerr space-time In this section, we derive the memory effect in Kerr space-time. In Boyer-Lindquist coordinates (\(t\), \(r\), \(\chi\), \(\phi\)), the Kerr space-time is: \[ds^{2}= \left(\frac{2mr}{r^{2}+a^{2}\chi^{2}}-1\right)dt^{2}+\left(\frac{ r^{2}+a^{2}\chi^{2}}{r^{2}-2mr+a^{2}}\right)dr^{2}+\left(\frac{r^{2}+a^{2} \chi^{2}}{1-\chi^{2}}\right)d\chi^{2}\] \[-\left[\frac{4mar\left(1-\chi^{2}\right)}{r^{2}+a^{2}\chi^{2}} \right]dtd\varphi+\left(1-\chi^{2}\right)\left[r^{2}+a^{2}+\frac{2ma^{2}r \left(1-\chi^{2}\right)}{r^{2}+a^{2}\chi^{2}}\right]d\varphi^{2}\,. \tag{78}\] where \(\chi\equiv\cos\theta\). In this case, the time-like observer 4-velocity (\(u^{\mu}\)) and the space-like vector (\(n^{\mu}\)) are [58]: \[u^{\mu}=\left[\sqrt{\frac{r^{2}-2mr+a^{2}}{r^{2}+a^{2}\chi^{2}}},0,0,0\right],n^{\mu}=\left[0,\sqrt{\frac{r^{2}-2mr+a^{2}}{r^{2}+a^{2}\chi^{2}}},0,0\right]\,.\] We give below the kinematical quantities (discussed in Sec. (II.2)) for Kerr space-time in \(1+1+2\) formalism obtained in Ref. 
[58]: \[\Theta=0; \Sigma_{\mu\nu}=0\,; \tag{79}\] \[\Omega=-\frac{2mar\chi\sqrt{\mathscr{L}}}{J\sqrt{\mathscr{K}^{3}}}; \tilde{\Theta}=\frac{\mathscr{W}}{\mathscr{I}\sqrt{\mathscr{K}^{3} \mathscr{L}}}\,;\] (80) \[\tilde{\omega}_{\mu\nu}=\tilde{\omega}\epsilon_{\mu\nu}=0; \mathscr{A}=-\frac{m\mathscr{D}\sqrt{\mathscr{L}}}{J\sqrt{ \mathscr{K}^{3}}}\,;\] (81) \[\tilde{\sigma}_{\mu\nu}=\left[\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&-\frac{1}{2}\frac{a^{2}(m-r)\sqrt{\mathscr{L}}}{\mathscr{I}\sqrt{ \mathscr{L}}}&0\\ 0&0&0&\frac{1}{2}\frac{a^{2}(m-r)\mathscr{M}^{2}\sqrt{\mathscr{L}\mathscr{K}} }{\mathscr{I}^{2}}\end{array}\right] \tag{82}\] where, \[\mathscr{M}=\chi^{2}-1; \mathscr{D}=-r^{2}+a^{2}\chi^{2};\quad\mathscr{L}=r^{2}-2mr+a^{2} \tag{83}\] \[\mathscr{I}=r^{2}-2mr+a^{2}\chi^{2};\quad\mathscr{K}=r^{2}+a^{2} \chi^{2} \tag{84}\] \[\mathscr{W}=2r^{3}(r-2m)^{2}+a^{4}\chi^{2}\left(m+r-m\chi^{2}+r\chi^{2}\right)+a^ {2}r^{2}\left(-3m+r+\chi^{2}(3r-5m)\right) \tag{85}\] Substituting these expressions in Eq. (58), and noting that the memory vector lies in the 2-D surface, we get: \[\mathscr{A}^{A}= -\,\frac{1}{2}\frac{D^{A}\rho_{(n)}}{\rho_{(n)}}-\frac{\Omega}{2} \frac{\epsilon^{AB}\,\mathscr{S}_{B}}{\rho_{(n)}}\,-\frac{1}{2}\left(\tilde{ \sigma}^{AB}+\frac{\tilde{\Theta}}{2}\tilde{h}^{AB}\right)\frac{\mathscr{S}_{ B}}{\rho_{(n)}}-\frac{\mathscr{A}}{2\rho_{(n)}}\,\mathscr{S}^{A} \tag{86}\] This is the EM memory vector for an Eulerian observer in Kerr space-time. Note that this is a generic result for any value of angular momentum. For a better physical insight, we consider \(a\to 0\) limit. Substituting \(a\to 0\) in Eqs. (79 - 85), we have \[\mathscr{M}_{0}=\chi^{2}-1; \mathscr{D}_{0}=-r^{2};\quad\mathscr{L}_{0}=r^{2}-2mr \tag{87}\] \[\mathscr{J}_{0}=r^{2}-2mr; \mathscr{K}_{0}=r^{2};\quad\mathscr{W}_{0}=2r^{3}(r-2m)^{2}\] (88) \[\Omega_{0}=\tilde{\sigma_{0}}^{\mu\nu}=0; \tilde{\Theta}_{0}=2\sqrt{\frac{(r-2m)}{r^{3}}};\quad\mathscr{A} =\frac{m}{\sqrt{r^{3}(r-2m)}} \tag{89}\] Substituting the above quantities in Eq. (86), we have: \[\mathscr{A}^{A}= -\,\frac{1}{2}\frac{D^{A}\rho_{(n)}}{\rho_{(n)}}-\frac{1}{2}\sqrt{ \frac{r-2m}{r^{3}}}\frac{\mathscr{S}^{A}}{\rho_{(n)}}-\frac{1}{2\rho_{(n)}}\, \frac{m}{\sqrt{r^{3}(r-2m)}}\,\mathscr{S}^{A}\,. \tag{90}\] This is the EM memory vector for an Eulerian observer in Schwarzschild space-time, regarding which we want to mention the following points: First, in the limit, \(r\rightarrow\infty\), reduces to Minkowski space-time expression (61). Second, in the limit \(r\rightarrow\infty\), the subleading term is proportional to \(r^{-1}\). Third, to derive the memory vector \(\Delta u^{A}\), we have to switch to the null time coordinate \(u=t-r\) and integrate Eq. (90) with respect to \(u\) at the asymptotic limit. Lastly, to evaluate the memory effect experienced by the static asymptotic (Lagrangian) observer, we need to do the transformation from \(\left(\vec{e}^{\vartheta},\vec{e}^{\,\phi}\right)\) to the orthogonal coordinate basis \(\left(\hat{\theta},\hat{\phi}\right)\) like in Sec. (V.1). ## VI Conclusions In this work, we have derived a master equation for electromagnetic memory in an arbitrary space-time. We used the covariant formalism to obtain the same. More specifically, we used the \(1+1+2\) covariant formalism. The \(1+1+2\) decomposition of space-time is a natural extension of the \(1+3\) formalism in which the three-space is further decomposed using a given spatial direction. 
This choice of covariant formalism is because the net momentum (kick) vector lies on the 2-D surface for arbitrary space-time. Also, the electric and magnetic fields are transverse to the direction of propagation of the passing EM wave. The EM memory (58) has three distinct contributions: First contribution is due to the change in the EM field energy (\(\rho_{(n)}\)) along \(n^{\mu}\) in the 2-space. This is non-zero for Minkowski space-time. The second contribution is proportional to the energy flux (\(S^{\alpha}\)) in the 2-space. This has kinematical information of the space-time and vanishes for the flat space-time. The third contribution is proportional to the acceleration \(\mathscr{A}\) along the time-like vector \(u^{\mu}\). To our understanding, the earlier approaches could not isolate the different contributions to the EM memory as done in this work. We then obtained the EM memory for different space-times. In the case of FLRW space-time, we showed that the earlier analysis did not account for the geometric contribution to the memory effect [34]. Specifically, their analysis did not account for the geometric contribution leading to a non-zero energy flux (\(\mathscr{S}^{A}\)) contribution. We have also obtained the EM memory for Kerr space-time. We also showed that the EM memory has a distinct signature for different pp wave space-times and can potentially be used as a probe. It would be interesting to extend our analysis for black holes with multiple horizons and those that are not asymptotically flat. These may be particularly relevant for using EM memory as a probe to PBH. Finally, our analysis points to the possibility of using \(1+1+2\) covariant formalism to understand gravitational memory. These are currently under investigation. ###### Acknowledgements. The authors thank A. Chowdhury, S. Mahesh Chandran and A. Kushwaha for comments on the earlier version of the manuscript. The work is supported by the SERB-Core Research grant.
2305.03681
Beam displacement tolerances on a segmented mirror for higher-order Hermite-Gauss modes
Odd-indexed higher-order Hermite-Gauss (HG) modes are compatible with 4-quadrant segmented mirrors due to their intensity nulls along the principal axes, which guarantees minimum beam intensity illuminating the bond lines between the segments thus leading to low power loss. However, a misplaced HG beam can cause extra power loss due to the bright intensity spots probing the bond lines. This paper analytically and numerically studies the beam displacement tolerances on a segmented mirror for the $\mathrm{HG_{3,3}}$ mode. We conclude that for "effective" bond lines with 6 $\mu$m width, and the $\mathrm{HG_{3,3}}$ beam size chosen to guarantee 1 ppm clipping loss when centered, the beam can be rotated by roughly 1 degree or laterally displaced by 4% of its beam size while keeping the total power on the bond lines under 1 ppm. We also demonstrate that the constrained beam displacement parameter region that guarantees a given power loss limit, or the beam displacement tolerance, is inversely proportional to the bond line thickness.
Liu Tao, Nina Brown, Paul Fulda
2023-05-05T16:50:39Z
http://arxiv.org/abs/2305.03681v1
# Beam displacement tolerances on a segmented mirror for higher-order Hermite-Gauss modes ###### Abstract Odd-indexed higher-order Hermite-Gauss (HG) modes are compatible with 4-quadrant segmented mirrors due to their intensity nulls along the principal axes, which guarantees minimum beam intensity illuminating the bond lines between the segments thus leading to low power loss. However, a misplaced HG beam can cause extra power loss due to the bright intensity spots probing the bond lines. This paper analytically and numerically studies the beam displacement tolerances on a segmented mirror for the HG\({}_{3,3}\) mode. We conclude that for "effective" bond lines with \(6\mu\)m width, and the HG\({}_{3,3}\) beam size chosen to guarantee \(1\,\mathrm{ppm}\) clipping loss when centered, the beam can be rotated by roughly \(1\) degree or laterally displaced by \(4\,\mathrm{\char 37}\) of its beam size while keeping the total power on the bond lines under \(1\,\mathrm{ppm}\). We also demonstrate that the constrained beam displacement parameter region that guarantees a given power loss limit, or the beam displacement tolerance, is inversely proportional to the bond line thickness. Thermal noise of the test masses is one of the limiting noise sources in advanced gravitational wave (GW) detectors [1; 2; 3]. It is expected to remain a limiting noise source in future detectors [4; 5], despite radical changes to the design including cryogenic operations, new materials, and the use of longer laser wavelengths [6; 7; 8; 9]. There has been a continued research interest within the gravitational wave community in the so-called "flat beams", in particular higher-order Hermite-Gaussian (HG) beams as the probe beam for the detectors [10; 11; 12; 13; 14], in place of the currently-used fundamental Gaussian beam, for their thermal noise benefits. With flatter beam intensity distributions, higher order beams can better average over the random mirror surface fluctuations caused by thermal motions, thus leading to lower thermal noise [15; 16]. One other potential benefit of using higher-order HG modes is that one can take advantage of their intrinsic intensity pattern symmetries to use larger and more massive segmented mirrors, fabricated from smaller substrates that are limited by the "boule size" of high-purity silicon [10]. This optics segmentation design idea has also been studied by other people within the GW community in lowering the optics thermal noise, for instance, a "piecewise" coating design has been proposed by Kontos and Loglia [17] to satisfy high-quality coatings and relatively large optics diameters. In this paper, we consider one possible implementation of segmented optics, namely the 4-quadrant circular segmented mirror shown in Fig. 1. It is made by grouping four identical quadrants together, with each quadrant cut from a smaller "boule". The circular segmented mirror has its radius and thus the maximum allowable beam size assuming the same clipping loss requirement enlarged by a factor of \(\sqrt{2}\), and its volume and therefore mass enlarged by a factor of \(2\,\sqrt{2}\), assuming fixed dimensional ratios. This allows a further reduction of thermal noises and quantum radiation pressure noise, as shown in Table 1. For instance, the coating Brownian thermal noise power spectral density will be improved by a factor of \(2\) since it is inversely proportional to the square of the beam size at the mirror [16]. 
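For concreteness, the improvement factors in Table 1 follow directly from the enlarged dimensions. Writing the segmented-mirror quantities with primes, the rescalings \(w\rightarrow\sqrt{2}\,w\) and \(M\rightarrow 2\sqrt{2}\,M\) applied to the quoted power spectral density scaling relations give \[\frac{S^{\prime}_{\rm CBTN}}{S_{\rm CBTN}}=\frac{w^{2}}{(\sqrt{2}\,w)^{2}}=\frac{1}{2},\qquad\frac{S^{\prime}_{\rm SBTN}}{S_{\rm SBTN}}=\frac{w}{\sqrt{2}\,w}=\frac{1}{\sqrt{2}},\qquad\frac{S^{\prime}_{\rm QRPN}}{S_{\rm QRPN}}=\frac{M^{2}}{(2\sqrt{2}\,M)^{2}}=\frac{1}{8},\] i.e., improvement factors of \(2\), \(\sqrt{2}\) and \(8\), respectively.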
The advantage of HG modes here comes from the fact that the intensity nulls can be aligned with the bond lines, thus avoiding interaction of light with the bonds and the associated scattered light penalties and thermal noise. If the beam is displaced on the segmented mirror, parts of the beam intensity will probe the bond lines though, as illustrated in Fig. 2. In this work we define the beam power on the bond lines as the _power loss_, which we want to minimize or keep below some acceptable level. This manuscript takes both analytical and numerical approaches to study the beam displacement tolerances for higher-order HG modes on the segmented mirror. We will start with an analytical approach that linearly expands the displaced beam in the segmented mirror basis, as denoted as \((X,Y)\) in Fig. 2. The result provides valuable insights into the problem but is only valid for small deviations. When the deviation gets large, a numerical approach is \begin{table} \begin{tabular}{c||c|c|c} \hline \hline Noise & CBTN & SBTN & QRPN \\ \hline \hline PSD Scaling Relation & \(1/w^{2}\) & \(1/w\) & \(1/M^{2}\) \\ \hline \hline Segmented Mirror Improvement & \(2\) & \(\sqrt{2}\) & \(8\) \\ \hline \hline \end{tabular} \end{table} Table 1: Coating and substrate Brownian thermal noise (CBTN and SBTN) and quantum radiation pressure noise (QRPN) power spectral density (PSD) improvement [16; 18] for the segmented mirror in Fig. 1. Figure 1: Illustration of an HG\({}_{3,3}\) beam incident on a circular segmented mirror. The size of the segmented mirror is increased by a factor of \(\sqrt{2}\), and the mass by a factor of \(2\,\sqrt{2}\). required. We then adopt a complete numerical scheme that represents the displaced beams as discrete arrays. It gives quantitative arguments regarding what the typical "effective" bond line thickness is. Since extremely small quantities such as the power loss on the bond lines are involved, the grid convergence and the accuracy of our numerical scheme are discussed. We will look at how well the beam has to be centered and well aligned, in order to have at most 1 ppm of its power probing the bond lines, and how the beam displacement tolerances vary as we may have uncertainty in the "effective" bond line thickness due to the uncertainty in the beam propagation direction. The numerical results show good agreement with the analytical results. We report our conclusions and discussions at the end. For simplicity but without loss of generality, we consider a HG\({}_{\text{n,n}}\) mode at its waist, where n is odd in order to be compatible with segmented mirrors, see Fig. 1. We assume no clipping on the mirror for the analytical calculations and focus on the power loss on the bond lines in the presence of beam displacement. As illustrated in Fig. 2, a generic beam displacement can be decomposed into a combination of lateral translations along \(X\) and \(Y\) of amount \(x_{0}\) and \(y_{0}\), and a counter-clockwise rotation of angle \(\theta\) between the new axes and the original principle axes. Assuming reasonably small beam displacement \((x_{0}/w_{0},y_{0}/w_{0},\theta\ll 1)\) as seen in real-life experiments, we expand the displaced beam up to the linear order in \(x_{0}\), \(y_{0}\), and \(\theta\). We will see under such expansion, the lateral translations and the rotation do not cross-couple with each other and they contribute to the power loss on the bond lines independently. 
We first expand the rotated beam in the shifted basis centered at \((x_{0},y_{0})\), denoted as \((X^{\prime},Y^{\prime})\) in Fig. 2, and then expand each shifted beam component in the original basis \((X,Y)\). An HG\({}_{\text{n,n}}\) mode rotated by \(\theta\) in \((X^{\prime},\,Y^{\prime})\) basis looks like \[\begin{split}\text{HG}_{\text{n,n}}(x^{\prime},y^{\prime})& \stackrel{{\theta}}{{\Longrightarrow}}\text{HG}_{\text{n,n}}( \cos\theta x^{\prime}+\sin\theta y^{\prime},-\sin\theta x^{\prime}+\cos\theta y ^{\prime})\\ &\approx\text{HG}_{\text{n,n}}(x^{\prime}+\theta y^{\prime},- \theta x^{\prime}+y^{\prime})\end{split} \tag{1}\] where the small-angle approximation is used, and we denote the coordinates in the shifted \((X^{\prime},Y^{\prime})\) basis as \(x^{\prime}\) and \(y^{\prime}\), scaled by the beam size. Since HG modes are separable in the x and y axes, meaning \(\text{HG}_{\text{nm}}(x^{\prime},y^{\prime})=\text{U}_{\text{n}}(x^{\prime}) \cdot\text{U}_{\text{m}}(y^{\prime})\), we can treat them independently. Dealing with the x component first: \[\begin{split}\text{U}_{\text{n}}(x^{\prime})&=\frac {1}{\sqrt{2^{n}n!}}H_{n}(\sqrt{2}x^{\prime})e^{-x^{\prime 2}}\\ &\stackrel{{\theta}}{{\Longrightarrow}}\frac{1}{\sqrt{ 2^{n}n!}}H_{n}(\sqrt{2}(x^{\prime}+\theta y^{\prime}))e^{-(x^{\prime}+\theta y ^{\prime})^{2}}\\ &\approx\frac{1}{\sqrt{2^{n}n!}}\left(H_{n}(\sqrt{2}x^{\prime})+ 2\sqrt{2}\theta y^{\prime}nH_{n-1}(\sqrt{2}x^{\prime})\right)e^{-x^{\prime 2 }}\\ &\times(1-2\theta x^{\prime}y^{\prime})\end{split} \tag{2}\] where we used the property \(dH_{n}(x)/dx=2nH_{n-1}(x)\) for Hermite polynomials [12]. We have a similar result for the y component. In fact from Eq. 1 we can see that one can simply replace x with y and \(\theta\) with \(-\theta\) in Eq. 2 to get the corresponding y component \[\begin{split}\text{U}_{\text{n}}(y^{\prime})& \stackrel{{\theta}}{{\Longrightarrow}}\frac{1}{\sqrt{2^{n}n!}} \left(H_{n}(\sqrt{2}y^{\prime})-2\sqrt{2}\theta x^{\prime}nH_{n-1}(\sqrt{2}y^{ \prime})\right)e^{-y^{\prime 2}}\\ &\times(1+2\theta x^{\prime}y^{\prime})\end{split} \tag{3}\] Combine Eqs. 2 with 3 we get the full decomposition of the rotated HG\({}_{\text{n,n}}\) mode in \((X^{\prime},Y^{\prime})\) basis as \[\begin{split}\text{HG}_{\text{n,n}}(x^{\prime},y^{\prime})& =\text{U}_{\text{n}}(x^{\prime})\text{U}_{\text{n}}(y^{\prime})\\ &\approx\text{HG}_{\text{n,n}}+\theta\sqrt{n(n+1)}\Big{(}\text{ HG}_{\text{n-1,n+1}}-\text{HG}_{\text{n+1,n-1}}\Big{)}\end{split} \tag{4}\] , all functions of \((x^{\prime},y^{\prime})\). We also used Hermite polynomials property \(2\sqrt{2}xH_{n}=H_{n+1}+2nH_{n-1}\). As a result of rotation, the HG\({}_{\text{n,n}}\) mode is scattered into HG\({}_{\text{n-1,n+1}}\) and HG\({}_{\text{n+1,n-1}}\) modes up to the linear order. This means that, if we start with an odd-indexed mode to have a dark stripe along the principle axes, such as the HG\({}_{3,3}\) mode in Fig. 1, the rotation would scatter the original HG\({}_{3,3}\) mode into even modes HG\({}_{2,4}\) and HG\({}_{4,2}\) modes. This causes extra beam intensity to probe the bond lines as they no longer have intensity nulls. We now expand all the laterally shifted modes in Eq. 4 in the original \((X,Y)\) basis, where we have \(x^{\prime}=x-x_{0}\) and \(y^{\prime}=y-y_{0}\). 
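Before carrying out the lateral-shift expansion, the rotation coupling predicted by Eq. 4 can be checked numerically. The short Python sketch below is not part of the paper; the grid extent, resolution and test angle are arbitrary choices. It builds normalized HG modes with unit beam size, rotates an HG\({}_{3,3}\) mode by a small angle, and compares the overlap with HG\({}_{2,4}\) against the predicted magnitude \(\theta\sqrt{n(n+1)}=\theta\sqrt{12}\).

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def u(n, x):
    """Normalized 1-D Hermite-Gauss amplitude at the waist (unit beam size)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return ((2.0 / pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n))
            * hermval(np.sqrt(2.0) * x, c) * np.exp(-x ** 2))

x = np.linspace(-6.0, 6.0, 801)            # grid comfortably larger than the mode
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

theta = 0.01                               # small test rotation [rad]
Xr =  np.cos(theta) * X + np.sin(theta) * Y
Yr = -np.sin(theta) * X + np.cos(theta) * Y
rotated_33 = u(3, Xr) * u(3, Yr)           # HG_{3,3} rotated by theta

overlap_24 = np.sum(u(2, X) * u(4, Y) * rotated_33) * dA
print(abs(overlap_24) / theta, "~", np.sqrt(12.0))   # ~3.46 for n = 3
```

The same overlap computed against HG\({}_{4,2}\) returns the opposite sign, consistent with Eq. 4.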
From our previous work on the higher-order mode scattering due to lateral translations [12], we have for the x component \[\begin{split}\text{U}_{\text{n}}(x)&\stackrel{{ x_{0}}}{{\Longrightarrow}}\text{U}_{\text{n}}(x-x_{0})\\ &=\text{U}_{\text{n}}(x)+x_{0}\left(\sqrt{n+1}\text{U}_{\text{n+1} }(x)-\sqrt{n}\text{U}_{\text{n-1}}(x)\right)\end{split} \tag{5}\] Similarly, for the y component, we have \[\begin{split}\text{U}_{\text{n}}(y)&\stackrel{{ x_{0}}}{{\Longrightarrow}}\text{U}_{\text{n}}(y-y_{0})\\ &=\text{U}_{\text{n}}(y)+y_{0}\left(\sqrt{n+1}\text{U}_{\text{n+ 1}}(y)-\sqrt{n}\text{U}_{\text{n-1}}(y)\right)\end{split} \tag{6}\] Combining Eqs. 5 and 6 we have \[\begin{split}\text{HG}_{\text{n,n}}(x,y)&\stackrel{{ x_{0}}}{{\Longrightarrow}}\text{HG}_{\text{n,n}}+x_{0}\left(\sqrt{n+1}\text{HG}_{\text{n+1,n}}-\sqrt{n} \text{HG}_{\text{n-1,n}}\right)\\ &+y_{0}\left(\sqrt{n+1}\text{HG}_{\text{n,n+1}}-\sqrt{n}\text{HG}_{ \text{n,n-1}}\right)\end{split} \tag{7}\] Figure 2: Illustration of an arbitrary displacement of an HG\({}_{3,3}\) beam on a segmented mirror of the type shown in Fig. 1, characterized by a rotation of angle \(\theta\) and lateral displacements of \(x_{0}\) and \(y_{0}\) along X and Y. The X and Y bond lines are shown as the grey strips along the principle axes. up to the linear order. Applying Eq. 7 to all shifted modes in Eq. 4 we see that, up to the linear order, an arbitrarily displaced HG\({}_{\text{n,n}}\) beam as shown in Fig. 2 can be expanded in the original \((X,Y)\) basis as \[\begin{split}\text{HG}_{\text{n,n}}&\xrightarrow{( \kappa_{0},y_{0}),\theta}{\text{HG}_{\text{n,n}}+x_{0}\left(\sqrt{n+1}\text{HG }_{\text{n+1,n}}-\sqrt{n}\text{HG}_{\text{n-1,n}}\right)}\\ &+y_{0}\left(\sqrt{n+1}\text{HG}_{\text{n,n+1}}-\sqrt{n}\text{ HG}_{\text{n,n-1}}\right)\\ &+\theta\sqrt{n(n+1)}\left(\text{HG}_{\text{n-1,n+1}}-\text{HG}_ {\text{n+1,n-1}}\right)\end{split} \tag{8}\] We can see that up to the linear order, the lateral translations and the rotation contribute to mode scattering and thus the power loss on the bond lines independently. Since the lateral translation along the x and y have a similar contribution to the scattered mode contents due to symmetry, we will focus on the lateral translation along the x direction. With the scattered mode contents in Eq. 8, we can then square the result and integrate over the bond line regions to get the total power loss. We now calculate the power loss on the bond lines numerically by representing the displaced beam into discretized arrays. We consider a probe beam of HG\({}_{3,3}\) mode with a total power of 1 W at its waist both angularly and laterally displaced on the circular segmented mirror. We consider an aLIGO-like test mass with a radius \(R_{m}\) of 0.15 m. The HG\({}_{3,3}\) beam size is scaled to 0.0394 m (0.263 \(\cdot\)\(R_{m}\)) to maintain 1 ppm clipping loss, which is the beam power lost on the edge of the finite-aperture mirror. We are interested in the "effective" bond line thickness considering the beam propagation directional uncertainties. In real-life experiments, the beam may not be exactly parallel to the bond plane. We expect this uncertainty in beam propagation direction \(\theta_{beam}\) to be small, estimated as \[\theta_{beam}=\Theta_{cav} \tag{9}\] where \(\Theta_{cav}=\frac{\lambda}{\pi w_{0}}\) is the far-field beam divergence angle of aLIGO-like arm cavities. 
As the beam propagates through the thickness of the mirror substrate \(t_{sub}\), it encounters the projection of the bond line onto its propagation axis: we refer to the projected width as the "effective" bond line width \(d_{\text{eff}}\)1 Footnote 1: Realistically \(d_{\text{eff}}\) would get reduced by the refraction index of silicon due to Snell’s law. This is not considered here for an order-of-magnitude estimation. \[d_{\text{eff}}=t_{sub}\cdot\theta_{beam}=\frac{t_{sub}\cdot\lambda}{\pi w_{0} }\approx 6\mu m \tag{10}\] where \(\text{t}_{\text{sub}}\approx 20\,\text{cm}\) and \(w_{0}\approx 1.2\,\text{cm}\) for aLIGO-like arm cavities. In our numerical approach, the displaced beams are discretized into arrays to calculate the power on 2D regions, the grid size has to be chosen carefully to guarantee numerical convergence and accuracy. To determine the optimal grid size, the clipping loss is first calculated as the number of grid points increases, as shown in the top panel of Fig. 3. The clipping loss quickly converges to the desired 1 ppm after around 100 cells for the entire mirror. With the number of cells along the mirror diameter set to 100 cells, the power loss on the bond lines with a thickness of 6 \(\mu m\) is calculated as the number of grid points along the bond line is increased, as shown in the bottom panel of Fig. 3. The power loss also converges quickly after around 100 cells. We thus use 100 cells for both the mirror diameter and the bond line thickness in later calculations. We can also see that the power loss on the bond lines has small values of order \(10^{-6}\) ppm with \(6\,\mu m\) thick bond lines, because of the intensity nulls of HG\({}_{3,3}\), while the power loss for the HG\({}_{0,0}\) is 170 ppm. The probe beam HG\({}_{3,3}\) mode is displaced both laterally along the x direction, in the unit of the beam size, and angularly in degrees. The total power loss on the X and Y Figure 4: An image showing the power loss on the X and Y bond lines on the top and bottom left, and the total power loss on the right, due to lateral offset \(x_{0}\) in the unit of the waist size \(\text{w}_{\text{HG}_{3,3}}\) and rotation \(\theta\) in degrees. Figure 3: An image showing the grid convergence of our numerical scheme. Top: HG\({}_{3,3}\) mode clipping loss on the entire mirror; Bottom: total power loss on the bond lines. In both cases, the power converges quickly as the number of grids increases, after around 100 cells. bond lines is calculated by discretizing the displaced beam on the bond lines into matrices. The result is shown in Fig. 4. We see as the beam is displaced with respect to the segmented mirror, different parts of the beam profile illuminate on the bond lines, which causes extra power losses. The power loss can go up to as high as 120 ppm for \(\delta\mu m\) bond lines when the beam is rotated by 45 degrees. This corresponds to when the brightest spot of the HG\({}_{3,3}\) beam at the corners hits the bond lines. In current GW detectors, however, the typical beam displacement would be much smaller. The total power loss due to small beam displacement is shown on the left in Fig. 5. Three contours corresponding to power loss of 0.5 ppm, 1 ppm, and 1.5 ppm are included. We see that even if the beam is displaced by 1 degree angularly, or by 4% of the beam size laterally, the total power loss on the 6 \(\mu m\) bond lines is still under 1 ppm. 
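The kind of calculation described above can be reproduced with a few lines of code. The Python sketch below is not the authors' implementation; it assumes the parameter values quoted in the text (\(R_{m}=0.15\,\mathrm{m}\), \(w=0.263\,R_{m}\), \(d=6\,\mu\mathrm{m}\), roughly 100 cells across the bond line), takes \(R_{m}\) as the clipping aperture, and evaluates the clipping loss and the bond-line power for a displaced HG\({}_{3,3}\) beam at its waist.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

R_m = 0.15           # mirror radius [m]
w   = 0.263 * R_m    # HG33 beam size quoted for 1 ppm clipping loss [m]
d   = 6e-6           # "effective" bond line thickness from Eq. 10 [m]
n   = 3              # HG_{n,n} mode index

def u1d(x):
    """Normalized 1-D Hermite-Gauss amplitude of order n at the waist."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = (2.0 / pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n) * w)
    return norm * hermval(np.sqrt(2.0) * x / w, c) * np.exp(-(x / w) ** 2)

def intensity(x, y, x0=0.0, theta=0.0):
    """HG_{n,n} intensity for a beam shifted by x0 along X and rotated by theta."""
    xr =  np.cos(theta) * (x - x0) + np.sin(theta) * y
    yr = -np.sin(theta) * (x - x0) + np.cos(theta) * y
    return (u1d(xr) * u1d(yr)) ** 2

# Clipping loss: power of the (1 W) beam falling outside the mirror aperture
xg = np.linspace(-2.0 * R_m, 2.0 * R_m, 400)
X, Y = np.meshgrid(xg, xg)
dA = (xg[1] - xg[0]) ** 2
clip = np.sum(intensity(X, Y)[X**2 + Y**2 > R_m**2]) * dA
print(f"clipping loss, centered beam: {clip * 1e6:.2f} ppm")

# Bond-line power: a fine grid (100 cells) across each d-wide strip, coarse along it.
# The tiny d x d overlap of the two strips at the center is neglected.
xs = np.linspace(-R_m, R_m, 400)
ys = np.linspace(-d / 2.0, d / 2.0, 100)
XS, YS = np.meshgrid(xs, ys)
dAs = (xs[1] - xs[0]) * (ys[1] - ys[0])
inside = XS**2 + YS**2 <= R_m**2

def bond_power(x0=0.0, theta=0.0):
    p_x = np.sum(intensity(XS, YS, x0, theta)[inside]) * dAs   # X bond line
    p_y = np.sum(intensity(YS, XS, x0, theta)[inside]) * dAs   # Y bond line
    return p_x + p_y

for x0, th, label in [(0.0, 0.0, "centered"),
                      (0.04 * w, 0.0, "offset by 4% of w"),
                      (0.0, np.radians(1.0), "rotated by 1 degree")]:
    print(f"{label:20s}: {bond_power(x0, th) * 1e6:.3g} ppm on the bond lines")
```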
In comparison to the analytical model using the linear approximation, the total power loss is also calculated analytically using Eq. 8. The percentage difference is shown on the right in Fig. 5. We see that the linear approximation gives quite accurate results. For instance, in the displacement parameter region \((x_{0},\theta)\) that gives 1 ppm power loss, the difference is roughly 2%. The residual becomes large as we move toward larger displacements, as the linear approximation starts to fail. The beam displacement parameter contour tells us how much a beam can be displaced while keeping the total power loss within a small value. We quantify the beam displacement tolerance as the area of the contour. As explained earlier in Eq. 10, the "effective" bond line thickness is uncertain, due to the uncertainty in the beam propagation direction. With the bond line thickness ranging from 1 \(\mu m\) to 10 \(\mu m\), the beam displacement contour area is calculated and shown in Fig. 6. The contour area is normalized such that the value is 1 for the 6 \(\mu m\) bond line case we looked at. The best-fit function suggests a simple inverse relationship between the contour area and the bond line thickness, as shown in orange. For instance, if the bond line width is 3 \(\mu m\) instead, we could tolerate twice as much beam displacement compared to the 6 \(\mu m\) bond line case. We can also explain the \(1/d\) relation for the contour area using our analytical formalism. As we square and integrate the mode contents in Eq. 8 to get the power loss, the terms that are linear in \(x_{0}\) or \(\theta\) cancel out due to the property of odd functions. For the quadratic terms, the coefficients in general depend on \(d\). In our case, since \(d\) is \(\mathcal{O}(1\mu m)\) while \(x_{0}\) is \(\mathcal{O}(4\%\cdot 0.0394m)\sim 1mm\), we have \(\mathrm{d}\ll x_{0}\). As a result, only the linear term in \(d\) would contribute to the coefficients of the quadratic terms. Namely, we would have \(C_{1}d\cdot x_{0}^{2}+C_{2}d\cdot\theta^{2}\sim 1\), where \(C_{1,2}\) are constants. This is a function for an ellipse, as shown in Fig. 5. The area equals to \(\frac{\pi}{C_{1}C_{2}d}\), which is the inverse relation illustrated in Fig. 6. This paper has demonstrated an assessment technique for evaluating the beam displacement tolerance performance benefit for a given optical mode and a segmented mirror geometry. It has shown that odd-indexed HG modes remain compatible with the segmented mirror idea, even when there is a substantial amount of beam displacement present. For a nominal bond line effective thickness of 6 \(\mu m\), the total power loss for HG\({}_{3,3}\) mode is within 1 ppm even with 1-degree rotation and 4% its beam size lateral offset. There still remains a huge benefit compared to the fundamental HG\({}_{0,0}\) mode, which has a total power loss of 170 ppm even when perfectly centered. These results have important implications for future ground-based gravitational wave detector designs relying on the use of high-purity silicon substrates for the test mass material [4, 5]: they show that odd-indexed higher-order HG modes allow the use of segmented mirrors with overall diameter larger than the maximum available silicon boule diameter, by keeping the optical power on the bonds orders of magnitude smaller than for the HG\({}_{0,0}\) mode. ###### Acknowledgements. This work was supported by National Science Foundation grants PHY-1806461 and PHY-2012021. Figure 5: Left: the same total power loss as Fig. 
4 but due to smaller beam displacements. Right: the percentage difference between the numerical result and the analytical result in Eq. 8. Three contours corresponding to total power loss of 0.5 ppm, 1 ppm, and 1.5 ppm are included on the left. Figure 6: An image showing the 1 ppm contour area (normalized so that the value is 1 for the 6 \(\mu m\) bond line case) in Fig. 5 as the bond line thickness is increased from 1 \(\mu m\) to 10 \(\mu m\). The best-fit function suggests a simple inverse relationship.
2310.18771
Robot Control based on Motor Primitives -- A Comparison of Two Approaches
Motor primitives are fundamental building blocks of a controller which enable dynamic robot behavior with minimal high-level intervention. By treating motor primitives as basic "modules," different modules can be sequenced or superimposed to generate a rich repertoire of motor behavior. In robotics, two distinct approaches have been proposed: Dynamic Movement Primitives (DMPs) and Elementary Dynamic Actions (EDAs). While both approaches instantiate similar ideas, significant differences also exist. This paper attempts to clarify the distinction and provide a unifying view by delineating the similarities and differences between DMPs and EDAs. We provide eight robot control examples, including sequencing or superimposing movements, managing kinematic redundancy and singularity, obstacle avoidance, and managing physical interaction. We show that the two approaches clearly diverge in their implementation. We also discuss how DMPs and EDAs might be combined to get the best of both approaches. With this detailed comparison, we enable researchers to make informed decisions to select the most suitable approach for specific robot tasks and applications.
Moses C. Nah, Johannes Lachner, Neville Hogan
2023-10-28T18:02:33Z
http://arxiv.org/abs/2310.18771v1
# Robot Control based on Motor Primitives -- A Comparison ###### Abstract Motor primitives are fundamental building blocks of a controller which enable dynamic robot behavior with minimal high-level intervention. By treating motor primitives as basic "modules," different modules can be sequenced or superimposed to generate a rich repertoire of motor behavior. In robotics, two distinct approaches have been proposed: Dynamic Movement Primitives (DMPs) and Elementary Dynamic Actions (EDAs). While both approaches instantiate similar ideas, significant differences also exist. This paper attempts to clarify the distinction and provide a unifying view by delineating the similarities and differences between DMPs and EDAs. We provide eight robot control examples, including sequencing or superimposing movements, managing kinematic redundancy and singularity, obstacle avoidance, and managing physical interaction. We show that the two approaches clearly diverge in their implementation. We also discuss how DMPs and EDAs might be combined to get the best of both approaches. With this detailed comparison, we enable researchers to make informed decisions to select the most suitable approach for specific robot tasks and applications. Motor Primitives, Dynamic Movement Primitives (DMPs), Elementary Dynamic Actions (EDAs). + Footnote †: This manuscript has been submitted to the International Journal of Robotics Research for review. + Footnote †: This manuscript has been submitted to the International Journal of Robotics Research for review. ## 1 Introduction One of the major challenges of robotics is to generate complex motor behavior that can match that of humans [1]. Numerous approaches have been developed to address this problem, including applied nonlinear control [12, 13], optimization-based approaches [22, 14], and machine learning algorithms [15, 16, 1]. Among these methods, several advances have been made based on "motor primitives" [1, 17, 18, 19]. The fundamental concept originates from human motor control research, where complex motor behavior of biological systems appears to be generated by a combination of fundamental building blocks known as motor primitives [12, 13, 14, 15, 16, 17]. The concept of control based on motor primitives dates back at least a century [2], with a number of subsequent experiments providing support for its existence in biological systems. Sherrington was one of the first to suggest "reflex" as a fundamental element of complex motor behavior [2, 15]. Sherrington proposed that reflexes can be treated as basic units of motor behavior that when chained together produce more complex movements [18]. First formalized by Bernstein [17, 16], "synergies" have also been suggested as a motor primitive to account for the simultaneous motion of multiple joints or activation of multiple muscles [12, 13, 14, 15]. Discrete and rhythmic movements have also been suggested as two distinct classes of primitives [15, 16, 17, 18, 19, 20]. Recently, there is growing evidence that "stable postures" may be considered to be a distinct class of motor primitives [16, 17]. Motor primitives have also been applied to robotics. Two distinct approaches exist: Dynamic Movement Primitives (DMPs) [17, 18] and Elementary Dynamic Actions (EDAs) [1, 16, 19]. The key idea of these approaches is to formulate motor primitives as "attractors" [19, 17]. An attractor is a prominent feature of nonlinear dynamical systems, defined as a set of states towards which the system tends to evolve. 
Its type ranges from relatively simple ones such as (stable) "point attractors" and (stable) "limit cycles," to "strange attractors" (Strogatz, 2018) such as the "Lorenz attractor" (Lorenz, 1963)," Rossler attractor" (Rossler, 1976), and others (Tam et al., 2008; Sprott, 2014). One of the key benefits of using motor primitives is that it enables highly dynamic behavior of the robot with minimal high-level intervention (Hogan and Stermad, 2012). As a result, the complexity of the control problem can be significantly reduced. For instance, by formulating discrete (respectively rhythmic) movement as a stable point attractor (respectively limit cycle), the problem of generating the movement reduces to learning the parameters of the corresponding attractor. Another important consequence is that it provides a modular structure of the controller. By treating motor primitives as basic "modules," learning motor skills happens at the level of modules which provides adaptability and flexibility for robot control. Since DMPs and EDAs stem from the theory of motor primitives, both approaches share the same philosophy. Nevertheless, significant differences exist such that their implementations diverge. In the opinion of the authors, this has not yet been sufficiently emphasized. An in-depth review that elucidates the similarities and differences between the two approaches may be beneficial to the robotics community. In this paper, we provide a comprehensive review of motor primitives in robotics, focusing specifically on the two distinct approaches -- DMPs and EDAs. We delineate the similarities and differences of both approaches by presenting eight extensively used robotic control examples (Section 3).2 We show that: Footnote 2: The code for the simulation examples is available in [https://github.com/mossesnh-shared/DMP-comparison](https://github.com/mossesnh-shared/DMP-comparison) * Both approaches use motor primitives as basic building blocks to parameterize the controller (Section 2). DMPs consist of a canonical system, nonlinear forcing terms and transformation systems. EDAs consist of submovements, oscillations and mechanical impedances. * For torque-controlled robots, DMPs require an inverse dynamic model of the robot, whereas EDAs do not impose this requirement (Section 3.1). * With an inverse dynamics model, DMPs can achieve perfect tracking, both in task-space and joint-space (Section 3.2, 3.3). Imitation Learning enables DMPs to learn and track trajectories of arbitrary complexity (Section 2.1.4). Online trajectory modulation of DMPs enables achieving additional control objectives such as obstacle avoidance (Section 3.5), thereby providing advantages over spline methods. For tracking control with EDAs, an additional method for calculating an appropriate virtual trajectory and mechanical impedance to which it is connected (Section 2.2.4) is required (Section 2.2, 3.2, 3.3). * To control the end-effector of the robot, DMPs require additional control methods to manage kinematic singularity and kinematic redundancy (Section 3.3, 3.9). In contrast, for EDAs, stability near (and even at) kinematic singularity can be ensured (Section 3.3). Kinematic redundancy can be managed without solving the inverse kinematics (Section 3.9). * Both approaches provide a modular framework for robot control. However, the extent of modularity and its practical implications differ between the two approaches. A clear distinction appears when combining multiple movements. 
For DMPs, discrete and rhythmic movements are represented by different DMPs (Section 2.1.1, 2.1.2). Hence, the two different DMPs cannot be directly superimposed to generate a combination of discrete and rhythmic movements (Section 3.7). Multiple discrete movements are generated by modifying the goal position of the previous movement (Section 3.8). While the weights of the nonlinear forcing terms learned from Imitation Learning (Section 2.1.4) can be reused, the weights of different DMPs cannot be simply combined. For EDAs, modularity exists both at the level of kinematics and mechanical impedances (Section 2.2.4). For the former, multiple movements can be directly superimposed at the kinematic level, which provides a greater degree of simplicity to generate a rich repertoire of movements, e.g., combining discrete and rhythmic movements (Section 3.7), or sequencing discrete movements (Section 3.8). For the latter, the learned mechanical impedances can be reused and combined by simple superposition. Superposition of mechanical impedances enables a "divide-and-conquer" strategy, where complex tasks can be broken down into simpler sub-tasks. This modular property of mechanical impedances simplifies multiple control tasks, e.g., obstacle avoidance (Section 3.5) and managing kinematic redundancy (Section 3.9). * For DMPs, a low-gain PD controller is superimposed to manage uncertainty and physical contact (Section 3.1). EDAs include mechanical impedance as a separate primitive to manage physical interaction (Section 2.2.3). The dynamics of physical interaction can be controlled by modulating mechanical impedance (Section 3.4). Lastly, we show how DMPs and EDAs might be integrated, thereby leveraging the best of both approaches. ## 2 Theory In this Section, we provide an overview of DMPs and EDAs. For simplicity, we consider a system with a single DOF. A generalization to systems with multiple DOFs is presented in Section 3. ### Dynamic Movement Primitives DMPs, introduced by Schaal (1999, 2006); Ijspeert et al. (2013), consist of three classes of primitives: a canonical system (Section 2.1.1), a nonlinear forcing term (Section 2.1.2), and a transformation system (Section 2.1.3). To generate discrete and rhythmic movements, two distinct definitions exist for canonical system and nonlinear forcing term. For clarification, labels "Discrete" and "Rhythmic" are added next to the equation. #### 2.1.1 Canonical System A canonical system \(s:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is a scalar variable governed by a first-order differential equation: \[\tau\dot{s}(t)=\begin{cases}-\alpha_{s}s(t)&\text{Discrete}\\ 1&\text{Rhythmic}\end{cases} \tag{1}\] In these equations, \(\alpha_{s}\) is a positive constant, \(t\in\mathbb{R}_{\geq 0}\) is time and \(\tau>0\) is a time constant. For discrete movements, the canonical system is exponentially convergent to \(0\) with a closed-form solution \(s(t)=\exp(-\alpha_{s}t/\tau)s(0)\). For rhythmic movements, the canonical system is a linear function of time \(s(t)=t/\tau\), but the modulo-\(2\pi\) operation is applied to ensure \(s\in[0,2\pi)\).
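As a minimal illustration (not from the paper's repository; \(\alpha_{s}\), \(\tau\) and the step size are arbitrary choices), the Python sketch below integrates both canonical systems of Equation (1) and checks the discrete case against its closed-form solution.

```python
import numpy as np

alpha_s, tau, dt = 4.0, 1.0, 1e-3        # illustrative values
t = np.arange(0.0, 2.0 * tau, dt)

# Discrete canonical system: tau * ds/dt = -alpha_s * s, with s(0) = 1
s_disc = np.empty_like(t)
s = 1.0
for k in range(len(t)):
    s_disc[k] = s
    s += dt * (-alpha_s * s / tau)       # explicit Euler step

# matches the closed-form solution s(t) = exp(-alpha_s * t / tau)
assert np.allclose(s_disc, np.exp(-alpha_s * t / tau), atol=1e-2)

# Rhythmic canonical system: tau * ds/dt = 1, wrapped onto [0, 2*pi)
s_rhyt = np.mod(t / tau, 2.0 * np.pi)
```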
#### 2.1.2 Nonlinear Forcing Term A nonlinear forcing term \(f:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\), which takes the canonical system \(s(t)\) as the function argument, is defined by: \[f(s(t))=\begin{cases}\frac{\sum_{i=1}^{N}v_{i}\psi_{i}(s(t))}{\sum_{i=1}^{N} \psi_{i}(s(t))}s(t)(g-y_{0})&\text{Discrete}\\ \frac{\sum_{i=1}^{N}v_{i}\psi_{i}(s(t))}{\sum_{i=1}^{N}\psi_{i}(s(t))}\,r&\text {Rhythmic}\end{cases} \tag{2}\] In Equation (2), \(N\) is the number of basis functions; \(w_{i}\) is the weight of the \(i\)-th basis function; \(y_{0}\) and \(g\) are the initial and final positions of the discrete movement, respectively; \(r\) is the amplitude of the nonlinear forcing term for rhythmic movements and \(\phi_{i}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) is the \(i\)-th basis function of the nonlinear forcing term: \[\phi_{i}(s(t))=\begin{cases}\exp\big{\{}-h_{i}(s(t)-c_{i})^{2}\big{\}}&\text{ Discrete}\\ \exp\big{\{}h_{i}(\cos(s(t)-c_{i})-1)\big{\}}&\text{Rhythmic}\end{cases} \tag{3}\] In Equation (3), the basis functions for discrete and rhythmic movements are Gaussian functions and von Mises functions, respectively (Ijspeert et al., 2013); \(c_{i}\) is the center of the \(i\)-th basis function; \(h_{i}\) is a positive constant that determines the width of the \(i\)-th basis function. #### 2.1.3 Transformation System The nonlinear forcing term \(f\), with canonical system \(s\) as its function variable, is used as an input to the transformation system to generate trajectories with arbitrary complexity. In detail, a transformation system is a second-order differential equation, which is equivalent to a linear mass-spring-damper model with a nonlinear force input \(f(s(t))\): \[\tau\dot{y}(t) =z(t) \tag{4}\] \[\tau\dot{z}(t) =\alpha_{z}\{\beta_{z}(g-y(t))-z(t)\}+f(s(t))\] In these equations, \(\alpha_{z}\) and \(\beta_{z}\) are positive constants; \(y(t)\) and \(z(t)\) are state variables which correspond to position and (time-scaled) velocity of the transformation system, respectively; \(\tau\) is the time constant used for the canonical system (Equation (1)); for discrete movement, \(g\) is identical to Equation (2); for rhythmic movement, \(g\) is chosen to be the average of the rhythmic, repetitive movement (Ijspeert et al., 2013). While any positive values of \(\alpha_{z}\) and \(\beta_{z}\) can be used, usually, the values of \(\alpha_{z}\) and \(\beta_{z}\) are chosen such that if \(f(s(t))\) is zero, the transformation system is critically damped (i.e., has repeated eigenvalues) for \(\tau=1\) (i.e., \(\beta_{z}=\alpha_{z}/4\)) (Ijspeert et al., 2013). Using the canonical system \(s\) avoids the explicit time-dependency of the transformation system and results in an autonomous system (Ijspeert et al., 2013). Moreover, while both discrete and rhythmic movements are generated with the same transformation system, different choices of canonical systems \(s\) and nonlinear forcing terms \(f\) are made to produce those movements. Hence, to produce a combination of discrete and rhythmic movements, both discrete and rhythmic DMPs must be constructed (Section 3.7). #### 2.1.4 Imitation Learning One prominent application of DMPs is "Imitation Learning," also called "Learning from Demonstration" (Schaal, 1999; Ijspeert et al., 2001, 2002, 2013). Let \(y_{des}(t)\) be the desired trajectory that the robot aims to learn (or imitate). 
Then, \(y(t)=y_{des}(t)\) can be achieved by using the following nonlinear forcing term (Equation (4)): \[f_{target}(s(t))=\tau^{2}\ddot{y}_{des}(t)+\alpha_{z}\tau\dot{y}_{des}(t)+ \alpha_{z}\beta_{z}(y_{des}(t)-g)\] If the analytic solution of \(y_{des}(t)\) and its derivatives are not known, Imitation Learning generates the whole continuous trajectory of \(y_{des}(t)\) from \(P\) sample points, \((y_{des}(t_{i}),\)\(\dot{y}_{des}(t_{i}),\)\(\dot{y}_{des}(t_{i}))\) for \(i\in[1,2,\cdots,P]\), and the \(P\) sample points are used to find the best-fit weights of \(f(s(t))\) (Equation (3)) that matches \(f_{target}(s(t))\). The best-fit weight \(w_{i}^{*}\) of the \(i\)-th basis function is calculated using Locally Weighted Regression (Atkeson et al., 1997; Schaal and Atkeson, 1998; Ijspeert et al., 2013): \[w_{i}^{*}=\frac{\mathbf{a}^{\mathbf{T}}\Phi_{i}\mathbf{f}_{target}}{\mathbf{a}^ {\mathbf{T}}\Phi_{i}\mathbf{a}} \tag{5}\] where: \[\mathbf{a}=\begin{bmatrix}a_{1}\\ a_{2}\\ \vdots\\ a_{P}\end{bmatrix}\quad\mathbf{f}_{target}=\begin{bmatrix}f_{target,1}\\ f_{target,2}\\ \vdots\\ f_{target,P}\end{bmatrix}\] \[\Phi_{i}=\begin{bmatrix}\phi_{i}(s(t_{1}))&0\\ \phi_{i}(s(t_{2}))&\\ &\ddots\\ 0&\phi_{i}(s(t_{P}))\end{bmatrix}\] The elements of \(\mathbf{a}\) and \(\mathbf{f}_{target}\) are: \[a_{i}\equiv a(t_{i})=\begin{cases}s(t_{i})(g-y_{0})&\text{Discrete}\\ r&\text{Rhythmic}\end{cases}\] \[f_{target,i}=\tau^{2}\ddot{y}_{des}(t_{i})+\alpha_{z}\tau\dot{y}_{ des}(t_{i})+\alpha_{z}\beta_{z}(y_{des}(t_{i})-g)\] Along with Locally Weighted Regression, one can also use linear least square regression to find the best fit weights (Ude et al., 2014; Saveriano et al., 2019). For Imitation Learning of discrete movements, the goal \(g\) and the initial position \(y_{0}\) are set as \(g=y_{des}(t_{P})\) and \(y_{0}=y_{des}(t_{1})\), respectively; \(\tau\) is chosen to be the duration of the discrete movement. For Imitation Learning of rhythmic movements, goal \(g\) is the midpoint of the minimum and maximum values of \(y_{des}(t_{1}),y_{des}(t_{2}),\cdots,y_{des}(t_{P});\ \tau\) is chosen to be the period of the demonstrated movement divided by \(2\pi\), hence the period of the rhythmic movement must be derived first (Ijspeert et al., 2013). With these best-fit weights \(w_{i}^{*}\), the best-fit nonlinear forcing term \(f^{*}(s(t))\) is derived and used as the input to the transformation system to generate \(y_{des}(t)\), \(\dot{y}_{des}(t)\), \(\dot{y}_{des}(t)\). One might wonder why spline methods are not used to derive \(y_{des}(t)\), \(\tilde{y}_{des}(t)\), \(\tilde{y}_{des}(t)\) from the \(P\) sample points (Wada and Kawato, 2004). Splines are effective for smooth trajectory generation and are widely used in industrial robotics. However, spline methods do not allow online trajectory modulation of DMPs (Ijspeert et al., 2013) which is crucial to achieve multiple control tasks, e.g., obstacle avoidance (Section 3.5) and sequencing discrete movements (Section 3.8). Compared to spline methods, Imitation Learning provides modularity for robot control. Once the best-fit weights are learned, these weights can be saved as a learned module, which can be reused to generate the learned trajectory (Section 3.3, 3.5). Finally, DMPs provide favorable invariance properties. 
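The one-shot fit of Equation (5) and the subsequent rollout of the transformation system can be illustrated with a short single-DOF sketch. The Python code below is not the paper's implementation: the minimum-jerk demonstration, the gains \(\alpha_{s}\), \(\alpha_{z}\), \(\beta_{z}\), the number of basis functions and the center/width heuristics are all assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative constants (not taken from the paper); tau = movement duration
alpha_s, alpha_z = 4.0, 25.0
beta_z = alpha_z / 4.0                      # critically damped for tau = 1
tau, N, dt = 1.0, 30, 1e-3
t = np.arange(0.0, tau, dt)

# Demonstrated trajectory: an assumed minimum-jerk reach from y0 to g
y0, g = 0.0, 1.0
s_lin = t / tau
y_des = y0 + (g - y0) * (10 * s_lin**3 - 15 * s_lin**4 + 6 * s_lin**5)
dy_des = np.gradient(y_des, dt)
ddy_des = np.gradient(dy_des, dt)

# Discrete canonical system and Gaussian basis functions (Eqs. (1) and (3))
s = np.exp(-alpha_s * t / tau)
c = np.exp(-alpha_s * np.linspace(0.0, 1.0, N))   # centers, equally spaced in time
h = 1.0 / np.gradient(c) ** 2                     # widths set by the center spacing
PSI = np.exp(-h[:, None] * (s[None, :] - c[:, None]) ** 2)   # shape (N, len(t))

# One-shot Locally Weighted Regression, Eq. (5)
f_target = tau**2 * ddy_des + alpha_z * tau * dy_des + alpha_z * beta_z * (y_des - g)
a = s * (g - y0)
w = (PSI @ (a * f_target)) / (PSI @ (a * a) + 1e-12)

# Roll out the transformation system, Eq. (4), with the learned forcing term
def forcing(s_k):
    psi_k = np.exp(-h * (s_k - c) ** 2)
    return (psi_k @ w) / (psi_k.sum() + 1e-12) * s_k * (g - y0)

y, z, y_dmp = y0, 0.0, np.empty_like(t)
for k, tk in enumerate(t):
    s_k = np.exp(-alpha_s * tk / tau)
    y += dt * z / tau
    z += dt * (alpha_z * (beta_z * (g - y) - z) + forcing(s_k)) / tau
    y_dmp[k] = y

print("max |y_dmp - y_des| =", np.abs(y_dmp - y_des).max())
```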
Once the best-fit weights of the demonstrated trajectory are learned, the learned movement can be scaled in time (i.e., temporal invariance) and space (i.e., spatial invariance) without changing the qualitative property of the movement (Ijspeert et al., 2013). If the \(P\) sample points of a demonstrated trajectory are provided, the best-fit weights which produce that demonstrated trajectory can be calculated with matrix algebra, referred to as "one-shot learning" (Equation (5)). This process is called "batch regression" (Ijspeert et al., 2013), since all \(P\) data points should be collected to calculate the \(N\) weights. The batch regression method assumes a predefined number of basis functions \(N\) and parameters \(c_{i}\) and \(h_{i}\). While these parameters can be defined manually, the Locally Weighted Projection Regression method (Vijayakumar and Schaal, 2000) can identify the necessary number of basis functions \(N\), the center locations \(c_{i}\), and width parameters \(h_{i}\) using nonparametric regression techniques. Note that Imitation Learning for joint trajectories is easily scalable to high DOF systems (Atkeson et al., 2000; Ijspeert et al., 2002). For an \(n\)-DOF system, one can construct \(n\) transformation systems, each representing a joint trajectory. The \(n\) transformation systems are synchronized with a single canonical system (Ijspeert et al., 2013). With the Imitation Learning method, learning the best-fit weights is computationally efficient as it involves simple matrix algebra. ### Elementary Dynamic Actions EDAs, introduced by Hogan and Sternad (2012, 2013) consist of at least three distinct classes of primitives: submovements (Section 2.2.1) and oscillations (Section 2.2.2) as kinematic primitives, and mechanical impedances as interaction primitives (Section 2.2.3) (Figure 1). #### 2.2.1 Submovements A submovement \(x_{0}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) is a smooth trajectory in which its time derivative is a unimodal function, i.e., has a single peak value: \[\dot{x}_{0}(t)=v\;\hat{\sigma}(t)\] In this equation, \(\hat{\sigma}:\mathbb{R}_{\geq 0}\rightarrow[0,1]\) denotes a smooth unimodal basis function with peak value 1; \(v\in\mathbb{R}\) is the velocity amplitude of the submovement. Submovements model discrete motions, and therefore \(\hat{\sigma}(t)\) has a finite support, i.e., there exists \(T>0\) such that \(\hat{\sigma}(t)=0\) for \(t\geq T\). (Hogan and Sternad, 2007). The shape of \(\hat{\sigma}(t)\) can either be symmetric or not. #### 2.2.2 Oscillations An oscillation \(x_{0}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) is a smooth non-zero trajectory which is a periodic function: \[\forall t>0:\;\;\exists T>0:\;\;x_{0}(t)=x_{0}(t+T)\] Note that this definition of oscillation can be too strict and the definition of oscillation can be expanded to almost-periodic functions (Hogan and Sternad, 2012). For our purposes, it is sufficient to think of an oscillation as a periodic function. Compared to submovements, oscillations model rhythmic and repetitive motions. 
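A minimal sketch of these two kinematic primitives is given below (Python; the bell-shaped, minimum-jerk-like basis function \(\hat{\sigma}\), the amplitudes, timing and frequency are assumptions made only for illustration). Summing the integrated submovement and the oscillation yields a virtual trajectory \(x_{0}(t)\) of the kind used in Section 2.2.4.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 3.0, dt)

def submovement_vel(tt, v, t0, T):
    """Discrete primitive: x0_dot = v * sigma_hat(tt), where sigma_hat is a
    unimodal bell with peak value 1 and finite support [t0, t0 + T]."""
    s = np.clip((tt - t0) / T, 0.0, 1.0)
    return v * 16.0 * s**2 * (1.0 - s) ** 2      # zero outside the support

def oscillation(tt, amp, omega):
    """Rhythmic primitive: a smooth trajectory with period 2*pi/omega."""
    return amp * np.sin(omega * tt)

# Virtual trajectory: integrate the submovement velocity and superimpose
# an oscillation (modularity at the kinematic level).
x0 = np.cumsum(submovement_vel(t, v=0.5, t0=0.5, T=1.0)) * dt \
     + oscillation(t, amp=0.05, omega=2.0 * np.pi)
```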
#### 2.2.3 Mechanical Impedances Mechanical impedance \(Z\) is an operator which maps (generalized) displacement \(\Delta x(t)\in\mathbb{R}\) to (generalized) force \(F(t)\in\mathbb{R}\)(Hogan, 2017; Hogan and Buerger, 2018): \[Z:\Delta x(t)\longrightarrow F(t) \tag{6}\] In detail, \(\Delta x(t)\) is the displacement from an actual trajectory of (generalized) position \(x(t)\) and a virtual trajectory \(x_{0}(t)\) to which the mechanical impedance is connected, i.e., \(\Delta x(t)=x_{0}(t)-x(t)\). Loosely speaking, mechanical impedance is a generalization of stiffness to encompass nonlinear dynamic behavior (Hogan, 2017). The impedance operator of Equation (6) can denote both a map from joint displacement to torque, or a map from end-effector (generalized) displacement to (generalized) force. The former impedance operator is often referred to as "joint-space impedance," and the latter is often referred to as "task-space impedance." For task-space impedance, both translational (Hogan, 1985) and rotational displacement (Caccavele et al., 1998) of the end-effector can be considered separately. Along with the kinematic primitives, i.e., submovements and oscillations, EDAs include mechanical impedance as a distinct primitive to manage physical interaction (Hogan, 2017; Hogan and Buerger, 2018; Dietrich and Hogan, 2022; Hogan, 2022). The dynamics of physical interaction can be controlled by modulating mechanical impedance. For instance, tactile exploration and manipulation of fragile objects should evoke the use of low stiffness, while tasks such as drilling a hole on a surface requires high stiffness for object stabilization (Hogan and Buerger, 2018). Under the assumption that the environment is an admittance, mechanical impedances can be linearly superimposed even though each mechanical impedance is a nonlinear operator. This is the superposition principle of mechanical Figure 1: The three primitives of Elementary Dynamic Actions (EDAs). Submovements and oscillations correspond to kinematic primitives and mechanical impedances manage physical interaction. impedances (Hogan, 1985, 2017): \[Z=\sum Z_{i} \tag{7}\] This principle provides a modular framework for robot control that can simplify multiple control tasks, e.g., obstacle avoidance (Section 3.5) or managing kinematic redundancy (Section 3.9). Note that the impedance operators of Equation (7) can include transformation maps. For instance, to superimpose a joint-space impedance and a task-space impedance at the torque level, the task-space impedance is multiplied by a Jacobian transpose to map from end-effector (generalized) force to joint torques (Section 3.1.2). The choice of mechanical impedance decides whether the virtual trajectory \(x_{0}(t)\) is an attractor or repeller. The former can be used to produce discrete point-to-point movements (Section 3.2, 3.3), whereas the latter can be exploited for obstacle avoidance (Section 3.5) (Andrews and Hogan, 1983; Newman, 1987; Khatib, 1986; Hjorth et al., 2020). #### 2.2.4 Norton Equivalent Network Model The three distinct classes of EDAs -- submovements, oscillations, and mechanical impedances -- may be combined using a Norton equivalent network model (Hogan, 2017), which provides an effective framework to relate these classes of primitives (Figure 2). In detail, the forward-path dynamics (Figure 2) specifies the virtual trajectory \(x_{0}(t)\), which consists of submovements and/or oscillations. 
The interactive dynamics, which consists of mechanical impedances \(Z\), determines the generalized force output \(F(t)\) with the generalized displacement input \(\Delta x(t)\). Since the force output \(F(t)\) is determined by the choice of virtual trajectory \(x_{0}(t)\) and mechanical impedance \(Z\), a key objective of EDAs is to find appropriate choices of \(x_{0}(t)\) and \(Z\) to produce the desired robot behavior. Note that submovements and/or oscillations can be directly combined at the level of the virtual trajectory \(x_{0}(t)\), which thereby provides modularity at the kinematic level. As with the superposition principle of mechanical impedances (Section 2.2.3), the modular property at the kinematic level provides several benefits for multiple control tasks, e.g., combining discrete and rhythmic movements (Section 3.7), sequencing discrete movements (Section 3.8). As shown in Figure 2, EDAs neither control \(x(t)\) (i.e., position control) nor \(F(t)\) (i.e., force/torque control) directly. Hence, EDAs are fundamentally different from position control (Tanner, 1981; Arimoto, 1984), force/torque control (Whitney, 1977), or hybrid position/force control methods (Raibert and Craig, 1981). This is one of the key ideas of impedance control. This is also the reason why we have chosen the terminology "virtual trajectory" for \(x_{0}(t)\) instead of a "desired" or a "reference trajectory." Compared to tracking control methods which aim to follow a reference trajectory, \(x_{0}(t)\) of EDAs is simply a virtual trajectory to which the impedances are connected. This property of EDAs has several benefits for robot control with physical interaction. Compared to \(x(t)\) and \(F(t)\) that depend on the environment or object with which the robot interacts, the virtual trajectory \(x_{0}(t)\) and impedance operator \(Z\) can be modulated "independently," i.e., regardless of the environment or the manipulated object (Hogan and Buerger, 2018). For instance, force/torque control cannot be used for free-space motions and position control cannot be used in contact with a kinematically constrained environment. EDAs can be used for both cases since neither \(x(t)\) nor \(F(t)\) is controlled directly. Note that the Norton equivalent network model separates forward-path dynamics (virtual trajectory \(x_{0}(t)\)) from interactive dynamics (mechanical impedance \(Z\)). Hence, parallel optimization of \(x_{0}(t)\) and \(Z\) can be conducted. This has computational advantages for real-time control of robots with many DOF (Lachner et al., 2022). ## 3 Comparison of the Two Approaches In this Section, a detailed comparison between DMPs and EDAs is presented. To emphasize the similarities and differences between the two approaches, multiple simulation examples using the MuJoCo physics engine (Version 1.50) (Todorov et al., 2012) are presented. The code is available at [https://github.com/moseshah-shared/DMP-comparison](https://github.com/moseshah-shared/DMP-comparison). A list of examples, ordered in progressive complexity, is shown below: * A goal-directed discrete movement in joint-space (Section 3.2). * A goal-directed discrete movement in task-space (Section 3.3). * A goal-directed discrete movement in task-space, with unexpected physical contact (Section 3.4). * A goal-directed discrete movement in task-space, including obstacle avoidance (Section 3.5). * Rhythmic movement, both in joint-space and task-space (Section 3.6). 
* Combination of discrete and rhythmic movements, both in joint-space and task-space (Section 3.7). * A sequence of discrete movements in task-space (Section 3.8). * A single (or sequence of) discrete movement(s) in task-space, while managing kinematic redundancy (Section 3.9). Some of the examples reproduce human-subject experiments in motor control research, e.g., Burdet et al. (2001) for Section 3.3, Flash and Henis (1991) for Section 3.8. Figure 2: The three Elementary Dynamic Actions (EDAs) combined using a Norton equivalent network model. The virtual trajectory \(x_{0}(t)\) (yellow box) consists of submovements (orange box) and/or oscillations (blue box), and mechanical impedance \(Z\) (green box) regulates the interactive dynamics. While in general position (or motion) control can be used to encode motor primitives, we will focus on the control of torque-actuated robots. Position control would create challenges that restrict the set of tasks that can be achieved (Hogan, 2022). For instance, one of the challenges is the kinematic transformation from task-space coordinate to the robot's joint configuration, which complicates control in task-space. Another challenge is for tasks involving contact and physical interaction, which requires some level of compliance of the robotic manipulator. For the eight simulation examples, we highlight the challenges when using position-actuated robots, e.g., managing contact and physical interaction (Section 3.4), managing kinematic redundancy (Section 3.9). We show that torque-actuated robots can address these tasks without imposing such challenges. Further discussion is deferred to Section 4.2.1. Given a torque-actuated open-chain \(n\)-DOF robot manipulator, its dynamics is governed by the following differential equation (Murray et al., 1994): \[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{q} })\dot{\mathbf{q}}+\mathbf{G}(\mathbf{q})=\boldsymbol{\tau}_{\mathit{in}}(t)+ \boldsymbol{\tau}_{\mathit{ext}}(t) \tag{8}\] In this equation, \(\mathbf{q}\equiv\mathbf{q}(t)\in\mathbb{R}^{n}\) is the joint trajectory of the robot; \(\mathbf{M}(\mathbf{q})\in\mathbb{R}^{n\times n}\) and \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\in\mathbb{R}^{n\times n}\) are the mass and centrifugal/Coriolis matrices, respectively; \(\mathbf{G}(\mathbf{q})\in\mathbb{R}^{n}\) is the vector arising through gravitational potential energy; \(\boldsymbol{\tau}_{\mathit{in}}(t)\in\mathbb{R}^{n}\) is the torque input; \(\boldsymbol{\tau}_{\mathit{ext}}(t)\in\mathbb{R}^{n}\) is the resultant effect of external forces expressed as torque. For torque-controlled robots, the goal is to determine the torque input \(\boldsymbol{\tau}_{\mathit{in}}(t)\) which produces desired robot behavior. For brevity and to avoid clutter, we often omit argument \(t\). For the presented examples, the orientation of the end-effector is not considered, since the control of end-effector's position suffices to illustrate the differences between the two approaches. The details of implementing controllers for orientation can be found in references such as Pastor et al. (2011); Abu-Dakka et al. (2015) for DMPs and Lachner (2022) for EDAs. In this paper, we use \(\mathbf{p}\equiv\mathbf{p}(t)\in\mathbb{R}^{3}\) to denote the 3D Cartesian position of the end-effector and \(\mathbf{h}\) to represent the Forward Kinematic Map of the robot, i.e., \(\mathbf{p}=\mathbf{h}(\mathbf{q})\). 
Except for tasks with physical contact (e.g., Section 3.4), we assume \(\boldsymbol{\tau}_{ext}(t)=\mathbf{0}\). Finally, we assume that gravitational force \(\mathbf{G}(\mathbf{q})\) is compensated by the controller and can be neglected (Equation (8)).

### The Existence of an Inverse Dynamics Model

We first show that for torque-actuated robots, DMPs require an inverse dynamics model, whereas EDAs do not.

#### 3.1.1 Dynamic Movement Primitives

For DMPs, the transformation system (Equation (4)) represents kinematic relations. Hence for a torque-actuated robot, the approach requires an inverse dynamics model, which determines the feedforward joint torques \(\boldsymbol{\tau}_{in}(t)\) to generate the desired joint trajectory specified by \(\ddot{\mathbf{q}}(t)\), \(\dot{\mathbf{q}}(t)\), \(\mathbf{q}(t)\) (Ijspeert et al., 2013). The inverse dynamics model requires an exact model of the robot manipulator, i.e., exact values of \(\mathbf{M}(\mathbf{q})\) and \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\). The requirement of an inverse dynamics model also implies that the DMP approach is, in principle, a non-reactive feedforward control approach. To be robust against uncertainty or unexpected physical contact, a low-gain feedback control (e.g., PD control) is added to the feedforward inverse dynamics controller (Schaal et al., 2007; Pastor et al., 2013) (Section 3.4). Moreover, for control in task-space, where the maps from \(\ddot{\mathbf{p}}(t)\), \(\dot{\mathbf{p}}(t)\), \(\mathbf{p}(t)\) to \(\ddot{\mathbf{q}}(t)\), \(\dot{\mathbf{q}}(t)\), \(\mathbf{q}(t)\) are not always well defined, additional control methods must be employed. For instance, to solve inverse kinematics, methods presented by Whitney (1969); Liegeois (1977); Maciejewski and Klein (1988) are used (Park et al., 2008; Nakanishi et al., 2008; Pastor et al., 2009) (Section 3.9.3). For tracking control in task-space, feedback control methods such as sliding-mode control (Slotine and Li, 1991; Nakanishi et al., 2008) are employed (Section 3.9.3).

#### 3.1.2 Elementary Dynamic Actions

For EDAs, an inverse dynamics model is not required for a torque-actuated robot. Hence, an exact model of the robot, i.e., exact computation of \(\mathbf{M}(\mathbf{q})\) and \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\), is not essential, unless advanced applications such as inertia shaping (Khatib, 1995; Dietrich and Ott, 2019) are desired. Instead, the input torque command \(\boldsymbol{\tau}_{in}(t)\) is determined by superimposing mechanical impedances (Section 2.2.3):

\[\boldsymbol{\tau}_{in}(t)=\sum_{i}\mathbf{J}(\mathbf{q}(t))^{T}\mathbf{Z}_{p,i}(\Delta\mathbf{p}(t),t)+\sum_{i}\mathbf{Z}_{q,i}(\Delta\mathbf{q}(t),t)\]

In this equation, \(\mathbf{J}(\mathbf{q}(t))\equiv\mathbf{J}(\mathbf{q})\) is the Jacobian matrix (Siciliano et al., 2008); \(\mathbf{Z}_{p}\) and \(\mathbf{Z}_{q}\) denote task-space and joint-space impedances, respectively. Compared to DMPs, the EDA approach is, in principle, a reactive feedback control method. Instead of an inverse dynamics model, measurements \(\mathbf{q}(t)\) and a Forward Kinematics Map of the robot are required. Given these values, one can react to the environment by modulating impedance and/or the virtual trajectories to regulate the dynamics of physical interaction (Section 3.4).
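As a rough illustration of this torque superposition, the sketch below assembles \(\boldsymbol{\tau}_{in}(t)\) from one task-space and one joint-space impedance. The `forward_kinematics` and `jacobian` functions are hypothetical placeholders standing in for a robot model, and the linear spring-damper form of each impedance is only one possible choice; the gains are arbitrary.

```python
import numpy as np

def eda_torque(q, dq, p0, dp0, q0, dq0, forward_kinematics, jacobian,
               Kp, Bp, Kq, Bq):
    """Superimpose one task-space and one joint-space mechanical impedance."""
    p = forward_kinematics(q)          # end-effector position h(q) (placeholder model)
    J = jacobian(q)                    # task Jacobian J(q) (placeholder model)
    dp = J @ dq                        # end-effector velocity

    # Task-space impedance: force from displacement to the virtual trajectory p0(t).
    F_task = Kp @ (p0 - p) + Bp @ (dp0 - dp)

    # Joint-space impedance: torque from displacement to the virtual trajectory q0(t).
    tau_joint = Kq @ (q0 - q) + Bq @ (dq0 - dq)

    # Map the task-space force to joint torques and superimpose.
    return J.T @ F_task + tau_joint
```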
Moreover, with appropriate choices of \(\mathbf{Z}_{p}\), \(\mathbf{Z}_{q}\) and \(\mathbf{p}_{0}(t)\), \(\mathbf{q}_{0}(t)\), the controller preserves passivity, which thereby provides robustness against uncertainty and external disturbances (Section 3.4).

### A Goal-directed Discrete Movement in Joint-space

We consider designing a controller to generate a goal-directed discrete movement planned in joint-space coordinates.

#### 3.2.1 Dynamic Movement Primitives

The movement of each joint is represented by a transformation system. The \(n\) transformation systems are synchronized by using a single discrete canonical system as the input to the \(n\) nonlinear forcing terms (Ijspeert et al., 2013) (Section 2.1.4):

\[\begin{split}\tau^{2}\ddot{\mathbf{q}}(t)&=-\alpha_{c}\beta_{c}\{\mathbf{q}(t)-\mathbf{g}\}-\alpha_{c}\tau\dot{\mathbf{q}}(t)+\mathbf{f}(s(t))\\ \tau\dot{s}(t)&=-\alpha_{s}s(t)\end{split} \tag{9}\]

Note that different values of \(\alpha_{c}\) and \(\beta_{c}\) can be used for each transformation system. Nevertheless, it is sufficient to use identical values of \(\alpha_{c}\) and \(\beta_{c}\) for the \(n\) transformation systems. Without using Imitation Learning, i.e., \(\mathbf{f}(s(t))=\mathbf{0}\), the goal-directed discrete movement can be generated by setting \(\mathbf{g}\) as the goal location. As a result, \(\mathbf{q}(t)\) follows the motion of a stable second-order linear system which converges to \(\mathbf{g}\).

In case we want to generate a goal-directed discrete movement that also follows a specific joint trajectory, Imitation Learning can be used. Let \(\mathbf{q}_{des}(t)\) be the desired discrete joint trajectory with duration \(T\), such that \(\mathbf{q}_{des}(T)=\mathbf{g}\) and \(\tau=T\). The best-fit force \(\mathbf{\Gamma}^{*}(s(t))\) is calculated and used as the input to the \(n\) transformation systems to produce \(\ddot{\mathbf{q}}_{des}(t)\) (Equation (9)). With the initial conditions \(\mathbf{q}_{des}(0)\), \(\dot{\mathbf{q}}_{des}(0)\), the trajectories of \(\mathbf{q}_{des}(t)\), \(\dot{\mathbf{q}}_{des}(t)\) are calculated using numerical integration. The calculated \(\mathbf{q}_{des}(t)\), \(\dot{\mathbf{q}}_{des}(t)\), \(\ddot{\mathbf{q}}_{des}(t)\) are the input to the inverse dynamics model to generate the necessary torque input \(\boldsymbol{\tau}_{in}(t)\).

#### 3.2.2 Elementary Dynamic Actions

To generate a goal-directed discrete movement planned in joint-space coordinates, we construct the following controller:

\[\boldsymbol{\tau}_{in}(t)=\mathbf{K}_{q}\{\mathbf{q}_{0}(t)-\mathbf{q}\}+\mathbf{B}_{q}\{\dot{\mathbf{q}}_{0}(t)-\dot{\mathbf{q}}\} \tag{10}\]

In this equation, \(\mathbf{K}_{q},\mathbf{B}_{q}\in\mathbb{R}^{n\times n}\) are symmetric positive definite matrices which correspond to the joint stiffness and damping, respectively. This controller is a first-order joint-space impedance controller (Nah et al., 2020, 2021). To generate a goal-directed discrete movement, \(\mathbf{q}_{0}(t)\) is chosen to be a submovement which ends at goal location \(\mathbf{g}\), i.e., if the duration of the submovement is \(T\), then \(\mathbf{q}_{0}(T)=\mathbf{g}\). For constant positive definite \(\mathbf{K}_{q},\mathbf{B}_{q}\) matrices, \(\mathbf{q}(t)\) asymptotically converges to \(\mathbf{g}\) (Takegaki, 1981; Slotine and Li, 1991).

#### 3.2.3 Simulation Example

Consider a 2-DOF planar robot model, where each link consists of a single uniform slender bar with mass and length of 1kg and 1m, respectively.
Let the initial joint configuration be \(\mathbf{q}(0)=\mathbf{q}_{i}\in\mathbb{R}^{2}\). The code script for this simulation is main_joint_discrete.py. For \(\mathbf{q}_{des}(t)\) of DMPs and \(\mathbf{q}_{0}(t)\) of EDAs, both trajectories were set to be a minimum-jerk trajectory (Flash and Hogan, 1985; Hogan and Flash, 1987):

\[\begin{split}\mathbf{q}_{des}(t),\mathbf{q}_{0}(t)&=\begin{cases}\mathbf{q}_{i}+(\mathbf{g}-\mathbf{q}_{i})f_{MJT}(t)&0\leq t<T\\ \mathbf{g}&T\leq t\end{cases}\\ f_{MJT}(t)&=10\Big(\frac{t}{T}\Big)^{3}-15\Big(\frac{t}{T}\Big)^{4}+6\Big(\frac{t}{T}\Big)^{5}\end{split} \tag{11}\]

The simulation results are shown in Figure 3. DMPs generated the goal-directed discrete movement while achieving perfect tracking of the minimum-jerk trajectory. Once the weights of the nonlinear forcing terms are learned using Imitation Learning, one can regenerate the minimum-jerk trajectory by simply retrieving these learned weights. While the presented example used a minimum-jerk trajectory, Imitation Learning can be used to achieve tracking control of a trajectory with arbitrary complexity.

For EDAs, a non-zero error between \(\mathbf{q}_{0}(t)\) and \(\mathbf{q}(t)\) was observed. Hence, perfect tracking of the minimum-jerk trajectory was not achieved. Nevertheless, EDAs enabled asymptotic convergence of \(\mathbf{q}(t)\) towards the goal configuration \(\mathbf{g}\) without the need for an inverse dynamics model.

### A Goal-directed Discrete Movement in Task-space

We next design a controller to generate a goal-directed discrete movement of the end-effector in task-space coordinates. For this Section, we assume no kinematic redundancy of the robot model, i.e., the Jacobian matrix \(\mathbf{J}(\mathbf{q})\) is a square matrix. A method to manage kinematic redundancy is considered in Section 3.9. One of the goal positions is located at a kinematic singularity, i.e., a fully stretched configuration.

#### 3.3.1 Dynamic Movement Primitives

For DMPs, the task can be achieved by representing the end-effector trajectory \(\mathbf{p}(t)\) with a transformation system (Pastor et al., 2009):

\[\begin{split}\tau^{2}\ddot{\mathbf{p}}(t)&=-\alpha_{c}\beta_{c}(\mathbf{p}(t)-\mathbf{g})-\alpha_{c}\tau\dot{\mathbf{p}}(t)+\mathbf{f}(s(t))\\ \tau\dot{s}(t)&=-\alpha_{s}s(t)\end{split} \tag{12}\]

Once the desired end-effector trajectories \(\mathbf{p}_{des}\), \(\dot{\mathbf{p}}_{des}\), \(\ddot{\mathbf{p}}_{des}\) are computed with or without Imitation Learning (as discussed in Section 3.2), the inverse kinematics are used to calculate the corresponding joint trajectories:

\[\mathbf{q}_{des} =\mathbf{h}^{-1}(\mathbf{p}_{des})\]
\[\dot{\mathbf{q}}_{des} =\mathbf{J}(\mathbf{q}_{des})^{-1}\dot{\mathbf{p}}_{des}\]
\[\ddot{\mathbf{q}}_{des} =\mathbf{J}(\mathbf{q}_{des})^{-1}(\ddot{\mathbf{p}}_{des}-\dot{\mathbf{J}}(\mathbf{q}_{des})\dot{\mathbf{q}}_{des})\]

The calculated \(\ddot{\mathbf{q}}_{des},\dot{\mathbf{q}}_{des},\mathbf{q}_{des}\) are used as the input to the inverse dynamics model to calculate \(\boldsymbol{\tau}_{in}\) (Section 3.1). Note that the presented method for inverse kinematics cannot be used for a kinematically redundant robot (with fewer task-space than joint-space DOFs). To manage kinematic redundancy, along with the feedforward torque command from the inverse dynamics model, an additional feedback controller should be employed (Slotine and Li, 1991; Nakanishi et al., 2008; Pastor et al., 2009). This is considered further in Section 3.9.
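A minimal numerical sketch of this discrete transformation and canonical system (Equations (9) and (12)) is given below, with the nonlinear forcing term set to zero so that the state simply converges to the goal. The gains, time step, and goal values are illustrative assumptions and are not taken from the simulations reported here.

```python
import numpy as np

# Minimal discrete DMP: transformation system driven toward goal g,
# integrated with explicit Euler steps; the forcing term f(s) is zero here.
alpha_c, beta_c, alpha_s, tau = 10.0, 2.5, 1.0, 1.0
g = np.array([0.0, 1.0])                 # goal configuration (illustrative)
q = np.array([0.0, 0.0])                 # state: position
dq = np.zeros(2)                         # state: velocity
s = 1.0                                  # canonical system state
dt = 0.001

trajectory = [q.copy()]
for _ in range(int(3.0 / dt)):
    f = np.zeros(2)                      # would be the learned forcing term f(s)
    ddq = (-alpha_c * beta_c * (q - g) - alpha_c * tau * dq + f) / tau**2
    q, dq = q + dq * dt, dq + ddq * dt   # Euler integration of the transformation system
    s += (-alpha_s * s / tau) * dt       # canonical system: tau * ds/dt = -alpha_s * s
    trajectory.append(q.copy())
```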
#### 3.3.2 Elementary Dynamic Actions

To generate a goal-directed discrete movement planned in task-space coordinates, we construct the following controller:

\[\boldsymbol{\tau}_{in}(t)=\mathbf{J}(\mathbf{q})^{\mathrm{T}}\big[\mathbf{K}_{p}\{\mathbf{p}_{0}(t)-\mathbf{p}\}+\mathbf{B}_{p}\{\dot{\mathbf{p}}_{0}(t)-\dot{\mathbf{p}}\}\big] \tag{13}\]

In this equation, \(\mathbf{K}_{p},\mathbf{B}_{p}\in\mathbb{R}^{3\times 3}\) (\(\mathbb{R}^{2\times 2}\) for the planar simulations presented below) are constant symmetric positive definite matrices which correspond to translational stiffness and damping, respectively. This controller is a first-order task-space impedance controller (Hermus et al., 2021; Verdi, 2019). Compared to DMPs (Section 3.3.1), this controller does not require a Jacobian inverse. Hence, the torque input is always well defined near and even at kinematic singularities. To generate a goal-directed discrete movement, as with the first-order joint-space impedance controller (Section 3.2), \(\mathbf{p}_{0}(t)\) is chosen to be a submovement which ends at goal location \(\mathbf{g}\). For constant positive definite \(\mathbf{K}_{p},\mathbf{B}_{p}\) matrices, \(\mathbf{p}(t)\) asymptotically converges to \(\mathbf{g}\) (Takegaki, 1981). This also implies an asymptotic convergence of \(\mathbf{q}(t)\to\mathbf{h}^{-1}(\mathbf{g})\).

Figure 3: A goal-directed discrete movement in joint-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.2.3). The black dotted line is a minimum-jerk trajectory (Equation (11)). Goal location: \(\mathbf{g}=[0,1]\) rad. Parameters of the minimum-jerk trajectory: \(\mathbf{q}_{i}=[0,0]\) rad, \(T=1\)s. Parameters of DMPs: \(\alpha_{c}=10\), \(\beta_{c}=2.5\), \(\alpha_{s}=1.0\), \(\tau=T\), \(N=50\), \(P=100\), \(c_{i}=\exp(-\alpha_{s}(i-1)/(N-1))\) for \(i\in[1,2,\cdots,N]\), \(h_{i}=1/(c_{i+1}-c_{i})^{2}\) for \(i\in[1,2,\cdots,N-1]\), \(h_{N}=h_{N-1}\). Parameters of EDAs: \(\mathbf{K}_{q}=150\mathbf{I}_{2}\) N-m/rad, \(\mathbf{B}_{q}=50\mathbf{I}_{2}\) N-m/rad, where \(\mathbf{I}_{2}\in\mathbb{R}^{2\times 2}\) is an identity matrix. For DMP, perfect tracking was achieved. For EDA, a non-negligible tracking error was observed.

#### 3.3.3 Simulation Example

The simulation example in Figure 4 reproduced the movement of the experiment conducted in Burdet et al. (2001). The goal-directed discrete movement was made in a direction away from the robot base, along the positive \(Y\)-axis direction (Figure 4). The amplitude of the movement was varied such that one of the movements reached a fully stretched configuration, i.e., a configuration at kinematic singularity. The code script for this simulation is main_task_discrete.py. We used the 2-DOF planar robot model from Section 3.2 that was constrained to move within the \(XY\)-plane. For DMPs and EDAs, both \(\mathbf{p}_{des}(t)\) and \(\mathbf{p}_{0}(t)\) were chosen to be a minimum-jerk trajectory (Equation (11)):

\[\begin{split}\mathbf{p}_{des}(t),\mathbf{p}_{0}(t)&=\begin{cases}\mathbf{p}_{i}+(\mathbf{g}-\mathbf{p}_{i})f_{MJT}(t)&0\leq t<T\\ \mathbf{g}&T\leq t\end{cases}\\ f_{MJT}(t)&=10\Big(\frac{t}{T}\Big)^{3}-15\Big(\frac{t}{T}\Big)^{4}+6\Big(\frac{t}{T}\Big)^{5}\end{split} \tag{14}\]

In this equation, \(\mathbf{p}_{i}\) is the initial position of the end-effector.
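The minimum-jerk profile of Equations (11) and (14) is straightforward to implement; the sketch below is one possible vectorized version. The start, goal, and duration values are chosen only for illustration and are not the values used in the reported simulations.

```python
import numpy as np

def min_jerk(t, x_start, x_goal, T):
    """Minimum-jerk trajectory from x_start to x_goal over duration T; constant afterwards."""
    s = np.clip(t / T, 0.0, 1.0)                      # normalized time in [0, 1]
    f = 10.0 * s**3 - 15.0 * s**4 + 6.0 * s**5        # smooth 0-to-1 profile f_MJT
    return x_start + (x_goal - x_start) * f

# Example usage with illustrative start/goal positions (2D planar case).
t = np.linspace(0.0, 2.0, 201)
p0 = min_jerk(t[:, None], np.array([0.0, 0.5]), np.array([0.0, 1.2]), T=1.0)
```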
Figure 4: A goal-directed discrete movement in task-space for (A, C, E) Dynamic Movement Primitives (DMPs, blue) and (B, D, E) Elementary Dynamic Actions (EDAs, orange) (Section 3.3.3). (A, B, C, D) Goal-directed discrete movements in a direction along the positive \(Y\)-axis direction, but (C, D) reached kinematic singularity. The black dotted line is a minimum-jerk trajectory (Equation (14)). (E) Time \(t\) vs. \(\sigma_{min}(\mathbf{J}(\mathbf{q}))/\sigma_{max}(\mathbf{J}(\mathbf{q}))\) of both robots for (C, D), where \(\sigma\) is a singular value of the matrix. For DMPs, the MuJoCo simulation halted near 0.998. Parameters of the minimum-jerk trajectory: (A, B) \(\mathbf{p}_{i}=[0.0,0.52]\)m, \(\mathbf{g}=[0.0,0.52]\)m, \(T=1.0\)s, where \(\mathbf{p}_{i}\) is the initial end-effector position. Parameters of DMPs are identical to those in Figure 3. Parameters of EDAs: \(\mathbf{K}_{p}=60\mathbf{I}_{2}\) N/m, \(\mathbf{B}_{p}=20\mathbf{I}_{2}\) N-s/m. For DMP, perfect tracking was achieved when the motion was not near a kinematic singularity. However, for (C), when the goal \(\mathbf{g}\) was at a kinematic singularity, the approach failed to achieve the task. For EDA, a non-negligible tracking error existed, but goal-directed discrete movements were achieved for both (A) and (C). The approach based on EDAs was not only stable near and at a kinematic singularity but even passed through it (E, near \(t=1.0\) and \(t=2.4\), depicted as grey circle markers).

As shown in Figure 4A, 4B, both approaches successfully generated discrete movements that converged to the desired goal location. As discussed in Section 3.2.3, DMPs achieved the goal-directed discrete movement with perfect tracking. For EDAs, a goal-directed discrete movement was achieved but a tracking error still existed. Note that EDAs did not require the Jacobian inverse. The benefit of this property was emphasized when the planar robot model reached for a fully stretched configuration. As shown in Figure 4C, 4D, 4E, DMPs became numerically unstable when the robot model approached a kinematic singularity. For EDAs, not only was the approach stable, but the approach even "passed through" the kinematic singularity (Figure 4E, near \(t=1.0\) and \(t=2.4\), depicted as grey circle markers) without any numerical instability of the controller.

Note that the robot model oscillated back and forth between its "left-hand" and "right-hand" configurations, passing through the singularity multiple times. This occurred because at the singular configuration the controller included no effective damping, hence no means to dissipate the angular momentum of the robot links. This oscillation may be suppressed by adding non-zero joint-space damping, an example of impedance superposition (Section 2.2.3, 3.5, 3.9).

To manage kinematic singularity for DMPs, methods such as the "damped least-squares inverse" (Nakamura and Hanafusa, 1986; Chiaverini et al., 1994) can be employed. In principle, these methods manually revise near-zero singular values to strictly positive values to avoid a singular \(\mathbf{J}(\mathbf{q})\) matrix. However, this results in an approximate rather than exact value of the inverse kinematics near a kinematic singularity. This error in the joint trajectory propagates to the feedforward torque command from the inverse dynamics model. Hence, an additional feedback control method such as joint-space PD control (Section 3.4) or sliding-mode control (Section 3.9) should be employed to ensure stability and reasonable tracking performance.
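For reference, a common form of the damped least-squares inverse mentioned above is \(\mathbf{J}^{\mathrm{T}}(\mathbf{J}\mathbf{J}^{\mathrm{T}}+\delta^{2}\mathbf{I})^{-1}\). The sketch below implements this standard form; the damping constant is chosen arbitrarily for illustration, and the Jacobian function is a placeholder for a robot-specific model.

```python
import numpy as np

def damped_pinv(J, damping=0.05):
    """Damped least-squares inverse: J^T (J J^T + damping^2 * I)^-1.

    Near a kinematic singularity, J J^T becomes ill-conditioned; the damping
    term keeps the inverse bounded at the cost of tracking accuracy.
    """
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))

# Example: joint velocities for a desired end-effector velocity dp_des.
# J = jacobian(q)              # robot-specific Jacobian (placeholder)
# dq = damped_pinv(J) @ dp_des
```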
### Tasks with Unexpected Physical Contact

We next consider a case where unexpected physical contact is made while conducting the point-to-point discrete reaching movement presented in Section 3.3.3 (Figure 4A, 4B).

#### 3.4.1 Dynamic Movement Primitives

As discussed in Section 3.1, DMPs use a feedforward torque command calculated from the inverse dynamics model. To handle possible instabilities for control tasks involving unexpected contact, a feedback torque command is superimposed on the feedforward torque command. Let \(\boldsymbol{\tau}_{ff}(t)\) be the feedforward torque command (Section 3.1, 3.3) to produce \(\mathbf{p}_{des}(t)\). A feedback torque command \(\boldsymbol{\tau}_{fb}(t)\) based on a low-gain joint-space PD controller is superimposed on \(\boldsymbol{\tau}_{ff}(t)\):

\[\begin{split}\boldsymbol{\tau}_{fb}(t)&=\mathbf{K}_{q}\{\mathbf{q}_{des}(t)-\mathbf{q}\}+\mathbf{B}_{q}\{\dot{\mathbf{q}}_{des}(t)-\dot{\mathbf{q}}\}\\ \boldsymbol{\tau}_{in}(t)&=\boldsymbol{\tau}_{ff}(t)+\boldsymbol{\tau}_{fb}(t)\end{split} \tag{15}\]

The gain values for \(\mathbf{K}_{q}\) and \(\mathbf{B}_{q}\) are manually chosen to be sufficiently small to manage unexpected contacts (Schaal et al., 2007). Moreover, \(\mathbf{q}_{des}(t)\), \(\dot{\mathbf{q}}_{des}(t)\) are derived by inverse kinematics of \(\mathbf{p}_{des}(t)\), \(\dot{\mathbf{p}}_{des}(t)\) (Section 3.3.1). The gains \(\mathbf{K}_{q}\) and \(\mathbf{B}_{q}\) can also be determined by stochastic optimal control (Theodorou et al., 2010; Buchli et al., 2011).

Note that for ideal torque-actuated robots, the joint-space PD controller (Equation (15)) is identical to the first-order joint-space impedance controller of EDAs (Equation (10)). However, it would be a mistake to conclude that PD control is identical to impedance control (Won et al., 1997). Further discussion is deferred to Section 4.2.3.

#### 3.4.2 Elementary Dynamic Actions

As discussed in Hogan (1985) and Hogan (2022), an impedance controller is robust against unexpected physical contact with passive environments. While it is common to use constant mechanical impedances, mechanical impedance can be modulated to regulate the dynamics of physical interaction (Lachner et al., 2021). In detail, the first-order task-space impedance controller of Equation (13) can be adapted by modulating the translational stiffness and damping values:

\[\boldsymbol{\tau}_{in}=\mathbf{J}(\mathbf{q})^{\mathrm{T}}\big[\mathbf{K}_{p}^{\prime}(\lambda)\{\mathbf{p}_{0}-\mathbf{p}\}+\mathbf{B}_{p}^{\prime}(\lambda)\{\dot{\mathbf{p}}_{0}-\dot{\mathbf{p}}\}\big] \tag{16}\]

In this equation, \(\mathbf{K}_{p}^{\prime}(\lambda)=\lambda\,\mathbf{K}_{p}\) and \(\mathbf{B}_{p}^{\prime}(\lambda)=c\lambda\,\mathbf{K}_{p}\); \(\mathbf{K}_{p}\) is a constant symmetric positive definite matrix; \(\lambda\in[0,1]\), and \(c\) is a positive constant that determines the damping ratio.
With a slight abuse of notation, \(\mathbf{K}_{p}^{\prime}(\lambda)\) and \(\mathbf{B}_{p}^{\prime}(\lambda)\) were modulated via \(\lambda\equiv\lambda(\mathcal{T},\mathcal{U},\mathcal{L}_{max})\), which is a function of the kinetic energy of the robot \(\mathcal{T}\equiv\mathcal{T}\,(\mathbf{q}(t),\dot{\mathbf{q}}(t))\), elastic potential energy \(\mathcal{U}\equiv\mathcal{U}\,(\Delta\mathbf{p}(t),\mathbf{K}_{p})\) due to the translational stiffness \(\mathbf{K}_{p}\) and displacement \(\Delta\mathbf{p}(t)\), and the energy threshold \(\mathcal{L}_{max}\): \[\lambda=\left\{\begin{aligned} 1&\quad\text{if }\mathcal{L}_{c}(t,\lambda)\leq\mathcal{L}_{\max}\\ \max\left(\frac{1}{\mathcal{U}}(\mathcal{L}_{\max}-\mathcal{T}), \ 0\right)&\quad\text{if }\mathcal{L}_{c}(t,\lambda)>\mathcal{L}_{\max}\end{aligned}\right.\] where: \[\mathcal{T}(\mathbf{q},\dot{\mathbf{q}}) =\frac{1}{2}\dot{\mathbf{q}}^{\mathrm{T}}\mathbf{M}(\mathbf{q}) \dot{\mathbf{q}}\] \[\mathcal{U}(\Delta\mathbf{p},\mathbf{K}_{p}) =\frac{1}{2}\Delta\mathbf{p}^{\mathrm{T}}\mathbf{K}_{p}\Delta \mathbf{p}\] \[\mathcal{L}_{c}(t,\lambda) =\mathcal{T}(\mathbf{q},\dot{\mathbf{q}})+\lambda\,\mathcal{U}( \Delta\mathbf{p})\] This controller limits the impact of an unexpected contact, e.g., during physical Human-Robot Interaction (pHRI) (Lachner, 2022), by bounding the total energy of the robot via \(\lambda\). For \(\mathcal{T}\leq\mathcal{L}_{\max}\), the total energy of the robot \(\mathcal{L}_{c}\) is always less than or equal to \(\mathcal{L}_{\max}\). To avoid negative stiffness and damping matrices emerging from the condition \(\mathcal{T}>\mathcal{L}_{\max}\), \(\lambda\) is set to be zero via the max-function. For pHRI, the value \(\mathcal{L}_{\max}\) is defined by standards and regulations (International Organization for Standardization, 2016) and is dependent on the environment between the robot and the human. While the presented controller is a simplified example, a more advanced application exists where the damping term can be modulated as a function of stiffness and inertia (Albu-Schaffer et al., 2003). Moreover, the dissipative behavior of the robot can be modulated to limit the robot power, e.g., by using "damping injection" (Stramigioli, 2015). #### 3.4.3 Simulation Example As in Section 3.3.3, we used a 2-DOF planar robot model to generate a goal-directed discrete movement in task-space coordinates. In this example, a square-shaped obstacle was placed to block the robot path. A few seconds after the first contact with the robot, the obstacle was moved aside and the robot could continue its motion (Andrews and Hogan, 1983; Newman, 1987). The code script for this simulation is main_unexpected_contact.py. For DMPs, without a low-gain PD feedback controller, the learned weights from Section 3.3.3 were reused. For this controller, the robot model bounced back from the obstacle due to contact and failed to reach the goal (Figure 5B). Hence, it was necessary to add a low-gain PD controller (Figure 5C). The presented example showed the modular property of DMPs, since the learned feedforward torque controller from Section 3.3 was reused without modification, and superimposed with an additional feedback controller. For EDAs, both the controller with and without energy limitation were able to reach the goal (Figure 5F, 5G). However in the latter case, high accelerations of the end-effector occured after the obstacle was removed (Figure 5H). 
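The sketch below illustrates how the scaling factor \(\lambda\) described in Section 3.4.2 can be computed from the robot's kinetic energy and the elastic energy stored in the task-space stiffness. The mass matrix and stiffness are assumed to be available from the robot model, and the threshold value is illustrative rather than a prescribed standard.

```python
import numpy as np

def energy_scaling(dq, delta_p, M, Kp, L_max):
    """Scale stiffness/damping so that kinetic + scaled elastic energy <= L_max."""
    T = 0.5 * dq @ (M @ dq)              # kinetic energy of the robot
    U = 0.5 * delta_p @ (Kp @ delta_p)   # elastic energy stored in Kp for displacement delta_p
    if T + U <= L_max:                   # unscaled impedance already satisfies the bound
        return 1.0
    return max((L_max - T) / U, 0.0) if U > 0.0 else 0.0

# Illustrative usage (variable names are placeholders for robot-model quantities):
# lam = energy_scaling(dq, p0 - p, M, Kp, L_max=2.5)
# tau_in = J.T @ (lam * Kp @ (p0 - p) + c * lam * Kp @ (dp0 - dp))
```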
As shown in Figure 6, \(\lambda\) is regulated to limit the total energy to be less than \(\mathcal{L}_{max}\). By including mechanical impedance as a separate primitive, the EDA approach was able to regulate the interactive dynamics between the robot and the environment.

Figure 5: A goal-directed discrete movement with unexpected physical contact for (A, B, C, D) Dynamic Movement Primitives (DMPs, blue) and (E, F, G, H) Elementary Dynamic Actions (EDAs, orange) (Section 3.4.3). For DMP: (A) Moment of contact. (A\(\rightarrow\)B) DMP without PD controller, (A\(\rightarrow\)C) DMP with PD controller. (D) Time vs. \(Y\)-coordinate of the end-effector for A\(\rightarrow\)B (dotted line) and A\(\rightarrow\)C (filled line). For EDA: (E) Moment of contact. (E\(\rightarrow\)F) EDA without impedance modulation, (E\(\rightarrow\)G) EDA with impedance modulation. (H) Time vs. \(Y\)-coordinate of the end-effector for E\(\rightarrow\)F (dotted line) and E\(\rightarrow\)G (filled line). With impedance modulation of E\(\rightarrow\)G, a slower movement than E\(\rightarrow\)F was achieved. (D, H) The black dotted lines are a minimum-jerk trajectory (Equation (14)). Parameters of the minimum-jerk trajectory, DMPs, and EDAs are identical to those in Figure 4. Parameters of the PD controller of DMPs (Equation (15)): \(\mathbf{K}_{q}=50\mathbf{I}_{2}\) N-m/rad, \(\mathbf{B}_{q}=30\mathbf{I}_{2}\) N-m/rad. Smaller gain values than Figure 3 were chosen to achieve a relatively low-gain PD controller as recommended by Schaal et al. (2007) and Pastor et al. (2013). Parameters of EDAs: \(\mathcal{L}_{max}=2.5\)J.

Figure 6: Simulation results using Elementary Dynamic Actions (EDAs) with impedance modulation via energy regulation (Equation (16)) (Figure 5) (Section 3.4.3). (Top) Time \(t\) vs. \(\lambda\) of the controller. After the first contact, a sudden change in \(\lambda\) occurred. (Bottom) Time vs. \(\mathcal{L}_{c}\) of the controller. Since the zero-force trajectory continued during contact, the decrease of \(\lambda\) limited \(\mathcal{L}_{c}\) to a maximal value of 2.5J. As expected, \(\mathcal{L}_{c}\) did not exceed \(\mathcal{L}_{max}=2.5\)J.

### Obstacle Avoidance

We next consider obstacle avoidance while conducting a point-to-point discrete reaching movement presented in Section 3.3.3. Moreover, we assume that the obstacle is fixed at location \(\mathbf{o}\in\mathbb{R}^{3}\) (\(\mathbb{R}^{2}\) for the planar simulations), although the method can be easily generalized to non-stationary obstacles.

#### 3.5.1 Dynamic Movement Primitives

To avoid an obstacle located at \(\mathbf{o}\in\mathbb{R}^{3}\), a coupling term \(\mathbf{C}_{t}(t)\) is added to the transformation system (Hoffmann et al., 2009; Ijspeert et al., 2013):

\[\tau^{2}\ddot{\mathbf{p}}(t)=-\alpha_{c}\beta_{c}\{\mathbf{p}(t)-\mathbf{g}\}-\alpha_{c}\tau\dot{\mathbf{p}}(t)+\mathbf{f}(s(t))+\mathbf{C}_{t}(t)\]

The coupling term \(\mathbf{C}_{t}(t)\) is defined by:

\[\begin{split}\mathbf{C}_{t}(t)&=\gamma\mathbf{R}(t)\dot{\mathbf{p}}(t)\theta(t)\exp(-\beta\theta(t))\\ \theta(t)&=\arccos\left(\frac{\left[\mathbf{o}-\mathbf{p}(t)\right]^{T}\dot{\mathbf{p}}(t)}{||\mathbf{o}-\mathbf{p}(t)||\;||\dot{\mathbf{p}}(t)||}\right)\end{split} \tag{17}\]

In these equations, \(\gamma\) and \(\beta\) are positive constants which correspond to the amplitude of the coupling term and its exponential decay rate, respectively; \(\theta(t)\) is the angle between \(\dot{\mathbf{p}}(t)\) and \(\mathbf{o}-\mathbf{p}(t)\); \(||\cdot||:\mathbb{R}^{3}\rightarrow\mathbb{R}_{\geq 0}\) is the Euclidean norm operator; \(\mathbf{R}(t)\in\mathbb{R}^{3\times 3}\) is a rotation matrix representing the \(\pm 90^{\circ}\) rotation about axis \(\mathbf{r}(t)\equiv\left\{\mathbf{o}-\mathbf{p}(t)\right\}\times\dot{\mathbf{p}}(t)\). For a spatial task, \(\mathbf{R}(t)\) can be calculated using Rodrigues' formula (Murray et al., 1994):

\[\begin{split}\mathbf{R}(t)&=\mathbf{I}_{3}+\sin\big(\pm\tfrac{\pi}{2}\big)[\hat{\mathbf{r}}(t)]+\Big\{1-\cos\big(\pm\tfrac{\pi}{2}\big)\Big\}[\hat{\mathbf{r}}(t)]^{2}\\ &=\mathbf{I}_{3}\pm[\hat{\mathbf{r}}(t)]+[\hat{\mathbf{r}}(t)]^{2}\end{split}\]

In this equation, \(\hat{\mathbf{r}}(t)\) is the normalization of \(\mathbf{r}(t)\); \([\hat{\mathbf{r}}(t)]\) is a skew-symmetric matrix form of \(\hat{\mathbf{r}}(t)\) (Murray et al., 1994). The \(+\) and \(-\) signs represent \(+90^{\circ}\) and \(-90^{\circ}\) rotation about axis \(\hat{\mathbf{r}}(t)\), respectively. Intuitively, the coupling term forces the movement to move away from the obstacle (Ijspeert et al., 2013). For a planar task, \(\mathbf{R}(t)\) is a skew-symmetric matrix with \(\pm 1\) and \(\mp 1\) as off-diagonal terms.

#### 3.5.2 Elementary Dynamic Actions

For EDAs, the idea resembles the method of obstacle avoidance using potential fields (Andrews and Hogan, 1983; Newman, 1987; Hogan, 1985; Khatib, 1985; Koditschek, 1987; Hjorth et al., 2020). A mechanical impedance which produces a repulsive force from the obstacle (i.e., a mechanical impedance with a point "repeller" at the obstacle location) is superimposed on the task-space impedance controller:

\[\begin{split}\mathbf{Z}_{p,1}(t)&=\mathbf{K}_{p}\{\mathbf{p}_{0}(t)-\mathbf{p}(t)\}+\mathbf{B}_{p}\{\dot{\mathbf{p}}_{0}(t)-\dot{\mathbf{p}}(t)\}\\ \mathbf{Z}_{p,2}(t)&=-\frac{k}{||\mathbf{o}-\mathbf{p}(t)||^{n}}(\mathbf{o}-\mathbf{p}(t))\\ \boldsymbol{\tau}_{in}(t)&=\mathbf{J}(\mathbf{q})^{\mathrm{T}}\{\mathbf{Z}_{p,1}(t)+\mathbf{Z}_{p,2}(t)\}\end{split} \tag{18}\]

In these equations, \(k\) is a positive constant that determines the amplitude of the repulsive force; \(n\) is a positive integer. Intuitively, the impedance term \(\mathbf{Z}_{p,2}(t)\) produces a "potential barrier" which repels the robot from the obstacle.

#### 3.5.3 Simulation Example

As in Section 3.3.3, we used the 2-DOF planar robot model to generate a goal-directed discrete movement in task-space coordinates. However, a stationary obstacle was located at \(\mathbf{o}\), blocking the path. The code script for this simulation is main_obstacle_avoidance.py.

As shown in Figure 7, both approaches successfully achieved the task. Nevertheless, differences between the two approaches were observed. DMPs considered the problem from a position control perspective. The coupling term \(\mathbf{C}_{t}(t)\) (Equation (17)) controlled the acceleration of the end-effector position \(\mathbf{p}(t)\) for obstacle avoidance. As with Section 3.4, the presented example also highlighted the modular property of DMPs. Without modification and reusing the learned weights from Section 3.3.3, DMPs simply superimposed coupling term \(\mathbf{C}_{t}(t)\) onto the transformation systems to modify the learned trajectory. This example shows the property of online trajectory modulation of DMPs, which is an advantage over spline methods (Section 2.1.4). In comparison, EDAs achieved obstacle avoidance without explicit path planning (Hogan, 1985).
Instead, both goal-reaching and obstacle avoidance tasks were achieved by using the superposition principle of mechanical impedances (Equation (7)). The modular property of the superposition principle enabled EDAs to divide the task into multiple sub-tasks, allocate an appropriate mechanical impedance for each sub-task, and then combine the mechanical impedances to solve the original task. For this case, mechanical impedance \(\mathbf{Z}_{p,1}\) was used for the goal-directed discrete movement planned in task-space coordinates and mechanical impedance \(\mathbf{Z}_{p,2}\) was used for obstacle avoidance. As with the weights of DMPs, \(\mathbf{Z}_{p,1}\) was simply reused from Section 3.3.3 without any modification. While the modular property of EDAs simplified the approach, care is required for implementation. Since EDAs do not explicitly plan the path, for Figure 7, a slight offset (2cm) in the positive \(X\) direction was added to "push away" the robot to the counterclockwise direction (Andrews and Hogan, 1983). Without this offset, the robot's end-effector could get stuck, or move in a clockwise direction which thereby resulted in collision between the robot links and the obstacle. Several methods, such as using time-varying potential fields, have been demonstrated to overcome these limitations. For brevity, further details are not reviewed here and the reader is referred to Andrews and Hogan (1983); Newman (1987); Khatib (1986); Hjorth et al. (2020). ### Rhythmic Movement We next consider a method to generate a rhythmic, repetitive movement. For this example, we considered movements planned in joint-space and task-space respectively. #### 3.6.1 Dynamic Movement Primitives For DMPs, we used a canonical system and nonlinear forcing terms of a rhythmic movement (Section 2.1). From the generated \(\mathbf{q}(t)\) (respectively \(\mathbf{p}(t)\)), the process discussed in Section 3.2 (respectively Section 3.3) was conducted. #### 3.6.2 Elementary Dynamic Actions For EDAs, we defined \(\mathbf{q}_{0}(t)\) (respectively \(\mathbf{p}_{0}(t)\)) to be a rhythmic movement which we aimed to follow. With this virtual trajectory, either a joint-space (Equation (10)) or task-space impedance controller (Equation (13)) was used. #### 3.6.3 Simulation Example We used the 2-DOF planar robot model from Section 3.2 and Section 3.3. The code scripts for the simulations are main_joint_rhythmic.py for joint-space and main_task_rhythmic.py for task-space. The rhythmic movement in joint-space followed a sinusoidal trajectory. For DMPs and EDAs, both \(\mathbf{q}_{des}(t)\) and \(\mathbf{q}_{0}(t)\) were defined by: \[\mathbf{q}_{des}(t),\mathbf{q}_{0}(t)=\mathbf{q}_{i}+\mathbf{q}_{4}\sin( \omega_{0}t) \tag{19}\] In this equation, \(\mathbf{q}_{4}\in\mathbb{R}^{2}\) is the amplitude of the rhythmic movement. The rhythmic movement in task-space followed a circular trajectory. For DMPs and EDAs, both \(\mathbf{p}_{des}(t)\) and \(\mathbf{p}_{0}(t)\) were defined by: \[\mathbf{p}_{des}(t),\mathbf{p}_{0}(t)=\mathbf{c}+[r_{0}\cos(\omega_{0}t),r_{0} \sin(\omega_{0}t)] \tag{20}\] In this equation, \(\mathbf{c}\in\mathbb{R}^{2}\) is the center location of the circle; \(r_{0}\) is the radius of the circle. As shown in Figure 8, both DMPs and EDAs successfully generated rhythmic movement in joint-space (Figure 8A, 8B, 8C, 8D) and task-space (Figure 8E, 8F). 
Figure 7: A goal-directed discrete movement with obstacle avoidance for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.5.3). (Left) Trajectory of the end-effector position. Initial end-effector position \(\mathbf{p}_{i}\) and goal location \(\mathbf{g}\) are depicted as black circle markers. (Right) Time-lapse of the movement for DMP (Top, blue) and EDA (Bottom, orange). Dotted lines are a minimum-jerk trajectory. Obstacle location \(\mathbf{o}=[0.0,1.14]\)m. Parameters of DMPs: \(\gamma=300\), \(\beta=-3\) (Equation (17)). Parameters of EDAs: \(k=0.1\), \(n=6\) (Equation (18)). Other parameters are identical to those in Figure 4.

As discussed in Section 3.2.3 and Section 3.3.3, for DMPs, perfect tracking was achieved in both joint-space and task-space. Given the period of the rhythmic movement (Section 2.1.4), rhythmic trajectories with arbitrary complexity can be learned using Imitation Learning. For EDAs, tracking error existed in both joint-space and task-space. Nevertheless, EDAs generated a rhythmic, repetitive movement without an inverse dynamics model and without solving the inverse kinematics.

### Combination of Discrete and Rhythmic Movements

We next design a controller to generate a combination of discrete and rhythmic movements (Hogan and Sternad, 2007). For this example, we considered a movement planned in both joint-space and task-space.

#### 3.7.1 Dynamic Movement Primitives

For DMPs, the canonical system and nonlinear forcing term are different for discrete and rhythmic movements (Section 2.1.1, 2.1.2). Hence, rhythmic and discrete DMPs cannot be directly combined. Instead, the discrete DMP generates a time-changing goal \(\mathbf{g}(t)\) for the rhythmic DMP (Degallier et al., 2006, 2007, 2008):

\[\begin{split}\tau_{1}^{2}\ddot{\mathbf{q}}(t)&=-\alpha_{z,1}\beta_{z,1}\{\mathbf{q}(t)-\mathbf{g}(t)\}-\alpha_{z,1}\tau_{1}\dot{\mathbf{q}}(t)+\mathbf{f}(s(t))\\ s(t)&=\frac{t}{\tau_{1}}\mod 2\pi\\ \tau_{2}^{2}\ddot{\mathbf{g}}(t)&=-\alpha_{z,2}\beta_{z,2}\{\mathbf{g}(t)-\mathbf{g}_{0}\}-\alpha_{z,2}\tau_{2}\dot{\mathbf{g}}(t)\end{split}\]

The first two equations represent rhythmic DMPs, and the last equation represents discrete DMPs without a nonlinear forcing term and a canonical system. \(\mathbf{g}_{0}\) is the discontinuous change of the goal location to which goal \(\mathbf{g}(t)\) converges. Note that for control in task-space, \(\mathbf{q}(t)\) terms in the equations are substituted with \(\mathbf{p}(t)\). From the generated \(\mathbf{q}(t)\) (respectively \(\mathbf{p}(t)\)), the process discussed in Section 3.2 (respectively Section 3.3) was conducted.

#### 3.7.2 Elementary Dynamic Actions

For EDAs, we simply combine submovements \(\mathbf{q}_{0,sub}(t)\) and oscillations \(\mathbf{q}_{0,osc}(t)\) to define the virtual trajectory \(\mathbf{q}_{0}(t)\):

\[\mathbf{q}_{0}(t)=\mathbf{q}_{0,sub}(t)+\mathbf{q}_{0,osc}(t)\]

Note that for control in task-space, \(\mathbf{q}\) terms in the equation are substituted with \(\mathbf{p}\). With this virtual trajectory, either a joint-space (Equation (10)) or a task-space impedance controller (Equation (13)) is used.

#### 3.7.3 Simulation Example

Using the 2-DOF robot model in Section 3.2, the goal was to generate a combination of discrete and rhythmic movements both in joint-space and task-space. The code scripts for the simulations are main_joint_discrete_and_rhythmic.py for joint-space and main_task_discrete_and_rhythmic.py for task-space.
Figure 8: Rhythmic movements in (A, B, C, D) joint-space and (E, F) task-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.6.3). (A, B) Rhythmic movement in joint-space and its (C, D) joint trajectories. (C, D) The black dashed lines, which perfectly overlap with the DMP trajectories (blue lines), represent a sinusoidal trajectory with parameters: \(\mathbf{q}_{i}=[0.5,0.5]\) rad, \(\mathbf{q}_{A}=[0.1,0.3]\) rad, \(\omega_{0}=\pi\) rad/s (Equation (19)). Parameters of the rhythmic DMP: \(\alpha_{z}=10\), \(\beta_{z}=2.5\), \(\tau=1.0\), \(r=1.0\), \(\nu=40\), \(P=100\), \(c_{i}=2\pi(i-1)/N\), \(\nu=4\) for \(i=1,2,...,N\). Parameters of EDAs are identical to those in Figure 3. (E, F) Rhythmic movement in task-space. The black dashed lines represent a circular trajectory with parameters: \(\mathbf{c}=[0,1.4142]\)m, \(r_{0}=0.5\)m, \(\omega_{0}=\pi\) rad/s (Equation (20)). Parameters of EDAs: \(\mathbf{K}_{p}=90\mathbf{I}_{2}\) N/m, \(\mathbf{B}_{p}=60\mathbf{I}_{2}\) N-s/m. Higher stiffness and damping values than Figure 4 were chosen to reduce the tracking error. Parameters of the rhythmic DMP are identical to those in (A, B). Consistent with Figure 3 and Figure 4, for DMP, perfect tracking was achieved while for EDA, a non-negligible tracking error existed.

For \(\mathbf{q}_{des}(t)\) (respectively \(\mathbf{p}_{des}(t)\)) of DMPs, the sinusoidal trajectory of Equation (19) (respectively the circular trajectory of Equation (20)) was represented by rhythmic DMPs, and \(\mathbf{g}(t)\) of \(\mathbf{q}_{des}(t)\) (respectively \(\mathbf{p}_{des}(t)\)) was represented by discrete DMPs with \(\mathbf{g}_{0}\) discretely changing from \(\mathbf{q}_{i}\) to \(\mathbf{q}_{f}\) (respectively \(\mathbf{p}_{i}\) to \(\mathbf{p}_{f}\)). For \(\mathbf{q}_{0}(t)\) of EDAs, a minimum-jerk trajectory (Equation (11)) was combined with a sinusoidal trajectory (Equation (19)):

\[\begin{split}\mathbf{q}_{0}(t)&=\mathbf{q}_{0,sub}(t)+\mathbf{q}_{0,osc}(t)\\ \mathbf{q}_{0,sub}(t)&=\begin{cases}\mathbf{0}&0\leq t<t_{off}\\ \mathbf{q}_{i}+(\mathbf{q}_{f}-\mathbf{q}_{i})f_{MJT}(t)&t_{off}\leq t<T+t_{off}\\ \mathbf{q}_{f}&T+t_{off}\leq t\end{cases}\\ \mathbf{q}_{0,osc}(t)&=\mathbf{q}_{A}\sin(\omega_{0}t)\\ f_{MJT}(t)&=10\Big(\frac{t-t_{off}}{T}\Big)^{3}-15\Big(\frac{t-t_{off}}{T}\Big)^{4}+6\Big(\frac{t-t_{off}}{T}\Big)^{5}\end{split}\]

In this equation, \(t_{off}>0\) is a time offset for the submovement. For \(\mathbf{q}_{des}(t)\) of DMPs, the sinusoidal trajectory was represented by rhythmic DMPs, and \(\mathbf{g}(t)\) of \(\mathbf{q}_{des}(t)\) was represented by discrete DMPs with \(\mathbf{g}_{0}\) discretely changing from \(\mathbf{q}_{i}\) to \(\mathbf{q}_{f}\). For \(\mathbf{p}_{0}(t)\) of EDAs, a minimum-jerk trajectory (Equation (11)) was combined with a circular trajectory (Equation (20)):

\[\begin{split}\mathbf{p}_{0}(t)&=\mathbf{p}_{0,sub}(t)+\mathbf{p}_{0,osc}(t)\\ \mathbf{p}_{0,sub}(t)&=\begin{cases}\mathbf{0}&0\leq t<t_{off}\\ \mathbf{p}_{i}+(\mathbf{p}_{f}-\mathbf{p}_{i})f_{MJT}(t)&t_{off}\leq t<T+t_{off}\\ \mathbf{p}_{f}&T+t_{off}\leq t\end{cases}\\ \mathbf{p}_{0,osc}(t)&=[r_{0}\cos(\omega_{0}t),r_{0}\sin(\omega_{0}t)]\end{split}\]

As shown in Figure 9, both approaches successfully produced a combination of discrete and rhythmic movements in joint-space and task-space. However, since DMPs separate the canonical system and nonlinear forcing terms for discrete and rhythmic movements (Section 2.1.1, 2.1.2), merging the two movements was not straightforward.
DMPs circumvented this issue by assigning the time-changing goal \(\mathbf{g}(t)\) of the rhythmic DMP to be the output of a discrete DMP without a nonlinear forcing term input. Nevertheless, this resulted in \(\mathbf{g}(t)\) that followed the response of a second-order linear system (Figure 9). On the other hand, for EDAs, given a single impedance operator, discrete and rhythmic movements were directly combined at the level of the virtual trajectory (i.e., the forward-path dynamics) (Figure 2). With modest parameter tuning, the discrete and rhythmic movements used in Section 3.2.3 and Section 3.6.3 were reused and combined. This approach intuitively provides convenience in practical implementation and also emphasizes the modularity of EDAs at a kinematic level. Moreover, the trajectory of the discrete movement need not be restricted to a response of a second-order linear system and can be freely chosen.

Figure 9: A combination of discrete and rhythmic movements in (A-F) joint-space and (G-J) task-space for Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.7.3). (C, E, I) Black filled lines represent the discrete change of \(\mathbf{g}_{0}\) of DMP. Black dotted lines represent \(\mathbf{g}(t)\), which follows a response of a (critically damped) second-order linear system. (D, F, J) Black dotted lines represent \(\mathbf{q}_{0}(t)\) and \(\mathbf{p}_{0}(t)\) for joint-space and task-space, respectively. (A-F) Parameters of the discrete and rhythmic movements in joint-space: \(\omega_{0}=\pi\) rad/s, \(\mathbf{q}_{A}=[0.1,0.3]\) rad, \(\mathbf{q}_{i}=[0.5,0.5]\) rad, \(\mathbf{q}_{f}=[1.5,1.5]\) rad. (G-J) Parameters of the discrete and rhythmic movements in task-space: \(\omega_{0}=\pi\) rad/s, \(r_{0}=0.3\)m, \(\mathbf{p}_{i}=[-0.47,0.9]\)m, \(\mathbf{p}_{f}=[0.53,0.9]\)m. In joint-space (respectively task-space), multiple discrete movements in opposite directions are generated by switching the values of \(\mathbf{q}_{i}\) (respectively \(\mathbf{p}_{i}\)) and \(\mathbf{q}_{f}\) (respectively \(\mathbf{p}_{f}\)), with time offset \(t_{off}=3.5i\)s, where \(i\) is the number of discrete movements. Parameters of discrete DMPs in both joint-space and task-space: \(\tau_{2}=1.0\), \(\alpha_{z}=10\), \(\beta_{z}=2.5\). Other parameters of DMP are identical to those in Figure 8. Parameters of EDA are identical to those in Figure 8. For EDA, given a single impedance operator, discrete and rhythmic movements can be directly combined at the level of the virtual trajectory.

### Sequence of Discrete Movements

We next consider designing a controller to generate a sequence of discrete movements planned in task-space coordinates. For this, we show how the controller generates a movement in response to a sudden change of goal location.

#### 3.8.1 Dynamic Movement Primitives

With the controller introduced in Section 3.3.1, an additional differential equation for the time-varying goal location \(\mathbf{g}(t)\) was added (Ijspeert et al., 2013):

\[\begin{split}\tau^{2}\ddot{\mathbf{p}}(t)&=-\alpha_{c}\beta_{c}\{\mathbf{p}(t)-\mathbf{g}(t)\}-\alpha_{c}\tau\dot{\mathbf{p}}(t)+\mathbf{f}(s(t))\\ \tau\dot{\mathbf{g}}(t)&=\alpha_{\varepsilon}\{\mathbf{g}_{0}-\mathbf{g}(t)\}\end{split} \tag{21}\]

In these equations, \(\alpha_{\varepsilon}\) is a positive constant; \(\mathbf{g}_{0}\) is a discontinuous change of the goal location. In other words, the goal location \(\mathbf{g}\) is first-order low-pass filtered.
This is used to avoid a discontinuous jump in the acceleration \(\ddot{\mathbf{p}}(t)\) of the transformation system. Note that both \(\mathbf{p}(t)\) and \(\mathbf{g}(t)\) comprise a third-order linear system with a nonlinear forcing term and \(\alpha_{\varepsilon}\mathbf{g}_{0}/\tau\) as inputs. While any positive value of \(\alpha_{\varepsilon}\) can be used, for \(\tau=1\) and \(\beta_{c}=\alpha_{c}/4\), \(\alpha_{\varepsilon}\) is often chosen to be \(\alpha_{\varepsilon}=\alpha_{c}/2\) such that the third-order linear system has three repeated eigenvalues (Nemec and Ude, 2012; Schaal et al., 2005).

Note that \(\mathbf{g}(t)\) has a closed-form solution. Without loss of generality, if the goal location changes from \(\mathbf{g}_{0}=\mathbf{g}_{old}\) to \(\mathbf{g}_{0}=\mathbf{g}_{new}\) at time \(t=0\), with initial condition \(\mathbf{g}(0)=\mathbf{g}_{old}\):

\[\mathbf{g}(t)=\mathbf{g}_{new}+(\mathbf{g}_{old}-\mathbf{g}_{new})\exp\left(-\frac{\alpha_{\varepsilon}}{\tau}t\right)\]

Hence, a sequence of finite submovements can be generated by discrete changes of the goal location from \(\mathbf{g}_{0}=\mathbf{g}_{old}\) to \(\mathbf{g}_{0}=\mathbf{g}_{new}\). While the first discrete movement can follow an arbitrary trajectory using Imitation Learning, the subsequent discrete movements cannot, but follow the motion of a stable third-order linear system converging to the corresponding \(\mathbf{g}_{0}\) value.

#### 3.8.2 Elementary Dynamic Actions

With the first-order task-space impedance controller (Equation (13)), a sequence of discrete movements is generated by sequencing multiple submovements (Flash and Henis, 1991):

\[\dot{\mathbf{p}}_{0}(t)=\sum_{i}\mathbf{v}_{i}\;\hat{\sigma}_{i}(t-t_{i})\]

The amplitude of the \(i\)-th basis function, \(\mathbf{v}_{i}\), is chosen to reach the goal of the \(i\)-th submovement, starting from the goal of the previous submovement. Note that each submovement can use a different basis function \(\hat{\sigma}_{i}\).

#### 3.8.3 Simulation Example

The simulation example in Figure 10 reproduced the experiment conducted in Flash and Henis (1991). Let the goal location of the first discrete movement be \(\mathbf{g}_{old}\). A new goal location, \(\mathbf{g}_{new}\), suddenly appeared at time \(t_{g}\). The task was to modulate the controller to eventually reach \(\mathbf{g}_{new}\). We assumed that the new goal position \(\mathbf{g}_{new}\) was immediately measured at time \(t_{g}\). The simulation example used the 2-DOF robot model of Section 3.2. The code script for the simulation is main_sequencing.py. For DMPs, the first discrete movement followed a minimum-jerk trajectory with goal location \(\mathbf{g}_{old}\).
The subsequent discrete movement was generated by discretely changing \(\mathbf{g}_{0}=\mathbf{g}_{old}\) to \(\mathbf{g}_{0}=\mathbf{g}_{new}\) at \(t=t_{g}\): \[\mathbf{g}_{0}(t)=\begin{cases}\mathbf{g}_{old}&0\leq t<t_{g}\\ \mathbf{g}_{new}&t_{g}\leq t\end{cases}\] For EDAs, the minimum-jerk trajectory was used for the basis function of each submovement \(\mathbf{p}_{0}(t)\): \[\mathbf{p}_{0}(t) =\mathbf{p}_{0,1}(t)+\mathbf{p}_{0,2}(t)\] \[\mathbf{p}_{0,1}(t) =\begin{cases}\mathbf{p}_{i}+(\mathbf{g}_{old}-\mathbf{p}_{i})f_{ HMT,1}(t)&0\leq t<T_{1}\\ \mathbf{g}_{old}&T_{1}\leq t\end{cases}\] \[\mathbf{p}_{0,2}(t) =\begin{cases}\mathbf{0}&0\leq t<t_{g}\\ (\mathbf{g}_{new}-\mathbf{g}_{old})f_{HMT,2}(t)&t_{g}\leq t<t_{g}+T_{2}\\ \mathbf{g}_{new}-\mathbf{g}_{old}&t_{g}+T_{2}\leq t\end{cases}\] \[f_{HMT,1}(t) =10\Big{(}\frac{t}{T_{1}}\Big{)}^{3}-15\Big{(}\frac{t}{T_{1}} \Big{)}^{4}+6\Big{(}\frac{t}{T_{1}}\Big{)}^{5}\] \[f_{HMT,2}(t) =10\Big{(}\frac{t-t_{g}}{T_{2}}\Big{)}^{3}-15\Big{(}\frac{t-t_{g}} {T_{2}}\Big{)}^{4}+6\Big{(}\frac{t-t_{g}}{T_{2}}\Big{)}^{5}\] In these equations, subscripts 1 and 2 denote the first and second submovements, respectively. With this \(\mathbf{p}_{0}(t)\), the controller introduced in Section 3.3.2 was used. As shown in Figure 10, for both approaches, \(\mathbf{p}(t)\) converged to the new goal location \(\mathbf{g}_{new}\). For DMPs, the subsequent discrete movements by design followed the motion of a third-order linear system. For EDAs, the subsequent submovements can use any basis functions, which thereby results in flexibility for determining the resulting motion. As a result, EDAs were able to reach the \(\mathbf{g}_{new}\) location faster than DMPs. Moreover, the modular property of EDAs at the kinematics level enabled a smooth integration of the second movement without any modification of the first one. ### Managing Kinematic Redundancy We next consider designing a controller to generate a goal-directed (or sequence of) discrete movement(s) of the end-effector for a kinematically redundant robot. By definition, kinematic redundancy occurs when a Jacobian matrix has a null space (Siciliano, 1990). Hence, infinitely many joint velocity solutions exist to produce a desired end-effector velocity. While kinematic redundancy provides significant challenges, additional control objectives can be achieved by exploiting kinematic redundancy. Examples include obstacle avoidance (Baillieul, 1986; Maciejewski and Klein, 1985), joint limit avoidance (Liegeois, 1977; Hjorth et al., 2020), and minimization of instantaneous power during movement (Klein and Huang, 1983). #### 3.9.1 Dynamic Movement Primitives For DMPs, a feedback controller is employed to manage kinematic redundancy. Multiple feedback control methods exist and can be divided into three categories: velocity-based control, acceleration-based control, and force-based control (Nakanishi et al., 2005, 2008). Within these methods, we used a "velocity-based control without joint-velocity integration" (Nakanishi et al., 2008; Pastor et al., 2009). Let \(\mathbf{p}_{des}(t)\) be the desired end-effector trajectory with duration \(T\) and \(\mathbf{p}_{des}(T)=\mathbf{g}\). 
By generating \(\mathbf{p}_{des}(t)\), \(\dot{\mathbf{p}}_{des}(t)\), \(\ddot{\mathbf{p}}_{des}(t)\) as shown in Section 3.3, a reference end-effector velocity \(\dot{\mathbf{p}}_{r}(t)\) and its corresponding reference joint velocity \(\dot{\mathbf{q}}_{r}(t)\) are defined: \[\dot{\mathbf{p}}_{r}(t)=\dot{\mathbf{p}}_{des}(t)+\mathbf{A}_{1}\big(\mathbf{p}_{des}(t)-\mathbf{p}(t)\big)\] \[\dot{\mathbf{q}}_{r}(t)=\mathbf{J}(\mathbf{q})^{+}\dot{\mathbf{p}}_{r}(t)=\mathbf{J}(\mathbf{q})^{+}\big\{\dot{\mathbf{p}}_{des}(t)+\mathbf{A}_{1}\big(\mathbf{p}_{des}(t)-\mathbf{p}(t)\big)\big\}\] In these equations, \(\mathbf{J}(\mathbf{q})^{+}\) denotes the Moore-Penrose pseudo-inverse, which is defined by \(\mathbf{J}(\mathbf{q})^{+}=\mathbf{J}(\mathbf{q})^{\mathrm{T}}\{\mathbf{J}(\mathbf{q})\mathbf{J}(\mathbf{q})^{\mathrm{T}}\}^{-1}\) (Penrose, 1955; Klein and Huang, 1983); \(\mathbf{A}_{1}\in\mathbb{R}^{3\times 3}\) (\(\mathbb{R}^{2\times 2}\) for a planar task) is a symmetric positive definite matrix. Note that for \(\dot{\mathbf{q}}_{r}(t)\), an additional term which projects an arbitrary joint velocity vector onto the null space of the Jacobian matrix \(\mathbf{J}(\mathbf{q})\) can be added (Liegeois, 1977). However, this additional term was omitted for brevity. Accordingly, the reference end-effector acceleration \(\ddot{\mathbf{p}}_{r}(t)\) and its corresponding reference joint acceleration \(\ddot{\mathbf{q}}_{r}(t)\) are defined: \[\ddot{\mathbf{p}}_{r}(t)=\ddot{\mathbf{p}}_{des}(t)+\mathbf{A}_{1}\big(\dot{\mathbf{p}}_{des}(t)-\dot{\mathbf{p}}(t)\big)\] \[\ddot{\mathbf{q}}_{r}(t)=\mathbf{J}(\mathbf{q})^{+}\big(\ddot{\mathbf{p}}_{r}(t)-\dot{\mathbf{J}}(\mathbf{q})\dot{\mathbf{q}}(t)\big)=\mathbf{J}(\mathbf{q})^{+}\big\{\ddot{\mathbf{p}}_{des}(t)+\mathbf{A}_{1}\big[\dot{\mathbf{p}}_{des}(t)-\dot{\mathbf{p}}(t)\big]-\dot{\mathbf{J}}(\mathbf{q})\dot{\mathbf{q}}(t)\big\}\] From these values, \(\mathbf{\tau}_{in}(t)\) is defined by: \[\mathbf{\tau}_{in}(t)=\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}_{r}(t)+\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}_{r}(t)-\mathbf{A}_{2}\big(\dot{\mathbf{q}}(t)-\dot{\mathbf{q}}_{r}(t)\big)\] In this equation, \(\mathbf{A}_{2}\in\mathbb{R}^{n\times n}\) is a symmetric positive definite matrix. Note that this controller, suggested in Nakanishi et al. (2008), is equivalent to the sliding-mode feedback controller introduced by Slotine and Li (1987). It was shown by Slotine and Li (1987) that \(\mathbf{p}(t)\) asymptotically converges to \(\mathbf{p}_{des}(t)\). Moreover, with this feedback controller, one can resolve kinematic singularity using a damped least-squares inverse (Section 3.3.3) (Nakamura and Hanafusa, 1986; Chiaverini et al., 1994). #### 3.9.2 Elementary Dynamic Actions For EDAs, kinematic redundancy of the robot manipulator can be managed by superimposing multiple mechanical impedances (Hermus et al., 2021; Verdi, 2019). 
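Before detailing the EDA approach, the DMP-side redundancy resolution above can be summarized in code. The sketch below is only illustrative: it uses a planar 3-link arm with unit link lengths, replaces the inertia matrix \(\mathbf{M}(\mathbf{q})\) and Coriolis matrix \(\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\) with placeholder values, and approximates \(\dot{\mathbf{J}}(\mathbf{q})\) by a finite difference; none of these choices are taken from the paper, only the structure of the control law is.

```python
import numpy as np

# Illustrative redundancy-resolution controller for a planar 3-link arm
# (2-D task, 3 joints). M and C are placeholders (identity / zero) so the
# example stays short; only the structure of the control law is shown.
L = np.ones(3)                                 # link lengths [m] (assumed)

def fwd_kin(q):
    c = np.cumsum(q)                           # absolute link angles
    return np.array([np.sum(L * np.cos(c)), np.sum(L * np.sin(c))])

def jacobian(q):
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def pinv(J):                                   # Moore-Penrose pseudo-inverse
    return J.T @ np.linalg.inv(J @ J.T)

def control_torque(q, dq, p_des, dp_des, ddp_des, A1, A2, J_prev, J, dt):
    p = fwd_kin(q)
    dp = J @ dq
    dJ = (J - J_prev) / dt                     # finite-difference J-dot
    dp_r = dp_des + A1 @ (p_des - p)           # reference task velocity
    dq_r = pinv(J) @ dp_r                      # reference joint velocity
    ddp_r = ddp_des + A1 @ (dp_des - dp)       # reference task acceleration
    ddq_r = pinv(J) @ (ddp_r - dJ @ dq)        # reference joint acceleration
    M = np.eye(3)                              # placeholder inertia matrix
    C = np.zeros((3, 3))                       # placeholder Coriolis matrix
    return M @ ddq_r + C @ dq_r - A2 @ (dq - dq_r)

q, dq = np.array([0.3, 0.4, 0.5]), np.zeros(3)
tau = control_torque(q, dq,
                     p_des=np.array([2.0, 1.0]), dp_des=np.zeros(2),
                     ddp_des=np.zeros(2), A1=10 * np.eye(2), A2=5 * np.eye(3),
                     J_prev=jacobian(q), J=jacobian(q), dt=1e-3)
print(tau)
```

With this DMP-side construction in mind, the EDA alternative follows.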
In detail, a first-order task space impedance controller (Equation (13)) can be superimposed with a joint-space controller that implements joint damping (Equation 10): \[\mathbf{Z}_{p}(t) =\mathbf{K}_{p}\{\mathbf{p}_{0}(t)-\mathbf{p}(t)\}+\mathbf{B}_{p }\big{\{}\dot{\mathbf{p}}_{0}(t)-\dot{\mathbf{p}}(t)\big{\}}\] \[\mathbf{Z}_{q}(t) =-\mathbf{B}_{q}\dot{\mathbf{q}}(t) \tag{22}\] \[\mathbf{\tau}_{in}(t) =\mathbf{J}(\mathbf{q})^{\mathrm{T}}\mathbf{Z}_{p}(t)+\mathbf{Z}_ {q}(t)\] As in Section 3.2.2 and Section 3.3.2, with constant symmetric positive definite matrices of \(\mathbf{K}_{p},\mathbf{B}_{p},\mathbf{B}_{q}\), a goal directed discrete movement can be generated by setting \(\mathbf{p}_{0}(t)\) to be a submovement which ends at goal location \(\mathbf{g}\). The stability of this controller is shown in Arimoto et al. (2005b, a); Arimoto and Sekimoto (2006); Lechner (2022), where an asymptotic convergence of \(\dot{\mathbf{q}}(t)\to\mathbf{\theta}\) and \(\mathbf{p}(t)\to\mathbf{g}\) is guaranteed for discrete movement \(\mathbf{p}_{0}(t)\). This also implies an asymptotic convergence of \(\mathbf{q}(t)\) to one of the infinite number of solutions that satisfies \(\mathbf{g}=\mathbf{h}(\mathbf{q}(t))\). Like the controller in Equation (13), this controller does not involve an inversion of the Jacobian matrix. Moreover, explicitly solving the inverse kinematics and an inverse dynamics model are not required. Hence, the approach remains stable near kinematic singularities. For the desired discrete movement, we used a damping joint-space impedance operator to reduce the joint motions in the nullspace of \(\mathbf{J}(\mathbf{q})\) (Equation (22)). Note that for repetitive rhythmic movements in task-space (Klein and Huang, 1983), this controller might result in non-negligible drift in joint space (Mussa-Ivaldi and Hogan, 1991). If this resulted in control problems, e.g., violation of joint limits, one can augment the damping impedance operator with a stiffness term \(\mathbf{K}_{q}(\mathbf{q}_{0}(t)-\mathbf{q}(t))\), which will eliminate the joint-space drift and enable a stable equilibrium configuration. For brevity, this was not considered here. Superimposing joint-space and task-space impedances can yield task conflicts, unless the virtual joint configuration Figure 10: A sequence of discrete movements for (A, B, D, E) Dynamic Movement Primitives (DMPs, blue) and (A, C, D, E) Elementary Dynamic Actions (EDAs, orange) (Section 3.8.3). (A) Trajectory of the end-effector position. (B, C) Time-lapse of the movement of the 2-DOF robot model. (D, E) Time vs. \(X\)- and \(Y\)-coordinates of the end-effector trajectory. (A, B, C) The first movement headed toward \(\mathbf{p}_{des}(t)\) (square grey marker), but at time \(t=t_{g}\) (D, E), the target switched location to \(\mathbf{g}_{move}\) (square black marker) which necessitated a second movement. Parameters of DMPs (for the first discrete movement): \(\mathbf{z}_{s}=1.0\), \(\mathbf{\tau}=1.0\). Other parameters are identical to those in Figure 4. Parameters of EDAs: \(\mathbf{\tau}_{1}=1.0\), \(\mathbf{\tau}_{2}=1.0\)s, \(\mathbf{\tau}_{s}=0.5\)s. \(\mathbf{g}_{old}=[-0.7,1.22]\)m, \(\mathbf{p}_{ox}=[0.8,1.72]\)m, \(\mathbf{p}_{i}=[0.0,0.52]\)m. Other parameters are identical to those in Figure 4. (D, E) For the new goal location \(\mathbf{g}_{move}\), EDA achieved faster convergence than DMP. For DMP the second movement followed a response of a third-order linear system. 
On the other hand for EDA, the basis function of the second movement can be freely chosen. Hence, faster convergence to \(\mathbf{g}_{move}\) was achieved for EDA. to which the joint stiffness is connected is defined at the desired goal location (Hermus et al., 2021). Often, the task conflict is resolved by using null-space projection methods, as suggested by Knaib (1995). However, it is important to note that the resultant controller violates passivity (Lachner, 2022). Alternatively, it has been shown that, with sufficiently large null-space dimension, the task conflict is minimized or even elminated (Hermus et al. (2021)). #### 3.9.3 Simulation Example Consider a 5-DOF planar serial-link robot model, where each link consists of a single uniform slender bar with mass and length of 1kg and 1m, respectively. With this robot model, we generated a single (or sequence of) discrete movement(s). As in Section 3.3.3, a minimum-jerk trajectory was used. For the sequence of discrete movements, the trajectories of Section 3.8.3 were used. The code scripts for the simulations are main_redundant_discrete.py for single discrete movement and main_redundant_sequencing.py for sequence of discrete movements. As shown in Figure 11, both approaches were able to achieve goal-directed discrete movements. For DMPs, the approach used a feedback controller with reference trajectories generated by DMPs to manage kinematic redundancy (Slotine and Li, 1991; Nakanishi et al., 2008). For EDAs, the controller simply reused the task-space impedance controller in Section 3.3.2 and combined it with the impedance controller (with \(\mathbf{K}_{p}=\mathbf{0}\)) of Section 3.2.2. As demonstrated in Section 3.5, this example again emphasizes the modular property of EDAs. ## 4 Discussion In the previous Sections, we presented detailed implementations of both DMPs and EDAs to solve eight control tasks. Here, we summarize the similarities and differences between the two approaches. Moreover, we briefly discuss how these two methods might be combined to exploit the advantages of both approaches. ### _Similarities between the Two Approaches_ DMPs and EDAs both stem from the idea of motor primitives. Hence, both approaches share the same principle -- using motor primitives as fundamental building blocks to parameterize a controller. DMPs parameterize the controller with a canonical system, nonlinear forcing terms, and transformation systems (Section 2.1). EDAs parameterize the controller with submovements, oscillations, and mechanical impedances (Section 2.2) Robot control based on motor primitives provides several advantages, and we presented eight examples. First, by parameterizing the controller with motor primitives, the approaches provide a high level of autonomy for generating dynamic robot behavior. Once triggered, the primitive behaviors "play out" without requiring intervention from higher levels of the control system. As a result, the computational complexity of the control problem is reduced. For instance, we showed that DMPs can be scaled to multi-DOF systems by synchronizing a canonical system with multiple transformation systems (Section 2.1.4). With Locally Weighted Regression of Imitation Learning, learning new motor skills is reduced to calculating the best-fit weights of the nonlinear forcing terms. The best-fit weights are learned by simple matrix algebra, which is computationally efficient (Section 3.2, 3.3). 
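As an illustration of that "simple matrix algebra", the sketch below fits the forcing-term weights of a single transformation system by Locally Weighted Regression, in the standard form of Ijspeert et al. (2013); the demonstrated trajectory (a minimum-jerk profile), the gains, and the basis placement are assumptions made for the example and are not the settings used in the paper.

```python
import numpy as np

# Locally Weighted Regression (LWR) for the DMP forcing-term weights,
# in the standard form of Ijspeert et al. (2013); demo and gains assumed.
alpha_z, beta_z, alpha_s, tau = 25.0, 25.0 / 4, 3.0, 1.0
N = 20                                          # number of basis functions

t = np.linspace(0.0, tau, 500)
y = 10 * (t / tau)**3 - 15 * (t / tau)**4 + 6 * (t / tau)**5   # min-jerk demo
dy = np.gradient(y, t)
ddy = np.gradient(dy, t)
y0, g = y[0], y[-1]

s = np.exp(-alpha_s * t / tau)                  # canonical system s(t)
c = np.exp(-alpha_s * np.linspace(0, 1, N))     # basis centers along s
h = 1.0 / np.diff(c)**2
h = np.append(h, h[-1])                         # basis widths (heuristic)
psi = np.exp(-h * (s[:, None] - c)**2)          # Gaussian bases, shape (T, N)

# Target forcing term recovered from the demonstration:
#   f_target = tau^2 * ddy - alpha_z * (beta_z * (g - y) - tau * dy)
f_target = tau**2 * ddy - alpha_z * (beta_z * (g - y) - tau * dy)

# One weighted least-squares solve per basis function:
#   w_i = (xi^T Gamma_i f_target) / (xi^T Gamma_i xi),  xi = s * (g - y0)
xi = s * (g - y0)
w = np.array([((xi * psi[:, i]) @ f_target) / ((xi * psi[:, i]) @ xi)
              for i in range(N)])
print("fitted weights:", np.round(w, 2))
```

Each weight is obtained from a single weighted least-squares solve, so the cost of fitting grows only linearly with the number of basis functions.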
In fact, it was reported that this computational efficiency of DMPs achieved control of a 30-DOF humanoid robot (Atkeson et al., 2000; Ijspeert et al., 2002; Schaal et al., 2007; Ijspeert et al., 2013). For EDAs, by parameterizing the controller with motor primitives, the process of acquiring and retaining complex motor skills is simplified by identifying a reduced set of parameters, e.g., the initial and final positions of a submovement (Section 3.8, 3.9). These computational advantages are particularly prominent in control tasks associated with high-DOF systems, e.g., manipulation of flexible, high-dimensional objects (Nah et al., 2020, 2021, 2023). Fig. 11: Managing kinematic redundancy using Dynamic Movement Primitives (DMPs, blue) and Elementary Dynamic Actions (EDAs, orange) (Section 3.9.3). (A, B, C) A single of discrete movement. (D, E, F) A sequence of discrete movements. The first movement headed toward \(\mathbf{g}_{old}\) (square grey marker), but at time \(t=t_{g}\), target switched location to \(\mathbf{g}_{new}\) (square black marker) which necessitated a second movement. (A, B) The dashed black lines are a minimum-jerk trajectory. (C, F) End-effector trajectories. (A, B, C) Parameters of the minimum-jerk trajectory: \(\mathbf{p}_{i}=[0,3]\text{m}_{i}\), \(\mathbf{g}=[3,3]\text{m}_{i}\), \(T=2.0\text{s}\). (D, E, F) Parameters of the minimum-jerk trajectories: \(\mathbf{p}_{i}=[-1.62,0.76]\text{m}_{i}\), \(\mathbf{g}_{old}=[-3.62,1.76]\text{m}_{i}\), \(\mathbf{g}_{new}=[2.38,3.26]\text{m}_{i}\), \(T_{1}=2.0\text{s}\), \(T_{2}=3.0\text{s}\), \(t_{g}=1.0\). Parameters of DMPs: \(\mathbf{A}_{1}=80\text{i}\), \(\mathbf{A}_{2}=100\text{i}_{5}\), \(\tau=T\). Other parameters are identical to those in Figure 3. Parameters of EDAs: \(\mathbf{k}_{p}=300\text{i}_{2}\) N/m, \(\mathbf{B}_{p}=100\text{i}_{2}\) N/m, \(\mathbf{B}_{q}=30\text{i}_{5}\) N-m-s/rad. Moreover, motor primitives offer a modular framework for robot control. By treating motor primitives as basic modules, acquisition or generation of new motor skill occurs at the level of modules or their combination (d'Avella, 2016). Once the modules are learned, one can generate a new repertoire of movements by simply combining or reusing the learned modules. As shown in several simulation examples, i.e, obstacle avoidance (Section 3.5), combination of discrete and rhythmic movements, (Section 3.7), sequencing discrete movements (Section 3.8), managing kinematic redundancy (Section 3.9), these tasks were simplified by the modular properties of DMPs and EDAs. This modular property provides strong adaptability and flexibility for robot control, as learning new motor skills by combining or reusing learned modules is intuitively easier than learning "from scratch." However, it is worth emphasizing that the details of the modular property are significantly different for the two approaches (Section 4.2.5). ### Differences between the Two Approaches #### 4.2.1 The Need for an Inverse Dynamics Model For torque-controlled robots, DMPs require an inverse dynamics model, whereas EDAs do not (Section 3.1). An inverse dynamics model can introduce practical challenges and constraints when applying DMPs to torque-controlled robots. In particular, acquiring accurate models of the robot can be challenging and time-consuming. However, the drawbacks associated with an inverse dynamics model can be dismissed for position control in joint-space. 
As a result, the application of DMPs in joint-space position-controlled robots is straightforward and efficient. Nevertheless, joint-space position control presents its own challenges when compared to torque-controlled methods. One obvious challenge is the problem of kinematic transformation between the robot's generalized coordinates and task-space coordinates (Hogan, 2022). In fact, these challenges were highlighted in the examples presented, e.g., the problem of inverse kinematics and kinematic singularity (Section 3.3) and managing kinematic redundancy (Section 3.9). Recall that these problems necessitate additional methods, e.g., damped least-squares inverse to manage kinematic singularity (Section 3.3) or sliding mode control to manage kinematic redundancy (Section 3.9). Note that the eight control methods to manage kinematic redundancy presented by Nakanishi et al. (2008) (which include the latter approach) assume feedback control using torque-actuated robots, not position-controlled robots. Perhaps more important, position control introduces instability for tasks involving contact and physical interaction. Robot control involving physical interaction requires controlling the interactive dynamics between the robot and environment (Hogan, 2022). For this, using position control turns out to be inadequate (De Santis et al., 2008). A position-actuated robot fails to provide the level of compliance needed to achieve safe physical interaction. Moreover, the interactive dynamics cannot be directly regulated independent of the environment. Instead of position control, we considered control methods using (ideal) torque-actuated robots to manage contact and physical interaction (Section 3.4). #### 4.2.2 Tracking Control In the absence of uncertainties and external disturbances, DMPs can achieve perfect trajectory tracking, both in task-space and joint-space coordinates (Section 3.2, 3.3, 3.6, 3.7, 3.9). Using Imitation Learning (Section 2.1.4), tracking a trajectory of arbitrary complexity can be achieved. DMPs also allow online trajectory modulation of the learned trajectory, which was shown in the obstacle avoidance example (Section 3.5). EDAs control neither position nor force directly (Section 2.2.4). Hence, non-negligible tracking error arises unless high values of mechanical impedances are employed (Section 3.3, 3.6). To achieve tracking control with EDAs for a given desired trajectory, an additional method to derive the corresponding virtual trajectory should be employed. For instance, trajectory optimization methods which calculate the time course of impedances and virtual trajectories that produce the desired trajectory might be employed. #### 4.2.3 Contact and Physical Interaction To guarantee robustness against contact and physical interaction, DMPs superimpose a joint-space PD controller on the feedforward torque command from the inverse dynamics model (Section 3.1). On the other hand, EDAs include mechanical impedances as a distinct class of primitives (Section 2.2.3). With an appropriate choice of mechanical impedances, the approach is robust against uncertainty and unexpected physical contact (Hogan, 2022). The dynamics of physical interaction can be directly controlled by modulating mechanical impedances (Section 3.4). By superimposing passive mechanical impedances, passivity is preserved (Section 3.9). Note that the equation of PD control used for DMPs (Equation (15)) is identical to a first-order joint-space impedance controller (Equation (10)). 
However, care is required: They are identical only if the robot actuators are ideal torque sources and the impedance is specified in joint space. Impedance control is more general than PD control and not limited to first-order joint-space behavior; by definition, mechanical impedance determines the dynamics of physical interaction at an interaction port, which may, in principle, be at any point(s) on the robot (Won et al., 1997). #### 4.2.4 Managing Kinematic Singularity and Redundancy For DMPs, control in task-space requires solving an inverse kinematics problem (Section 3.3). This introduces the challenges of managing kinematic singularity and kinematic redundancy. The latter can be resolved by using any of the multiple feedback control methods presented by Nakanishi et al. (2008), and for the example (Section 3.9.3) we used sliding mode control. Nevertheless, this requires feedback control based on an error signal. This introduces a non-negligible error, and instead of perfect tracking, asymptotic convergence is achieved. Moreover, the methods still involve a Jacobian (pseudo-)inverse. Consequently, an additional method to handle kinematic singularity should be employed. Finally, null-space projection methods violate passivity (Section 3.9.1), and advanced methods to guarantee the robot's stability might be needed (Dietrich et al., 2015; Lachner, 2022). For EDAs, explicitly solving the inverse kinematics is not required (Hogan, 1987). Seamless operation into and out of kinematic singularities is possible (Section 3.3.3). EDAs superimpose multiple mechanical impedances to manage kinematic redundancy (Section 3.9.3). Unlike null-space projection methods, passivity is preserved (Lachner, 2022; Hogan, 2022). #### 4.2.5 Modularity in Robot Control As discussed in the examples of Section 3 and Section 4.1, for both approaches motor primitives provide a modular control framework for robot control, which thereby simplifies multiple control tasks (Section 3.5, 3.7, 3.8). Nevertheless, it is important to note that the extent of modularity and its practical implications differ significantly between these two approaches. A clear distinction is evident when combining multiple movements. For DMPs, by design, discrete and rhythmic movements are generated by different DMPs. Hence, discrete and rhythmic movements cannot be simply superimposed (Section 3.7). Moreover, to sequence discrete movements, the goal location \(\mathbf{g}(t)\) of the previous discrete movement is modulated (Equation (21), Section 3.8). On the other hand, for EDAs, sequencing and/or combining multiple movements can be seamlessly conducted at the level of the virtual trajectory (Section 2.2.4). Recall that a combination of discrete and rhythmic movements was achieved by simply combining submovements and oscillations (Section 3.7). For sequencing discrete movements, the subsequent discrete movement was superimposed "without modifying" the previous movement (Section 3.8). Moreover, one can freely choose the basis function of a submovement, rather than being restricted to a response of a stable linear system (Figure 9). These properties provide a notable degree of simplicity and modularity, as individual motions can be separately planned and simply superimposed without further modification. The superposition principle of mechanical impedances enables breaking down complex tasks into simpler sub-tasks, solving each sub-task with a specific module, and simply combining these modules to solve the original problem (Section 3.5). 
Using mechanical impedances with this "divide-and-conquer" strategy, the overall complexity of the control problem can be significantly reduced. A task may be achieved by simply reusing the impedance controllers of component tasks without any modification. This modular property of EDAs is in contrast with DMPs. While the learned weights of the nonlinear forcing terms can be reused for a single DMP, multiple DMPs cannot be simply combined by merging the learned weights of different DMPs. ### Combining the Best of Both Approaches Despite these differences, both approaches may be combined to leverage their respective advantages and alleviate their limitations. EDAs are robust against uncertainty and have advantages for physical interaction. DMPs can easily learn and track trajectories of arbitrary complexity. A combination of both approaches may allow robustness against uncertainty and physical interaction, while enabling efficient learning and tracking of trajectories with arbitrary complexity. In fact, superimposing a low-gain PD controller on the feedforward torque command of DMPs (Section 3.4) can be regarded as an example of combining both approaches. Given an ideal torque-actuated robot, a first-order joint-space impedance controller (an example of an EDA) is added to a feedforward torque command based on DMPs. Robustness against uncertainty or physical contact can also be achieved by superimposing other mechanical impedances, e.g., a first-order task-space impedance controller (Equation (13)). A combination of both approaches may be achieved not only at the torque level, but also at the kinematic level. Imitation Learning of DMPs can be used to generate an EDA virtual trajectory with arbitrary complexity. An example of this approach is the "variable impedance control with policy improvement and path integral," introduced by Buchli et al. (2011). This approach demonstrated the potential of combining DMPs and EDAs to achieve a rich set of movements that are also robust against uncertainty and physical interaction. ## 5 Conclusion In this paper, we provided a detailed comparison of two motor-primitives approaches in robotics: DMPs and EDAs. Both approaches utilize motor primitives as fundamental building blocks for parameterizing a controller, enabling highly dynamic robot behavior with minimal high-level intervention. Despite this similarity, there are notable differences in their implementation. Using simulation, we delineated the differences between DMPs and EDAs through eight robot control examples. While DMPs can easily learn and track trajectories of arbitrary complexity, EDAs are robust against uncertainty and have advantages for physical interaction. Accounting for the similarities and differences of both approaches, we suggest how DMPs and EDAs can be combined to achieve a rich repertoire of movements that is also robust against uncertainty and physical interaction. In conclusion, control approaches based on DMPs, EDAs or their combination offer valuable techniques to generate dynamic robot behavior. By understanding their similarities and differences, researchers may make informed decisions to select the most suitable approach for specific robot tasks and applications. ## Acknowledgements This work was supported in part by the MIT/SUSTech Centers for Mechanical Engineering Research and Education. MCN was supported in part by a Mathworks Fellowship. ## Declaration of Conflicting Interests The Authors declare that there is no conflict of interest.
2306.14875
A Fully Unsupervised Instance Segmentation Technique for White Blood Cell Images
White blood cells, also known as leukocytes, are a group of heterogeneously nucleated cells which act as salient immune system cells. They originate in the bone marrow and are found in blood, plasma, and lymph tissues. Leukocytes kill bacteria, viruses, and other kinds of pathogens that invade the human body through phagocytosis, which in turn confers immunity. A white blood cell count can reveal camouflaged infections and warn doctors about chronic medical conditions such as autoimmune diseases, immune deficiencies, and blood disorders. Segmentation plays an important role in the identification of white blood cells (WBCs) in microscopic image analysis. The goal of segmentation in a microscopic image is to divide the image into distinct regions. In this paper, we propose a novel instance segmentation method for segmenting WBCs, containing both the nucleus and the cytoplasm, from bone marrow images.
Shrijeet Biswas, Amartya Bhattacharya
2023-06-26T17:44:36Z
http://arxiv.org/abs/2306.14875v2
# A Fully Unsupervised Instance Segmentation Technique for White Blood Cell Images ###### Abstract White blood cells, also known as leukocytes are group of heterogeneously nucleated cells which act as salient immune system cells. These are originated in the bone marrow and are found in blood, plasma, and lymph tissues. Leukocytes kill the bacteria, virus and other kind of pathogens which invade human body through phagocytosis that in turn results immunity. Detection of a white blood cell count reveal camouflaged infections and warn doctors about chronic medical conditions such as autoimmune diseases, immune deficiencies, and blood disorders. Segmentation plays an important role in identification of white blood cells (WBC) from microscopic image analysis. The goal of segmentation in a microscopic image is to divide the image into different distinct regions. In our paper, we tried to propose a novel instance segmentation method for segmenting the WBCs containing both the nucleus and the cytoplasm, from bone marrow images. computer vision; medical imaging; unsupervised learning; healthcare ## I Introduction In medical imaging, identification of White Blood Cells (WBCs) also called leukocytes is an important step for the diagnosis of different diseases such as leukemia, Myeloplastic syndromes (MDS), Acquired Immunodeficiency Syndrome (AIDS), and other immunological syndromes. WBCs are a part of the human immune system, which exists throughout our entire body, including bone marrow and blood. The number and types of WBCs present in a body can serve as an indication to several health conditions and diseases, therefore; the count of different types of WBCs called differential counting plays a major role in the determination of the health condition of a patient [1]. The traditional method of detecting WBCs involves a trained expert who uses a microscope to select an area of interest from a bone marrow slide, then manually detects and classifies the different White Blood Cells present in that region of the slide. Performing all these steps manually is very difficult and tedious even for a trained expert; therefore an unsupervised method for segmentation of WBCs from bone marrow slides is highly desirable as it reduces the overall cost and required time to estimate the White blood cell differential count [2]. To ease out tedious and time consuming process of manual detection of WBCs, various automated / partially automated methods have been proposed over the past decade. Few automated approaches which were adopted in the laboratories, used tools such as automatic segmenting and counting machines, flow cytometry etc, to detect WBCs. To improve the quality of detection of these tools, pattern recognition and image processing techniques can be augmented which will enable these tools to detect and count WBCs qualitatively rather than quantitatively [3][4]. Majority of these segmentation methods are targeted towards the application in peripheral blood rather than bone marrow. White blood cells in bone marrow are much denser compared to those in peripheral blood. Also, immature WBCs are generally seen only in bone marrow [5]. Thus segmenting and classifying WBCs in bone marrow is more difficult and complex compared to doing the same for peripheral blood. Therefore, segmentation of WBCs from bone marrow slides plays a very important role in the process of automated / partially automated differential counting of WBCs. 
Segmentation of an image is the process of dividing the image into different connected regions based on their features and properties. Segmentation techniques can be broadly classified into two major categories, semantic segmentation and instance segmentation. In semantic segmentation, different instances of same object are considered to be the part of the same region of the image, whereas in instance segmentation different instances of same objects are identified as different regions of the images. Detecting each individual WBC as a separate entity is a challenging task of instance segmentation. Each individual slide image may contain a variety of white blood cells which are present in different stages of maturity. As a result their nucleus and cytoplasm may differ in shape, color, density, texture and granularities [1]. Also individual WBCs present in the slide may have overlapping cytoplasm or nucleus which makes it difficult to determine the actual boundary between two or more WBCs. Therefore, automated detection of each instance of WBCs present in the bone marrow slides image becomes very challenging. Over the years several WBC segmentation methods, both supervised and unsupervised, have been proposed. Initially, researchers used traditional image processing techniques such as color- space based thresholding, mathematical morphology, space scale analysis, etc to perform the segmentation process [6][7][8][9][10][11]. However, these techniques required manual intervention. With the advent of machine learning, researchers shifted their focus from traditional image processing methods to various machine learning models with the salient machine learning models being [12][13][14], these models required minimal manual intervention however, segmentation of cytoplasm and overlapping WBCs remained a problem. With the emergence of deep learning, researchers began to employ various new deep learning based models [15][16]. These deep learning based segmentation processes required data annotation and labeling for the purpose of generation of ground truth mask which severs as a class label for the supervised learning process. Data annotation and generation of ground truth masks can be very tedious and time-consuming, also it is not possible to generate a ground truth mask for every possible type of WBCs as they might have an infinite number of variations depending on their shape, size, density, color, texture, and granularity. All these methods require some amount of manual intervention without which the quality output of the segmentation process deteriorates. In this paper, we propose a fully unsupervised technique for instance segmentation. To test the efficacy of the system we have applied it for the segmentation and detection of WBCs form bone marrow slide images. The approach is based on a novel combination of color space based thresholding, K-Means Clustering followed by Watershed algorithm. The proposed method requires no manual intervention, there is no requirement for the generation of ground truth binary mask which is essential in majority of deep learning based segmentation methods. Also, the proposed model is able to detect and segment out each WBC present in the slide image including WBCs with overlapping boundaries, and output each WBC as an individual cropped image. ## II Related Works Segmentation techniques for biomedical images have been an important topic of research in the last two decades. 
Over the years, approaches have been made with the help of image processing, classic machine learning, and deep learning techniques. Earlier works include the works of Cseke et. al 1992 [10] where authors used a method based on the Otsu Thresholding methods in order to segment the WBCs which helped the process of automatic thresholding of images for segmentation of WBCs. Shitong et.al 2006 [6] proposed a novel image processing based model for the segmentation of cells that are widely separated with the help of thresholding techniques with the subsequent application of morphological operations as well as the concept of Fuzzy cellular Network. Dorini et. al 2012 [8] devised a method for nucleus as well as cytoplasm segmentation with the help of scale space analysis along with the use of morphological operations on the images. Dorini et. al, 2013 [8] in his later work provided a novel method based on the Self Dual Multiscale Morphological Toggle(SMMT) algorithm along with the Watershed algorithm for segmenting nucleus and the cytoplasm separately. The authors also provided a method based on granulometric analysis combined with morphological operations for the same. Recent image processing based segmentations methods were implemented in the works of Safuan et. al 2017 [9] where the authors provided an analysis of the performance of different color channels namely RGB,HSV and CMYK to generate the number of WBCs detected. However the research work failed to discuss about providing a method for the segmentation of cytoplasm leading to the decrease of accuracy. The use of various colour channels plays an important role for the segmentation job, the extensive use of which can be seen from the work done by Li et. al, 2016 [11] where the authors used the combination of RGB and HSV colour channel images along with the use of dual thresholding technique in order to segment WBCs from Acute Lymphoblastic Leukemia images. With the advent of machine learning various algorithms were used. K Means Clustering algorithm was seen to be extensively used along with the application of various other methods on top of the algorithm. Ghane et al. 2017 [13] proposed a method based on K Means Clustering algorithm. From the images of the dataset used by the authors, a clear distinction between 3 clusters namely, nucleus, cytoplasm and background, can be observed which were segmented with the help of the clustering algorithm. The segmented regions of the nucleus acted as a mask on which a modified watershed algorithm was used for the segmentation of nucleus as well as the cytoplasm region. The method also dealt with the problem that occurred in the cases of overlapping cells. The authors namely implemented the method in 3 phases wherein at the first stage the author segmented the WBC region from the images, in the second stage nucleus was specifically segmented using image processing techniques like morphological operations and later in the final stage the authors solved the problem of overlapping cells which both involved detection and separation of the cells. Another use of K Means Clustering for segmentation of WBCs can be seen in the works of Sarrafzadeh [17] where the authors used the K Means Clustering algorithm on the LAB color space channel which involved the assumption that the elements of the bone marrow slide images should belong to 3 clusters. And each of the nucleus, cytoplasm and background can be mapped to one of them. Zheng et. al 2018 [18] proposed a self supervised approach for segmentation of WBCs. 
The proposed model consists of two major modules; 1) an unsupervised initial segmentation, which provides a rough segmentation result which is used to train the second part of the model, 2) an SVM classifier which helps to improve the initial segmentation results. Nasir et. al 2011 [19] provided a method based on the application of K Means Clustering on the hue channel image which solved the problem of segmenting the nucleus only from the images. Along with the application of various machine learning algorithms, various deep learning techniques have already been applied for the task of cell segmentation. Initial stages of the work involves the application of U-Net [20] where Fully Connected Convolutional Neural Networks has been used for the segmentation task. The algorithm can applied for various supervised semantic segmentation task which involves the expensive process of labelling every images. This can also be seen in the application of Instance Segmentation techniques like Mask R CNN [21]. Although the methods mentioned solved the problem for segmenting the WBCs from the bone marrow images, it required manual intervention. The image processing techniques required minimal human intervention which we have tried to avoid in our method in this paper. Moreover the existing deep learning based cell segmentation techniques involve the generation of labels which can be quite an expensive task. We tried to solve this problem by providing a fully unsupervised technique for the segmentation of WBCs. ## III Methods and Materials ### _Data_ Data was provided by the Nightingale Hospital, Kolkata, West Bengal, India. There were 200 images shared which were bone-marrow slide images each containing White Blood Cells (WBCs). Each WBC consists of both nucleus and cytoplasm. Sample images have been shown in 5,13, 14. The images were taken under a microscope for patients having Myelodysplastic Syndrome(MDS) as well as healthy patients. ### _Methods_ In our problem, the main task was to segment all the White Blood Cells(WBCs) present in the bone marrow slide images. This involved developing a novel unsupervised instance segmentation technique that can automatically segment each and every WBC irrespective of its type from every bone marrow slide image. Our proposed algorithm mainly involves three main stages. The first stage involved segmentation of the region of interest(ROI) or the region denoting the presence of WBCs on the image, done using a semantic segmentation technique. This process helped us to create initial semantic masks of all the WBCs to be used at a later stage. After the image was processed in the first stage, it was passed through the second stage which involved separating the regions of cytoplasm, the background and the nucleus. For this stage K-Means clustering method was applied on the semantic masks in the first stage. As there were mainly three clusters present in each of the images, corresponding to the background, nucleus and cytoplasm respectively, the value of K was set to 3. And finally in order to get the instance segmentation, the watershed algorithm was used at the final stage. The watershed algorithm separated out the cells which are packed together and tough to separate using trivial image processing algorithms. Each of the three stages has been discussed in detail, below. #### Iii-B1 Stage1: Semantic Segmentation The first stage consisted of segmenting the ROIs which represented all the WBCs along with their respective cytoplasms. 
The goal was accomplished by performing operations in three steps. Initially color-space transformation was used, followed by thresholding operation. And finally after applying the morphological operations the desired result was obtained. In the first step, initially the images present in RGB(Red Green Blue) color space were transformed into CMYK(Cyan Magenta Yellow Black) color space shown in 1. These images in CMYK color space were used for the desired segmentation task since the contrast of the WBCs in the Y component of the image was minimum while in the M component the contrast was observed to be maximum. For transforming the images from RGB color space to CMYK color space equations 1 to 7 were used. \[R^{{}^{\prime}}_{i}=\frac{R_{i}}{255} \tag{1}\] \[G^{{}^{\prime}}_{i}=\frac{G_{i}}{255} \tag{2}\] \[B^{{}^{\prime}}_{i}=\frac{B_{i}}{255} \tag{3}\] \[K_{i}=1-max(R^{{}^{\prime}}_{i},G^{{}^{\prime}}_{i},B^{{}^{\prime}}_{i}) \tag{4}\] \[C=\frac{1-R^{{}^{\prime}}-K}{1-K} \tag{5}\] \[M=\frac{1-G^{{}^{\prime}}-K}{1-K} \tag{6}\] \[Y=\frac{1-B^{{}^{\prime}}-K}{1-K} \tag{7}\] where \(R_{i}\), \(G_{i}\), \(B_{i}\) are the red, green and blue value of a pixel in RGB color space, and the equivalent pixel in CMYK color space are C, M, Y, K Histogram Equalization was applied on the images in CMYK color space, followed by contrast stretching. Since, in the Y component of the image, the contrast of the WBCs were minimum, contrast stretching made the regions, where WBCs were absent, have high contrast values thus making the regions containing WBCs distinguishable from the rest. After the previous operations were performed, binary thresholding was applied on the resulting image followed by morphological closing which helped in obtaining the regions where WBCs were present. However in some of the cases, the previous operations led to cytoplasms getting removed from the images. In order to solve the issue, the M component was used. The M component had the maximum contrast value, in the regions where cytoplasm were present. Binary thresholding was applied on this component followed by morphological closing in order to obtain the cytoplasmic region. Now bitwise AND operation was used on these two images, one showing WBCs and the other showing the cytoplasm to get an image having both the cytoplasm as well as the nucleus. And finally the color space was changed again to RGB which has been shown in 2. This image was passed to stage 2. #### Iii-B2 Stage2: Application of K-Means Clustering The semantic mask obtained after the operations performed on the first stage, was used as an input to this stage. Here the goal was to separate out each of the components present in the image. The components corresponding to the background, the nucleus and the cytoplasm were considered as three clusters. Before using the K-Means clustering algorithm on the image, the image was transformed from the RGB color space to L*A*B* color space. In this color space the intensity was represented by the L* color channel. This image can clearly discriminate the nuclei from the rest of the image since the lightness value of the nuclei is more than the other regions. The "a" component of the image was selected and contrast stretching was applied to create more difference in pixel values between the region corresponding to the nucleus and the region corresponding to the cytoplasm. The contrast stretched image was used as the input to the K-Means algorithm and K was set to 3 as discussed before. 
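A compact sketch of Stage 1 and the Stage 2 clustering set-up is given below. It is written with OpenCV purely for illustration; the paper does not specify an implementation, and the Otsu thresholds, kernel sizes, normalization steps, and the file name used here are assumptions standing in for the unspecified parameters.

```python
import cv2
import numpy as np

# Sketch of Stage 1 (semantic mask from the Y and M channels) and the
# Stage 2 clustering set-up described above. Thresholds, kernel sizes,
# and normalization details are assumptions, not the paper's settings.
def stage1_mask(bgr):
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    k = 1.0 - rgb.max(axis=2)                    # K channel (Eq. 4)
    denom = np.maximum(1.0 - k, 1e-6)
    m = np.clip((1.0 - rgb[..., 1] - k) / denom, 0.0, 1.0)   # M channel (Eq. 6)
    y = np.clip((1.0 - rgb[..., 2] - k) / denom, 0.0, 1.0)   # Y channel (Eq. 7)

    y_eq = cv2.equalizeHist((y * 255).astype(np.uint8))      # equalize
    y_eq = cv2.normalize(y_eq, None, 0, 255, cv2.NORM_MINMAX)  # contrast stretch
    # WBCs have low contrast in Y, so invert the threshold to keep them.
    _, nuc = cv2.threshold(y_eq, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, cyto = cv2.threshold((m * 255).astype(np.uint8), 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    nuc = cv2.morphologyEx(nuc, cv2.MORPH_CLOSE, kernel)
    cyto = cv2.morphologyEx(cyto, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(nuc, cyto)            # WBC region: nucleus + cytoplasm

def stage2_labels(bgr, mask):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    a = cv2.normalize(lab[..., 1], None, 0, 255, cv2.NORM_MINMAX)  # stretched "a"
    a = cv2.bitwise_and(a, a, mask=mask)
    samples = a.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 0.5)
    _, labels, centers = cv2.kmeans(samples, 3, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.reshape(a.shape), centers      # 3 clusters: nucleus/cytoplasm/background

img = cv2.imread("slide.png")                    # hypothetical file name
if img is not None:
    labels, centers = stage2_labels(img, stage1_mask(img))
```

The returned three-cluster label image is then disambiguated with the IoU criterion and passed to the watershed stage, as described next.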
The output of the image after using the K-Means clustering algorithm has been shown in 3. However the K-Means algorithm, although can segment the three regions separately, it doesn't classify and has an unique class no. for each of the regions containing nucleus, cytoplasm and background. This might lead to inconsistencies which can prove to be problematic at the later stage of the algorithm. This issue was solved using the concept of Intersection over Union(IoU) whose equation has been given in the equation below. IoU = Intersection of two areas/ Union of the two areas Initially a basic nucleus mask was generated by converting the image into a grayscale image and binary thresholding was applied. After that the IoU was calculated using the mask corresponding to the region containing nucleus and each of the three clusters. The cluster which had the maximum and IoU value was considered as the region of nucleus and the region of background respectively. While the other region was considered to be the region containing the cytoplasm. In this way the issue of inconsistency between the clusters was solved. #### Iii-B3 Stage 3: Traditional Watershed Algorithm This was used in the final stage of our proposed methodology and it helped us make instance segmentation, separating out each of the WBCs with a clear boundary. For this stage, the images having the nucleus cluster and the background cluster, obtained from the previous stage, were used as an input. Watershed algorithm requires the presence of at least one marker or seed values, which are obtained by applying distance transform on the cluster containing the nucleus, inside each of the objects present in the image, including the background as a separate object. Once the seed values are generated, each object present in the image is marked. These seeds can then be grown using a morphological watershed transformation. During this process the seeds will touch each other in the cases where the WBCs are in direct contact with each other. Whenever this happens, the region where there is direct contact between the two seeds is considered as the boundary between the two or more seeds or WBCs. The results of this process has been shown in 4 ### _Experiments_ After the acquisition of the data was successful, the above-mentioned technique was applied to the given images, and the Fig. 1: The C,M,Y and K channels of the original RGB image. The RGB image was changed to CMYK using the formulas mentioned. It helped in our task. Fig. 3: The results obtained after applying the K Means clustering algorithm. The K was set to 3 and each of the cluster has been shown separately. Fig. 2: The various stages of operations. Initially the histogram equalized image is obtained. After that we obtained the thresholding applied and enhanced Y component image which helps in nucleus detection. After that in order to obtain the cytoplasm we used the M component. Finally we obtained the masks required. outcomes were observed. Each of the images was taken one at a time and the method was applied to each of the images. After the proposed method was applied, images were obtained showing the boundaries of each of the cells separately. This process was repeated for all of the images in our dataset. And finally, as a measure of the performance, all the IOUs were calculated. ### _Results_ The proposed framework was used in order to segment the cells from 200 bone marrow slide images. Sample slide image has been attached below 5. 
As mentioned we applied the framework in three different stages. First, we applied the semantic segmentation technique, that helped us to obtain the corresponding segmentation mask for the bone-marrow slide image. The sample results has been shown in 6. After obtaining the semantic segmentation masks of the slide image, we apply the K-Means Clustering method as discussed above and also the Watershed technique. The result obtained after applying these methods has been shown in 7. We finally overlay the contours with the original image to obtain the final results i.e segmenation of each and every cell present in the bone-marrow slide image as shown in 8. Similarly 9 10 11 shows sample results obtained after applying our proposed framework on different slide images. After calculating the Intersection over Union(IOU) with the manually masked images, the IOU was found to have an average of 0.85 or 85%, calculated over 200 images. ### _Conclusion_ Instance Segmentation of WBCs from bone marrow images is a challenging task when the WBCs are densely populated in a single bone marrow slide image. Using supervised Fig. 5: This shows a sample of a bone marrow slide image. It contains normal as well as dysplastic cells. Fig. 6: The above image shows the semantic segmentation masks generated. This proves crucial for the next stages involved. Here we can clearly differentiate the background with the cells. Fig. 7: This shows the contours of each and every segmented cell present in the bone-marrow slide image. Fig. 8: The result obtained after performing the techniques mentioned in our proposed framework. Fig. 10: The result obtained after performing the techniques mentioned in our proposed framework. Fig. 9: The result obtained after performing the techniques mentioned in our proposed framework. Fig. 11: The result obtained after performing the techniques mentioned in our proposed framework. algorithms to solve this problem, generally require a large number of image samples along with their labels. Generating the correct labels manually becomes a tough task and any mistake in the label can lead to an incorrect solution. Here in our paper we present a multi stage unsupervised instance segmentation model which is capable of segmenting every WBCs containing both the nucleus and the cytoplasm from the bone marrow slide images and separating each of the cells with a boundary. The solution involves three separate steps of creating a mask, using K-Means Clustering algorithm and then using Watershed algorithm to segment each of the WBCs. Our solution performs better than the existing unsupervised as well as the supervised methods that have been proposed earlier and also requires minimal number of parameters thus computationally efficient.
2305.13492
How densely can spheres be packed with moderate effort in high dimensions?
We generate non-lattice packings of spheres in up to 22 dimensions using the geometrical constraint satisfaction algorithm RRR. Our aggregated data suggest that it is easy to double the density of Ball's lower bound, and more tentatively, that the exponential decay rate of the density can be improved relative to Minkowski's longstanding 1/2.
Veit Elser
2023-05-22T21:19:21Z
http://arxiv.org/abs/2305.13492v2
# How densely can spheres be packed with moderate effort in high dimensions? ###### Abstract We generate non-lattice packings of spheres in up to 22 dimensions using the geometrical constraint satisfaction algorithm RRR. Our aggregated data suggest that it is easy to double the density of Ball's lower bound, and more tentatively, that the exponential decay rate of the density can be improved relative to Minkowski's longstanding \(1/2\). ## I Introduction The packing of congruent spheres in Euclidean space has important practical implications and is a seemingly unbounded source of theoretical questions. A major recent success was the discovery by Viazovska [1] and coworkers [2] of modular functions that make the Cohn-Elkies linear programming density upper bound [3; 4] sharp for the \(E_{8}\) and Leech lattices, proving that these lattice-based schemes for packing spheres are the densest possible in eight and 24 dimensions. By contrast, the subject of lower bounds on achievable densities is much murker and progress seems to have stalled. Minkowski [5] was the first to find a lower bound for general dimension \(n\) that was superior to the density achieved by any of the known packing schemes available for arbitrary \(n\). For example, a simple scheme is to center the spheres on the \(n\)-dimensional checkerboard lattice \(D_{n}\), the subset of integer lattice points with even coordinate sum. This gives the optimal density for \(n=3\)[6] and is also believed to be the best possible for \(n=4\) and \(5\). On the other hand, the density \(\Delta\), or fraction of space covered by spheres, decays as \[\Delta(D_{n})\sim\frac{1}{\sqrt{4\pi n}}\left(\frac{e\pi}{n}\right)^{n/2}\;,\] which is much faster than Minkowski's bound whose leading behavior is \(2^{-n}\). Like Minkowski's result, recent advances are also based on lattices and have asymptotic densities \[c\,\frac{n}{2^{n}}\;,\] with improvements in the value of the constant \(c\). The current best bound, for general \(n\), is Ball's bound [7] \[\Delta>\Delta_{\rm B}=\frac{n-1}{2^{n-1}}\,\zeta(n)\;,\] corresponding to \(c=2\). Ball's result hinges on a lemma in Bang's proof of the "plank problem" [8] and corresponds geometrically to the transformed problem of custom-fitting a thin oblate ellipsoid in the integer lattice that avoids all lattice points except the origin. Though only the existence of the ellipsoid is established, and the corresponding packing is not explicitly constructed, the value \(c=2\) appears as a sharp estimate because the ellipsoid is constrained all over its surface. Vance [9] was able to further improve \(c\) when \(n\) is divisible by four and Venkatesh [10] found that \(c\) could be replaced by \(\log\log n\) for very special \(n\). These increasingly sophisticated bounds, all based on lattices, stand in stark contrast to a bound that makes no reference to lattices at all and can be proved in five sentences. A set of sphere centers \(S_{n}^{*}\) is "saturated" for spheres of radius \(r\) if it is impossible to add another sphere, also of radius \(r\), without intersecting an existing sphere. This property implies all points not covered by a sphere are within distance \(2r\) of one of the sphere centers. By doubling all the sphere radii, all of these points will be covered as well. But this could not happen if \(\Delta(S_{n}^{*})<2^{-n}\), since doubling the radii increases each sphere volume by \(2^{n}\). 
We therefore know that \[\Delta(S_{n}^{*})\geq 2^{-n}\;.\] Like the lattice-based bounds, this construction is not constructive in a practical sense. On the other hand, it reveals that matching the leading asymptotic part of the sophisticated bounds is already achieved by a greedy algorithm. Information on where spheres can be placed is provided by the Voronoi diagram of the sphere centers already placed, something that can be locally updated in a sequential construction of a periodic saturated packing. The crudeness of saturated packings suggests that easy improvements on the lower bound should be possible just by dropping the lattice constraint. Theoretically this proposal is still difficult because no one knows how to even mildly enhance the density in a way that is also amenable to computations. Torquato and Stillinger (TS) [11; 12] have conjectured the existence of packings that have a particular limiting form of the sphere-center autocorrelation (pair distribution function) \(g_{2}\) in high dimensions. If such packings exist, then the dominant \(2^{-n}\) behavior of the density would be improved to \(b^{n}\), with \(b\approx 0.583\). The constrained optimization of \(g_{2}\) used by TS to obtain this \(b\) can be interpreted as an infinite-dimensional linear program (LP) dual to the LP used by Cohn and Elkies [3; 4; 13] to establish upper bounds on the density. In this setting the TS-conjectured lower bound is a rigorous lower bound on the upper bounds that can be achieved with the LP method. However, this "lower bound on the upper bounds" may well be above realizable densities if it turns out that packings with the conjectured \(g_{2}\) do not exist. To add perspective to the Torquato-Stillinger constant \(0.583\), we note that by the
2306.06281
Energy-Dissipative Evolutionary Deep Operator Neural Networks
Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator learning neural network. It is designed to seek numerical solutions for a class of partial differential equations instead of a single partial differential equation, such as partial differential equations with different parameters or different initial conditions. The network consists of two sub-networks, the Branch net and the Trunk net. For an objective operator G, the Branch net encodes different input functions u at the same number of sensors, and the Trunk net evaluates the output function at any location. By minimizing the error between the evaluated output q and the expected output G(u)(y), DeepONet generates a good approximation of the operator G. In order to preserve essential physical properties of PDEs, such as the Energy Dissipation Law, we adopt a scalar auxiliary variable approach to generate the minimization problem. It introduces a modified energy and enables an unconditional energy dissipation law at the discrete level. By taking the network parameters as functions of time t, this network can predict an accurate solution at any later time by feeding data only at the initial state. The data needed can be generated by the initial conditions, which are readily available. In order to validate the accuracy and efficiency of our neural networks, we provide numerical simulations of several partial differential equations, including heat equations, parametric heat equations, and Allen-Cahn equations.
Jiahao Zhang, Shiheng Zhang, Jie Shen, Guang Lin
2023-06-09T22:11:16Z
http://arxiv.org/abs/2306.06281v1
# Energy-Dissipative Evolutionary Deep Operator Neural Networks ###### Abstract Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator learning neural network. It is designed to seek numerical solutions for a class of partial differential equations instead of a single partial differential equation, such as partial differential equations with different parameters or different initial conditions. The network consists of two sub-networks, the Branch net, and the Trunk net. For an objective operator \(\mathcal{G}\), the Branch net encodes different input functions \(u\) at the same number of sensors \(y_{i},i=1,2,\cdots,m\), and the Trunk net evaluates the output function at any location. By minimizing the error between the evaluated output \(q\) and the expected output \(\mathcal{G}(u)(y)\), DeepONet generates a good approximation of the operator \(\mathcal{G}\). In order to preserve essential physical properties of PDEs, such as the Energy Dissipation Law, we adopt a scalar auxiliary variable approach to generate the minimization problem. It introduces a modified energy and enables unconditional energy dissipation law in the discrete level. By taking the parameter as a function of the time \(t\) variable, this network can predict the accurate solution at any further time with feeding data only at the initial state. The data needed can be generated by the initial conditions, which are readily available. In order to validate the accuracy and efficiency of our neural networks, we provide numerical simulations of several partial differential equations, including heat equations, parametric heat equations, and Allen-Cahn equations. A Article history: Operator Learning Evolutionary Neural Networks Energy Dissipative Parametric equation Scalar auxiliary variable Deep learning ## 1 Introduction Operator learning is a popular and challenging problem with potential applications across various disciplines. The opportunity to learn an operator over a domain in Euclidean spaces[1] and Banach spaces[2] opens a new class of problems in neural network design with generalized applicability. In application to solve partial differential equations(PDEs), operator learning has the potential to predict accurate solutions for the PDE by acquiring extensive prior knowledge [3; 4; 5; 6; 7; 8; 9; 10; 11]. In a recent paper[12], Lu, Jin, and Karniadakis proposed an operator learning method with some deep operator networks, named as DeepONets. It is based on the universal approximation theorem [13; 14; 15]. The goal of this neural network is to learn an operator instead of a single function, which is usually the solution of a PDE. For any operator \(\mathcal{G}\) on a domain \(\Omega\), we can define \(\mathcal{G}\) as a mapping from \(\Omega^{*}\rightarrow\Omega^{*}\) with \(\mathcal{G}(u)(y)\in R\) for any \(y\in\Omega\). \(\mathcal{G}(u)(y)\) is the expected output of the neural network, which is usually a real number. The objective of the training is to obtain an approximation of \(\mathcal{G}\), where we need to represent operators and functions in a discrete form. In practice, it is very common to represent a continuous function or operator by the values evaluated at finite and enough locations \(\{x_{1},x_{2},\cdots,x_{m}\}\), which is called "sensors" in DeepONet. The network takes \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) and \(y\) as the input. The loss function is the difference between the output \(q\) and the expected output \(\mathcal{G}(u)(y)\). 
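To make the data format concrete, the short sketch below assembles training triplets \((u(x_{1}),\ldots,u(x_{m}),\,y,\,\mathcal{G}(u)(y))\) for one input function; the sensor count, the domain, the random family of input functions, and the choice of \(\mathcal{G}\) as plain evaluation of \(u\) are illustrative assumptions.

```python
import numpy as np

# Sketch of how one set of training triplets (u(x_1..x_m), y, G(u)(y)) is
# assembled. Here the operator is simply evaluation of the input function,
# G(u)(y) = u(y); sensors, domain, and input-function family are assumed.
m, n_queries = 100, 10
sensors = np.linspace(0.0, 1.0, m)

def sample_u(rng):
    a, b, c = rng.uniform(-1, 1, 3)               # random input function
    return lambda x: a * np.sin(np.pi * x) + b * np.cos(2 * np.pi * x) + c

rng = np.random.default_rng(0)
u = sample_u(rng)
ys = rng.uniform(0.0, 1.0, n_queries)             # evaluation locations y
branch_input = u(sensors)                         # u sampled at the m sensors
targets = u(ys)                                   # G(u)(y) = u(y)
triplets = [(branch_input, y, t) for y, t in zip(ys, targets)]
print(len(triplets), triplets[0][0].shape)
```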
Generally, there are two kinds of DeepONet, the Stacked DeepONet and the Unstacked DeepONet. The Stacked DeepONet consists of \(p\) branch networks and one trunk network. The Unstacked DeepONet has the same single trunk network as the Stacked DeepONet, but it merges all the \(p\) branch networks into a single one. An Unstacked DeepONet therefore combines two sub-networks, the Branch net and the Trunk net. The Branch net encodes the input function \(u\) at some sensors, \(\{x_{i}\in\Omega\,|\,i=1,\cdots,m\}\). The output of the Branch net consists of \(p\) neurons, where each neuron can be seen as a scalar, \(b_{j}=b_{j}(u(x_{1}),u(x_{2}),\cdots,u(x_{m}))\), \(j=1,2,\cdots,p\). The Trunk net encodes some evaluation points \(\{y_{k}\in\Omega|k=1,\cdots,n\}\), while the output also consists of \(p\) neurons and each neuron is a scalar \(g_{j}=g_{j}(y_{1},y_{2},\cdots,y_{n})\), \(j=1,2,\cdots,p\). The evaluation points \(y_{i}\) can be arbitrary in order to obtain the loss function. The number of neurons at the last layer of the Trunk net and the Branch net is the same. Hence, the output of the DeepONet can be written as an inner product of \((b_{1},b_{2},\cdots,b_{p})\) and \((g_{1},g_{2},\cdots,g_{p})\). In other words, the relationship between the expected output and the evaluated output is \(\mathcal{G}(u)(y)\approx\sum_{j=1}^{p}b_{j}g_{j}\). The DeepONet is an application of the Universal Approximation Theorem for Operators, which was proposed by Chen & Chen [16]: **Theorem 1.1** (Universal Approximation Theorem for Operator).: _Suppose that \(\Omega_{1}\) is a compact set in \(X\), \(X\) is a Banach Space, \(V\) is a compact set in \(C(\Omega_{1})\), \(\Omega_{2}\) is a compact set in \(\boldsymbol{R}^{d}\), \(\sigma\) is a continuous non-polynomial function, and \(\mathcal{G}\) is a nonlinear continuous operator, which maps \(V\) into \(C(\Omega_{2})\). Then for any \(\epsilon>0\), there are positive integers \(M,N,m\), constants \(c_{i}^{k},\zeta_{k},\xi_{ij}^{k},\theta_{i}^{k}\in\boldsymbol{R}\), points \(\omega_{k}\in\boldsymbol{R}^{d},x_{j}\in\Omega_{1},i=1,\cdots,M\), \(k=1,\cdots,N,j=1,\cdots,m\), such that_ \[|\,\mathcal{G}(u)(y)-\sum_{k=1}^{N}\sum_{i=1}^{M}c_{i}^{k}\sigma\left(\sum_{j =1}^{m}\xi_{ij}^{k}u\left(x_{j}\right)+\theta_{i}^{k}\right)\cdot\sigma\left( \omega_{k}\cdot y+\zeta_{k}\right)|<\epsilon\] _holds for all \(u\in V\) and \(y\in\Omega_{2}\)._ For any time-dependent PDE, the training data take the form \((u,y,\mathcal{G}(u)(y))\), where \(u\) in its discrete form can be represented as \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) in the neural network. In the original paper, the authors used the classic FNN[17] as the baseline model. For dynamic systems, various network architectures are used, including residual networks[18], convolutional NNs (CNNs)[19; 20], recurrent NNs (RNNs)[21], neural jump stochastic differential equations[22] and neural ordinary differential equations[23]. The training performance is very promising: the network predicts accurate solutions of many nonlinear ODEs and PDEs, including a simple dynamic system, a gravity pendulum system, and a diffusion-reaction system. However, the training data need to be generated at each time step, so it is very expensive to train the network. For many initial value problems, there is no information about \(u(x,t)\) except at \(t=0\). 
It is very natural to raise a question: _Can we learn an operator for a class of time-dependent PDEs with only initial conditions?_ Inspired by the Evolutionary Deep Neural Network (EDNN)[24], it is more convenient to learn an operator at a fixed time instead of an operator with not only spatial variables but also a time variable. Without loss of generality, we can take the time variable \(t\) to be \(0\) in initial value problems. Once the operator at the initial time is obtained, many traditional numerical methods can be used to update the solution. More specifically, assuming that the initial condition operator has been trained well, we can consider the parameters of the Branch net and the Trunk net as functions of the time variable, as shown in Figure 1. More specifically, for a given initial value problem, \[\begin{cases}\dfrac{\partial u}{\partial t}=s(u)\\ u(x,0)=f(x),\quad x\in\Omega\end{cases} \tag{1}\] the objective is to approximate the operator \(\mathcal{G}:u\mapsto\mathcal{G}(u)\). The input is \(([u(x_{1}),u(x_{2}),\cdots,u(x_{m})],y,\mathcal{G}(u)(y))\), where \([x_{1},x_{2},\cdots,x_{m}]\) are the sensors and \(\mathcal{G}(u)(y)=f(y)\). The training process at the initial step is the same as for the DeepONet, so we can use the same architecture to train the initial condition operator. The output of the Branch net can be written as \(\mathbf{b}=\mathbf{b}(u(x_{1},0),u(x_{2},0),\cdots,u(x_{m},0))=\mathbf{b}^{\prime}(x_{1},x_{2},\cdots,x_{m};W_{1})\), where \(W_{1}\) are the parameters in the Branch net. The output of the Trunk net can be written as \(\mathbf{g}=\mathbf{g}(y;W_{2})\), where \(W_{2}\) are the parameters in the Trunk net. Once the network is trained, we regard the parameters as functions of \(t\), with \(W_{1}\) and \(W_{2}\) as the initial conditions of \(W_{1}(t)\) and \(W_{2}(t)\). By the architecture of the Unstacked DeepONet, we can write the solution at the initial time \(t_{0}=0\) as \[u(x,t_{0})\approx\sum_{j=1}^{p}b_{j}g_{j}=\mathbf{b}^{T}\mathbf{g}\text{ for any given initial condition }f(x) \tag{2}\] We do not need any more data to obtain the approximation of \(u(x,t_{1})\); \(u(x,t_{1})\) should be consistent with \(W_{1}(t_{1})\) and \(W_{2}(t_{1})\). Following the idea of numerical solvers for PDEs, it is easy to obtain \(W_{1}(t_{1})\) and \(W_{2}(t_{1})\) if \(\frac{\partial W_{1}}{\partial t}\) and \(\frac{\partial W_{2}}{\partial t}\) are known. The time derivative of the solution \(u\) can be written by the chain rule: \[\frac{\partial u}{\partial t}=\frac{\partial u}{\partial W}\frac{\partial W}{ \partial t} \tag{3}\] where \(W\) consists of \(W_{1}\) and \(W_{2}\). \(\frac{\partial W}{\partial t}\) can be obtained by solving a least squares problem. Once we get \(\frac{\partial W}{\partial t}\), we can use any traditional time discretization scheme to get \(W^{n+1}\) from \(W^{n}\). Fig. 1: Energy-Dissipative Evolutionary Deep Operator Neural Network. The yellow block represents the input at the sensors and the blue blocks represent the subnetworks. The green blocks represent the outputs of the subnetworks and also the last layer of the EDE-DeepONet. The difference between the stacked and unstacked EDE-DeepONet is the number of Branch nets. In the right minimization problem, the energy term \(r^{2}\) can be shown to be dissipative, i.e. 
\((r^{n+1})^{2}\leq(r^{n})^{2}\), where \(\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}^{n})}{\partial W_{1}^{n}}\gamma_{1}\mathbf{b}_{k}(W_{2}^{n})+\sum_{k=1}^{p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}(W_{2}^{n})}{\partial W_{2}^{n}}\gamma_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\right\|_{2}^{2}\). The choice of the traditional time discretization scheme depends on the specific problem. The Euler or Runge-Kutta methods are commonly used in the evolutionary network. We are going to introduce a method with unconditional energy dissipation, which is the Energy-Dissipative Evolutionary Deep Operator Neural Network (EDE-DeepONet). Many kinds of PDEs are derived from basic physical laws, such as Newton's Law, Conservation Laws and the Energy Dissipation Law. In many areas of science and engineering, particularly in the field of materials science, gradient flows are commonly employed in mathematical models[25; 26; 27; 28; 29; 30; 31; 32]. When approximating the solution of a certain PDE, it is desirable to satisfy these laws. We consider a gradient flow problem, \[\frac{\partial u}{\partial t}=-\frac{\delta E}{\delta u}, \tag{4}\] where \(E\) is a certain free energy functional. Since the general explicit Euler method does not possess an unconditional energy dissipation law, we apply a scalar auxiliary variable (SAV) method[33] to generate the required least squares problem. It introduces a new modified energy, and an unconditional dissipation law for this modified energy is satisfied at each iterative step. The SAV method has been applied to solve plenty of PDEs with thermodynamically consistent properties. It is robust, easy to implement and accurate in predicting the solution. Introducing this method into the neural network helps us explore how to combine neural network models and physical laws. The objectives of this article are: * Designing an operator learning neural network without any data except the given initial information. * Predicting solutions of parametric PDEs after a long time period. * Keeping the energy dissipative property of a dynamic system. Our main contributions are: * Constructing an evolutionary operator learning neural network to solve PDEs. * Solving a class of PDEs with different parameters in a single neural network. * Introducing the modified energy in the neural network and applying the SAV algorithm to keep the unconditional modified energy dissipation law. * Introducing an adaptive time stepping strategy and a restart strategy in order to speed up the training process. The organization of this paper is as follows: In Section 2, we introduce the Evolutionary Deep Operator Neural Network for a given PDE problem. In Section 3, we consider the physical law behind the gradient flow problem, apply the SAV method to obtain the energy dissipation law, and propose a new architecture for the neural network, EDE-DeepONet. In Section 4, we present two adaptive time stepping strategies, where the second one is called a restart strategy in some cases. In Section 5, we give an overview of the architecture of the EDE-DeepONet. In Section 6, we implement our neural network to predict solutions of heat equations, parametric heat equations, and Allen-Cahn equations and show the numerical results. 
## 2 Evolutionary Deep Operator Neural Network Consider a general gradient flow problem, \[\begin{split}&\frac{\partial\mathbf{u}}{\partial t}+\mathcal{N}_{x}( \mathbf{u})=0\\ &\mathbf{u}(\mathbf{x},0)=\mathbf{f}(\mathbf{x})\end{split} \tag{5}\] where \(\mathbf{u}\in\mathbf{R}^{l}\) and \(\mathcal{N}_{x}(\mathbf{u})\) can be written as the variational derivative of a free energy functional \(E[u(\mathbf{x})]\) bounded from below, \(\mathcal{N}_{x}(\mathbf{u})=\frac{\delta E}{\delta u}.\) The first step is to approximate the initial condition operator with DeepONet. ### Operator learning For an operator \(\mathcal{G}\), \(\mathcal{G}:\mathbf{u}(\mathbf{x})\mapsto\mathbf{f}(\mathbf{x})\), the data fed into the DeepONet are of the form \((\mathbf{u},y,\mathcal{G}(\mathbf{u})(y))\). They are obtained from the given initial conditions. The branch network takes \([\mathbf{u}(\mathbf{x}_{1}),\mathbf{u}(\mathbf{x}_{2}),\cdots,\mathbf{u}(\mathbf{x}_{m})]^{T}\) as the input, which is the numerical representation of \(\mathbf{u}\), and \([\mathbf{b}_{1},\mathbf{b}_{2},\cdots,\mathbf{b}_{p}]^{T}\in\mathbf{R}^{p\times l}\), where \(\mathbf{b}_{k}\in\mathbf{R}^{l}\) for \(k=1,2,\cdots,p\), as outputs. The trunk network takes \(\mathbf{y}\) as the input and \([g_{1},g_{2},\cdots,g_{p}]\in\mathbf{R}^{p}\) as outputs. The Unstacked DeepONet uses an FNN as the baseline model and concatenates the function values at the sensor locations and the evaluation point together, i.e. \([\mathbf{u}(\mathbf{x}_{1}),\mathbf{u}(\mathbf{x}_{2}),\cdots,\mathbf{u}(\mathbf{x}_{m}),\mathbf{y}]^{T}\). As in the equation of the Universal Approximation Theorem for Operators, we take the product of \(g_{k}\) and \(\mathbf{b}_{k}\) and sum over \(k\) to obtain: \[\mathcal{G}(\mathbf{u})(\mathbf{x})\approx\sum_{k=1}^{p}g_{k}\mathbf{b}_{k} \tag{6}\] The activation functions are applied to the trunk net in the last layer. There is no bias in this network. However, according to Theorem 1.1, the generalization error can be reduced by adding a bias. We also give the form with bias \(\mathbf{b}_{0}\): \[\mathcal{G}(\mathbf{u})(\mathbf{x})\approx\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}+\mathbf{b}_{0} \tag{7}\] As mentioned before, we assume the initial condition operator has been trained well. We are now going to find the update rule of the parameters to evolve the neural network. ### The evolution of parameters in the neural network Denoting the parameters in the branch network as \(W_{1}\) and the parameters in the trunk network as \(W_{2}\), \(W_{1}\) and \(W_{2}\) can be regarded as functions of \(t\) since they change at every time step. According to the chain rule, we have \[\frac{\partial\mathbf{u}}{\partial t}=\frac{\partial\mathbf{u}}{\partial W_{1}}\frac {\partial W_{1}}{\partial t}+\frac{\partial\mathbf{u}}{\partial W_{2}}\frac{ \partial W_{2}}{\partial t} \tag{8}\] Since \(\mathbf{u}=\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}=\sum_{k=1}^{p}g_{k}(W_{1}(t))\mathbf{b}_{k}( W_{2}(t))\), then \[\frac{\partial\mathbf{u}}{\partial t}=\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t) )}{\partial W_{1}}\frac{\partial W_{1}}{\partial t}\mathbf{b}_{k}(W_{2}(t))+\sum_ {k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \frac{\partial W_{2}}{\partial t} \tag{9}\] Our objective is to obtain \(\frac{\partial W_{1}}{\partial t}\) and \(\frac{\partial W_{2}}{\partial t}\), the update rule for the parameters. 
It is equivalent to solve a minimization problem, \[\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t} \right]=\text{argmin}\mathcal{J}(\gamma_{1},\gamma_{2}) \tag{10}\] where \[\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{ \partial g_{k}(W_{1}(t))}{\partial W_{1}}\gamma_{1}\mathbf{b}_{k}(W_{2}(t))+\sum_ {k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \gamma_{2}-\mathcal{N}_{\mathbf{x}}(\mathbf{u})\right\|_{2}^{2} \tag{11}\] In this article, the inner product \((a,b)\) is defined in the integral sense, \((a,b)=\int_{\Omega}a(\mathbf{x})b(\mathbf{x})\,\mathrm{d}\mathbf{x}\) and the \(L_{2}\) norm is defined as \(\left\|a\right\|_{2}^{2}=\int_{\Omega}|a(\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\). The minimization problem can be transformed into a linear system by the first-order optimal condition: \[\frac{\partial\mathcal{J}}{\partial\gamma_{1}}=\int_{\Omega}\!\! \left(\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}( W_{2}(t))\right)^{T}\left(\gamma_{1}\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{ \partial W_{1}}\mathbf{b}_{k}(W_{2}(t))+\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{ \partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\gamma_{2}-\mathcal{N}_{\mathbf{x}}( \mathbf{u})\right)\!\mathrm{d}\mathbf{x}=0 \tag{12}\] \[\frac{\partial\mathcal{J}}{\partial\gamma_{2}}=\int_{\Omega}\!\! \left(\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{ \partial W_{2}}\right)^{T}\!\left(\gamma_{1}\sum_{k=1}^{p}\frac{\partial g_{k}(W _{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))+\sum_{k=1}^{p}g_{k}(W_{1}(t)) \frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\gamma_{2}-\mathcal{N}_{\bm {x}}(\mathbf{u})\right)\!\mathrm{d}\mathbf{x}=0 \tag{13}\] In this system, the gradient with respect to \(W_{1}(t)\) and \(W_{2}(t)\) can be computed by automatic differentiation at each time step. By denoting \[(\mathbf{J_{1}})_{ij_{1}} =\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}^{ji} }\mathbf{b}_{k}^{i}(W_{2}(t)) \tag{14}\] \[(\mathbf{J_{2}})_{ij_{2}} =\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}^{i}(W_{2}( t))}{\partial W_{2}^{ji_{2}}}\] (15) \[(\mathbf{N})_{i} =\mathcal{N}\left(\mathbf{u}_{x}^{i}\right) \tag{16}\] where \(i=1,2,\cdots,l\), \(j_{1}=1,2,\cdots,N_{\text{para}}^{b}\), \(j_{2}=1,2,\cdots,N_{\text{para}}^{t}\). \(N_{\text{para}}^{b}\) is the number of parameters in Branch net and \(N_{\text{para}}^{t}\) is the number of parameters in Trunk net. \(\mathbf{N}\) is generated by the DeepONet, so it can be evaluated at any spatial point. 
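As a concrete illustration of this step, the sketch below (our own code, not the authors') treats \(\mathbf{J_{1}}\), \(\mathbf{J_{2}}\), and \(\mathbf{N}\) as already-assembled NumPy arrays, for instance obtained by automatic differentiation, and solves \(\min_{\gamma_{1},\gamma_{2}}\|\mathbf{J_{1}}\gamma_{1}+\mathbf{J_{2}}\gamma_{2}-\mathbf{N}\|_{2}^{2}\), which is equivalent to the first-order optimality conditions above; the array shapes and the forward Euler update are illustrative assumptions.

```python
import numpy as np

def solve_parameter_rates(J1, J2, N):
    """Solve min_{gamma1, gamma2} || J1 @ gamma1 + J2 @ gamma2 - N ||_2^2.

    J1 : (l, n_branch) sensitivities of u with respect to the branch parameters
    J2 : (l, n_trunk)  sensitivities of u with respect to the trunk parameters
    N  : (l,)          right-hand side evaluated at the l spatial points
    Returns (gamma1, gamma2), the approximate time derivatives of W1 and W2.
    """
    J = np.hstack([J1, J2])                        # (l, n_branch + n_trunk)
    gamma, *_ = np.linalg.lstsq(J, N, rcond=None)  # least-squares solution
    return gamma[:J1.shape[1]], gamma[J1.shape[1]:]

def euler_step(W1, W2, J1, J2, N, dt):
    """One forward Euler update of the flattened network parameters."""
    gamma1, gamma2 = solve_parameter_rates(J1, J2, N)
    return W1 + dt * gamma1, W2 + dt * gamma2
```

In the continuous setting, the products \(\mathbf{J_{1}^{T}}\mathbf{J_{1}}\), \(\mathbf{J_{2}^{T}}\mathbf{J_{2}}\), and \(\mathbf{J_{1}^{T}}\mathbf{N}\) appearing in the normal equations correspond to the integrals approximated in the next paragraph.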
The above integrals can be approximated by numerical methods: \[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))\right)^{T}\left(\sum_{k=1} ^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t)) \right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{1}^{T}}\mathbf{J_{1}} \tag{17}\] \[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}g_{k}(W_{1}(t ))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\right)^{T}\left(\sum_{k= 1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{2}^{T}}\mathbf{J_{2}}\] (18) \[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))\right)^{T}\left(\mathcal{ N}_{\mathbf{x}}(\mathbf{u})\right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{1}^{T}}\mathbf{N} \tag{19}\] By denoting \(\gamma_{i}^{opt}\) as optimal values of \(\gamma_{i}\), \(i=1,2\), the objective function can be reduced to \[\mathbf{J_{1}^{T}}\left(\gamma_{1}^{opt}\mathbf{J_{1}}+\gamma_{2 }^{opt}\mathbf{J_{2}}-\mathbf{N}\right) =0 \tag{20}\] \[\mathbf{J_{2}^{T}}\left(\gamma_{1}^{opt}\mathbf{J_{1}}+\gamma_{2 }^{opt}\mathbf{J_{2}}-\mathbf{N}\right) =0 \tag{21}\] The feasible solutions of the above equations are the approximated time derivatives of \(W_{1}\) and \(W_{2}\). \[\frac{dW_{1}}{dt} =\gamma_{1}^{opt} \tag{22}\] \[\frac{dW_{2}}{dt} =\gamma_{2}^{opt} \tag{23}\] where the initial conditions \(W_{1}^{0}\) and \(W_{2}^{0}\) can be determined by DeepONets for initial condition operators. The two ODEs are the updated rules in the neural networks. The simple way to solve them is the explicit Euler method. \[\frac{W_{1}^{n+1}-W_{1}^{n}}{\Delta t} =\gamma_{1}^{opt} \tag{24}\] \[\frac{W_{2}^{n+1}-W_{2}^{n}}{\Delta t} =\gamma_{2}^{opt} \tag{25}\] The neural network can calculate the solution of given PDEs at any time step \(t_{n}\) and spatial point \(\mathbf{x}_{i}\) by weights \(W_{1}^{n}\), \(W_{2}^{n}\), spatial points \(\mathbf{x}\) and initial condition \(\mathbf{u}(\mathbf{x})\). ## 3 Energy Dissipative Evolutionary Deep Operator Neural Network Let's reconsider the given problem. \[\frac{\partial\mathbf{u}}{\partial t}+\mathcal{N}_{x}(\mathbf{u})=0 \tag{26}\] \[\mathbf{u}(\mathbf{x},0)=\mathbf{f}(\mathbf{x})\] where \(\mathbf{u}\in\mathbf{R}^{l}\), \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) can be written as a variational derivative of a free energy functional \(E[\mathbf{u}(\mathbf{x})]\) bounded from below, \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})=\frac{\delta E}{\delta\mathbf{u}}\). Taking the inner product with \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) of the first equation, we obtain the energy dissipation property \[\frac{dE[\mathbf{u}(\mathbf{x})]}{dt}=\left(\frac{\delta E}{\delta\mathbf{u}},\frac{ \partial\mathbf{u}}{\partial t}\right)=\left(\mathcal{N}_{\mathbf{x}}(\mathbf{u}),\frac{ \partial\mathbf{u}}{\partial t}\right)=-\left(\mathcal{N}_{\mathbf{x}}(\mathbf{u}), \mathcal{N}_{\mathbf{x}}(\mathbf{u})\right)\leq 0 \tag{27}\] However, it is usually hard for a numerical algorithm to be efficient as well as energy dissipative. Recently, the SAV approach [33] was introduced to construct numerical schemes which is energy dissipative (with a modified energy), accurate, robust and easy to implement. 
More precisely, assuming \(E[\mathbf{u}(\mathbf{x})]>0\), it introduces a scalar auxiliary variable \(r(t)=\sqrt{E[\mathbf{u}(\mathbf{x},t)]}\), and expands the gradient flow problem as \[\begin{split}&\frac{\partial\mathbf{u}}{\partial t}=-\frac{r}{\sqrt{E( \mathbf{u})}}\mathcal{N}_{\mathbf{x}}\left(\mathbf{u}\right)\\ & r_{t}=\frac{1}{2\sqrt{E(\mathbf{u})}}\left(\mathcal{N}_{\mathbf{x}} \left(\mathbf{u}\right),\frac{\partial\mathbf{u}}{\partial t}\right)\end{split} \tag{28}\] With \(r(0)=\sqrt{E[\mathbf{u}(\mathbf{x},0)]}\), the above system has the solution \(r(t)\equiv\sqrt{E[\mathbf{u}(\mathbf{x},t)]}\), with \(\mathbf{u}\) being the solution of the original problem. ### First order scheme By setting \(\mathbf{u}^{n}=\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}\), a first order scheme can be constructed as \[\begin{split}&\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}=-\frac{r ^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\\ &\frac{r^{n+1}-r^{n}}{\Delta t}=\frac{1}{2\sqrt{E(\mathbf{u}^{n})}} \int_{\Omega}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{ \Delta t}dx.\end{split} \tag{29}\] This is a coupled system of equations for \((r^{n+1},\mathbf{u}^{n+1})\), but it can easily be decoupled as follows. Plugging the first equation into the second one, we obtain: \[\frac{r^{n+1}-r^{n}}{\Delta t}=-\frac{r^{n+1}}{2E(\mathbf{u}^{n})}\left\|\mathcal{ N}_{\mathbf{x}}(\mathbf{u}^{n})\right\|^{2}, \tag{30}\] which implies \[r^{n+1}=\left(1+\frac{\Delta t}{2E(\mathbf{u}^{n})}\left\|\mathcal{N}_{\mathbf{x}}(\bm {u}^{n})\right\|^{2}\right)^{-1}r^{n} \tag{31}\] **Theorem 3.1** (Discrete Energy Dissipation Law).: _With the modified energy defined above, the scheme is unconditionally energy stable, i.e._ \[(r^{n+1})^{2}-(r^{n})^{2}\leq 0. \tag{32}\] **Proof 3.1**.: _Taking the inner product of the first equation with \(\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\) and multiplying the second equation by \(2\Delta t\,r^{n+1}\), we obtain_ \[\begin{split}(r^{n+1})^{2}-(r^{n})^{2}&=2r^{n+1}( r^{n+1}-r^{n})-(r^{n+1}-r^{n})^{2}\\ &=\frac{\Delta t\,r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\int_{\Omega} \mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}dx-( r^{n+1}-r^{n})^{2}\\ &=-\Delta t\left(\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\right)^{2}\int_{ \Omega}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})dx-(r^{ n+1}-r^{n})^{2}\\ &\leq 0\end{split} \tag{33}\] In order to maintain the modified energy dissipation law in the evolution neural network, we only need to replace \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) by \(\frac{r^{n+1}}{\sqrt{E(\mathbf{u})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) in Section 2. 
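As a small sketch of this first order SAV step (ours, with an assumed uniform grid and a simple Riemann-sum norm), the function below computes \(r^{n+1}\) from Eq. (31), checks the discrete dissipation law of Theorem 3.1, and returns the factor \(\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\) that multiplies \(\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\); the full parameter update rule that uses this factor is stated in the next paragraph.

```python
import numpy as np

def sav_step(r_n, u_n, N_x, energy, dt, dx):
    """One explicit SAV update of the scalar auxiliary variable (Eq. (31)).

    r_n    : current value of the auxiliary variable r^n
    u_n    : current discrete solution on a uniform grid of spacing dx
    N_x    : callable returning N_x(u) (the variational derivative) on the grid
    energy : callable returning the free energy E[u], assumed positive
    """
    E_n = energy(u_n)
    Nu = N_x(u_n)
    norm_sq = np.sum(Nu**2) * dx                       # ||N_x(u^n)||_2^2
    r_np1 = r_n / (1.0 + dt * norm_sq / (2.0 * E_n))   # Eq. (31)
    # Discrete energy dissipation law (Theorem 3.1): (r^{n+1})^2 <= (r^n)^2.
    assert r_np1**2 <= r_n**2 + 1e-12
    xi = r_np1 / np.sqrt(E_n)   # factor multiplying N_x(u^n) in the scheme
    return r_np1, xi
```

The same ratio \(\xi^{n+1}=r^{n+1}/\sqrt{E(\mathbf{u}^{n})}\) is also the quantity monitored by the adaptive time stepping strategy of Section 4.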
The update rule of the neural network is \[\left[\frac{\partial W_{1}}{\partial t};\,\frac{\partial W_{2}}{\partial t} \right]=\text{argmin}\mathcal{J}(\gamma_{1},\gamma_{2}) \tag{34}\] where \[\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}^{n})}{\partial W_{1}^{n}}\gamma_{1}\mathbf{b}_{k}(W_{2}^{n})+\sum_{k=1}^ {p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}(W_{2}^{n})}{\partial W_{2}^{n}} \gamma_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n} )\right\|_{2}^{2} \tag{35}\] The corresponding linear system of the first order optimal condition is \[\mathbf{J}_{1}^{\rm T}\left(\gamma_{1}^{opt}\mathbf{J}_{1}+\gamma_{2}^{ opt}\mathbf{J}_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathbf{N}\right)=0 \tag{36}\] \[\mathbf{J}_{2}^{\rm T}\left(\gamma_{1}^{opt}\mathbf{J}_{1}+\gamma_{2}^{ opt}\mathbf{J}_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathbf{N}\right)=0 \tag{37}\] where \[(\mathbf{J}_{1})_{ij_{1}} =\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}^{n})}{\partial W_{1}^{ n,j_{1}}}\mathbf{b}_{k}^{i}(W_{2}^{n}) \tag{38}\] \[(\mathbf{J}_{2})_{ij_{2}} =\sum_{k=1}^{p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}^{i}(W_{2} ^{n})}{\partial W_{2}^{n,j_{2}}}\] (39) \[(\mathbf{N})_{i} =\mathcal{N}\left(\mathbf{u}_{\mathbf{x}}^{i}\right) \tag{40}\] and \(i=1,2,\cdots,l\), \(j_{1}=1,2,\cdots,N_{\text{para}}^{b}\), \(j_{2}=1,2,\cdots,N_{\text{para}}^{r}\). \(N_{\text{para}}^{b}\) is the number of parameters in Branch net and \(N_{\text{para}}^{t}\) is the number of parameters in Trunk net. After getting \(\gamma_{1}^{opt}\) and \(\gamma_{2}^{opt}\), \(W^{n+1}\) can be obtained by the Forward Euler method as equation (24) and (25). \[W_{1}^{n+1} =W_{1}^{n}+\gamma_{1}^{opt}\Delta t \tag{41}\] \[W_{2}^{n+1} =W_{2}^{n}+\gamma_{2}^{opt}\Delta t \tag{42}\] ## 4 Adaptive time stepping strategy and Restart strategy One of the advantages of an unconditionally stable scheme is that the adaptive time step can be utilized. Since the coefficient of \(N_{x}\), \(\frac{r^{n+1}}{\sqrt{E^{n}}}\) should be around 1, by denoting \(\xi^{n+1}=\frac{r^{n+1}}{\sqrt{E^{n}}}\), larger \(\Delta t\) is allowed when \(\xi\) is close to 1 and the smaller \(\Delta t\) is needed when \(\xi\) is far away from 1. Thus, a simple adaptive time-stepping strategy can be described as follows: ``` 1. Set the tolerance for \(\xi\) as \(\epsilon_{0}\) and \(\epsilon_{1}\), the initial time step \(\Delta t\), the maximum time step \(\Delta t_{max}\) and the minimum time step \(\Delta t_{min}\) 2. Compute \(u^{n+1}\). 3. Compute \(\xi^{n+1}=\frac{r^{n+1}}{\sqrt{E^{n}}}\). 4. If\(|1-\xi^{n+1}|>\epsilon_{0}\), Then\(\Delta t=\max(\Delta t_{min},\Delta t/2)\); Else if\(|1-\xi^{n+1}|<\epsilon_{1}\), Then\(\Delta t=\min(\Delta t_{max},2\Delta t)\). Go to Step 2. 5. Update time step \(\Delta t\). ``` **Algorithm 1** Adaptive time stepping strategy Another popular strategy to keep \(r\) approximating the original energy \(E\) is to reset the SAV \(r^{n+1}\) to be \(E^{n+1}\) in some scenarios. The specific algorithm is as following: ``` 1. Set the tolerance for \(\epsilon_{0}\), \(\epsilon_{1}\) should be some small tolerance, usually \(10^{-1}\) and \(10^{-3}\). The choices for \(\Delta t_{max}\) and \(\Delta t_{min}\) are quite dependent on \(\Delta t\), usually \(\Delta t_{max}=10^{3}\times\Delta t\) and \(\Delta t_{min}=10^{-3}\times\Delta t\). In Algorithm 2, we usually take \(\epsilon_{2}\) as \(2\times 10^{-2}\). 2. 
Feed \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) into the branch network and \(y\in Y\) into the trunk network. Denote the output of the DeepONet as \(q\). 3. Update the parameters in the DeepONet by minimizing a cost function, where the cost function can be taken as the mean squared error \(\frac{1}{|Y|}\sum_{y\in Y}\|\mathcal{G}(u)(y)-q\|^{2}\). 4. Once the DeepONet has been trained well, solve the system of equations (36) and (37) to obtain \(\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t}\right]\). 5. The value of \(\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t}\right]\) can be obtained in the current step. Since the parameters \(W_{1}^{n}\) in the branch network and \(W_{2}^{n}\) in the trunk network are known, \(W_{1}^{n+1}\) and \(W_{2}^{n+1}\) for the next step can also be obtained by the Forward Euler method or a Runge-Kutta method. 6. Repeat step 5 until the final time \(T\), where \(T=t_{0}+s\Delta t\), \(t_{0}\) is the initial time of the given PDE, \(\Delta t\) is the time step in step 5 and \(s\) is the number of repetitions of step 5. 7. Output the solution at time \(T\) from the DeepONet with the parameters obtained in step 6. ``` **Algorithm 2** Restart strategy ## 6 Numerical Experiments In this section, we implement EDE-DeepONet to solve heat equations, parametric heat equations, and Allen-Cahn equations to show its performance and accuracy. ### Example 1: Simple heat equations To show the accuracy of the EDE-DeepONet, we start with the simple heat equation with different initial conditions, since we already have the exact solution. A 1D heat equation system can be described by \[u_{t}=u_{xx} \tag{43}\] \[u(x,0)=f\] (44) \[u(0,t)=u(2,t)=0 \tag{45}\] By the method of separation of variables, we can derive the solution to the heat equation. If we set \(f(x)=a\sin(\pi x)\), the solution is \(u(x,t)=a\sin(\pi x)e^{-\pi^{2}t}\), where \(a\in[1,2]\). The corresponding energy is \(E(u)=\int_{0}^{2}\frac{1}{2}|u_{x}|^{2}dx\approx\Delta x(\sum_{i=1}^{n}\frac{ 1}{2}|u_{x}(x_{i})|^{2})\). With different parameters \(a\), the above equation describes a class of PDEs. The input data samples can be generated as \((a,x,\mathcal{G}(a)(x))\), where \(\mathcal{G}(a)(x)=a\sin(\pi x)\) for specific \(a\) and \(x\). When generating the initial data samples, we choose 50 points from \([0,2)\) uniformly for \(x\) and 50 random values of \(a\) from \([1,2]\). The time step when updating the parameters in the neural network is \(2.5\times 10^{-4}\). The number of iteration steps is 400. We compared the solutions for 4 different values of \(a\), 1.0, 1.5, 1.8, 2.5, every 100 steps. Although \(a=2.5\) is out of the range of the training data, the model still performs well in this case. With the exact solution, we also report the error for different \(a\) in Table 1. The error is defined by \(\frac{1}{N_{x}}\sum_{k=1}^{N_{x}}(u(x_{k})-\hat{u}(x_{k}))^{2}\), where \(N_{x}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the exact solution. To illustrate the relationship between the modified energy and the original energy, we compare \(r^{2}\) and \(E\) at each step in Figure 2. Both energies are actually dissipative in the EDE-DeepONet except when the restart strategy is applied. The restart strategy is used to keep \(r^{2}\) approaching \(E\). The modified energy is re-initialized when the restart strategy is applied. The restart strategy was triggered at the 370th step since the modified energy and the original energy were offset. 
After that, they are on the same trajectory again. It is clear that the modified energy approaches the original energy before and after the restart strategy applied. In Figure 3, we give the comparison between the exact solution and the solution obtained by EDE-DeepONet. From this simple heat equation, we show that EDE-DeepONet correctly predicts the solution of the PDE. The most important fact is that EDE-DeepONet can not only predict the solution in the training subset range but also the solution out of the training range. For instance, we take \(a=2.5\) while \(a\in[1,2]\) in the training process. EDE-DeepONet shows good accuracy compared to the exact solution as Figure 3 (a)-(d) and Table 1. \[u_{t}=cu_{xx} \tag{46}\] \[u(x,0)=sin(\pi x)\] (47) \[u(0,t)=u(2,t)=0 \tag{48}\] This PDE is more complex than the PDE in Example 1 since the parameter is inside the equation. The traditional numerical scheme needs to be run multiple times to deal with the case with different parameters because they are actually different equations. However, we only need to train the EDE-DeepONet once. The training range of \(c\) is chosen as \([1,2)\). We choose 50 points of \(x\) and \(c\) in the same way as example 1. First, we compared the modified energy with the original energy as Figure 4. The energy is not the same as the first example since the energy depends on the parameter \(c\). We compute the average of the energy with different \(c\) to represent the energy of the system. This case is more complex than the first one, so it needs more restarts during the training. Even though the modified energy oscillates when restart strategy used, it keeps decreasing after each restart. Second, we give the error between the solution obtained by the EDE-DeepONet and the reference solution in Table 2, where the reference solution can be obtained explicitly by variable separation method and the error is defined in the same way as example 1. Third, we give the comparison between our solution and the reference solution in Figure 5. Same as example 1, we give the predicted solution of \(c\notin[1,2]\). All of them show the good accuracy. Hence, EDE-DeepONet can actually solve parametric PDEs. ### Example 3: Allen-Cahn equations The energy in Examples 1 and 2 is quadratic and the right-hand side of the PDE is linear with respect to \(u\). We are going to show the result for the PDE with more complicated energy. The Allen-Cahn equation is a kind of reaction-diffusion equation. It is derived to describe the process of the phase separation. It was developed to solve a problem in the material science area and has been used to represent the moving interfaces in a phase-field model Figure 3: The heat equation: The solution with 4 different initial conditions \(f(x)=a\sin(\pi x)\). The curve represents the solution obtained by the EDE-DeepONet, and xxx represents the reference solution. The training parameter \(a\) is in the range of \([1,2)\), so we give three examples in this range. We also present the case out of the range. It also shows accuracy in Figure 3-(d). in fluid dynamics. The Allen-Cahn equation can be treated as a gradient flow in \(L^{2}\) with some specific energy. We discussed the 1D case and 2D case as follows: #### 6.3.1 1D case (a) Various initial conditions: We start with the simple case, 1D Allen-Cahn equation. 
It can be described by the following equations: \[u_{t}=u_{xx}-g(x) \tag{49}\] \[u(x,0)=a\sin\pi x\] (50) \[u(-1,t)=u(1,t)=0 \tag{51}\] The corresponding Ginzburg-Landau free energy \(E[u]=\int_{0}^{1}\frac{1}{2}|u_{x}|^{2}dx+\int_{x=0}^{x=1}G(u)dx\), where \(G(u)=\frac{1}{4e^{2}}(u^{2}-1)^{2}\) and \(g(u)=G^{\prime}(u)=\frac{1}{e^{2}}u(u^{2}-1)\), \(\epsilon=0.1\). The parameter \(\epsilon\) affects the width of the jump when arriving at the steady state as the Figure 7 (c), (j), (o) and (t). In the EDE-DeepONet, we set \(\Delta t=10^{-4}\), the number of spatial points \(N_{x}\) is 51 and the range of \(a\) is \([0.1,0.5]\). We also compared the modified energy and the original energy as Figure 6. The modified energy can approximate well to the original energy even in a much more complicated form. Then, we compared 4 different solutions with different \(a\in[0.1,0.5]\) obtained by the EDE-DeepONet and the reference solution obtained by the SAV method in traditional numerical computation as Figure 7. The error is shown in Table 3, where error is \begin{table} \begin{tabular}{||c|c c c c||} \hline Error & \(T=0.025\) & \(T=0.05\) & \(T=0.075\) & \(T=0.1\) \\ \hline \hline \(c=1.2\) & \(1.30\times 10^{-5}\) & \(1.43\times 10^{-5}\) & \(1.35\times 10^{-5}\) & \(1.20\times 10^{-5}\) \\ \hline \(c=1.5\) & \(1.35\times 10^{-5}\) & \(1.27\times 10^{-5}\) & \(9.80\times 10^{-6}\) & \(7.80\times 10^{-6}\) \\ \hline \(c=1.8\) & \(1.17\times 10^{-5}\) & \(1.03\times 10^{-5}\) & \(7.88\times 10^{-5}\) & \(1.83\times 10^{-5}\) \\ \hline \(c=2.5\) & \(2.20\times 10^{-4}\) & \(1.34\times 10^{-4}\) & \(6.02\times 10^{-5}\) & \(7.08\times 10^{-6}\) \\ \hline \end{tabular} \end{table} Table 2: The parametric heat equation: The initial condition of the PDE is \(f(x)=\sin\left(\pi x\right)\). The error is defined by \(\frac{1}{N_{x}}\sum_{k=1}^{N_{x}}(u(x_{0})-\hat{u}(x_{k}))^{2}\), where \(N_{x}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the exact solution. Figure 4: The parametric heat equation: The modified energy and original energy when training the network. Each iteration step represents one forward step of the PDE’s numerical solution with \(\Delta t=2.5\times 10^{-4}\). This kind of PDEs is more complicated, so it need more restarts in the training process. The original energy keeps decreasing and the modified energy also shows good approximation of the original energy. defined in the same way as example 1. \(a=0.6\notin[0.1,0.5)\) shows that EDE-DeepONet can predict the solution well out of the training range. We compared the solution with 4 different initial condition parameter \(a\) every 100 steps until the final time \(T=0.04\) as Figure 7. Each row presents the solution under the same initial condition but with different evolution time \(T\). With this example, it shows that EDE-DeepONet can deal with the PDE with a jump, while it is hard for other neural networks. (b) Various thickness of the interface: Heuristically, \(\epsilon\) represents the thickness of the interface in the phase separation process. We are able to obtain a sharp interface when \(\epsilon\to 0\) with evolving in time. Each theoretical and numerical analysis of the limit makes a difference in the purpose of the understanding of the equation, cf. e.g. [34; 35]. We take \(\epsilon\) as a training parameter. 
The problem can be described as: \[u_{t}=u_{xx}-\frac{1}{\epsilon^{2}}(u^{3}-u) \tag{52}\] \[u(-1,t)=u(1,t)=0 \tag{53}\] Figure 5: The parametric heat equation: The solution with 4 different parameters \(c\). The curve represents the solution obtained by the EDE-DeepONet and xxx represents the reference solution. The training parameter \(c\) is in the range of \([1,2)\), so we give 3 examples in this range. We also present the case out of the range in Figure 5-(d). Since the training sample contains the parameter \(\epsilon\), we cannot use the same initial condition as in the last example. We use spectral methods for a few steps with initial condition \(u(x,0)=0.4\sin{(\pi x)}\). The training sample is generated based on the numerical solution of \(u_{\epsilon}(x,0.02)\). We randomly select \(50\) different \(\epsilon\) from \([0.1,0.2]\). We set the time step as \(\Delta t=10^{-4}\) and apply the adaptive time stepping strategy. We obtain the predicted solution after \(400\) iterations with different \(\epsilon\). The rest of the settings are the same as in the last example. The solution with different \(\epsilon\) is shown in Figure 8. As \(\epsilon\) becomes smaller, the interface is sharper. Besides, although the range of the training parameter is \((0.1,0.2)\), we are also able to obtain the solution out of this range. EDE-DeepONet can track the limit of \(\epsilon\) in only one training process, whereas traditional numerical methods can hardly do so. #### 6.3.2 2D case The 2D Allen-Cahn equation is even more complex. The problem can be described as follows: \[u_{t}=\Delta u-g(u) \tag{54}\] \[u(x,y,0)=a\sin(\pi x)\sin(\pi y)\] (55) \[u(-1,y,t)=u(1,y,t)=u(x,-1,t)=u(x,1,t)=0 \tag{56}\] The corresponding Ginzburg-Landau free energy is \(E[u]=\int_{-1}^{1}\int_{-1}^{1}\frac{1}{2}(|u_{x}|^{2}+|u_{y}|^{2})dxdy+\int_{-1}^{1}\int_{-1}^{1}G(u)dxdy\), where \(G(u)=\frac{1}{4\epsilon^{2}}(u^{2}-1)^{2}\) and \(g(u)=G^{\prime}(u)=\frac{1}{\epsilon^{2}}u(u^{2}-1)\). Usually, we take \(\epsilon=0.1\). In the training process, we take \(\Delta t=2\times 10^{-4}\). The number of spatial points is \(51\times 51\) and the number of training parameters \(a\) is \(20\). The way to choose \(a\in(0.1,0.4)\) and \(x\) is the same as in example 1. We first compared the reference solution and the solution obtained by EDE-DeepONet with initial condition \(f(x,y)=0.2\sin(\pi x)\sin(\pi y)\), where the reference solution is obtained by the traditional SAV method. EDE-DeepONet predicts the solution correctly based on Table 4 and Figure 9. Then, in order to show its accuracy, we draw Figure 10 with more parameters. All the examples show good phase separation trends. The case \(a=0.4\) is out of the training range, but it still approaches the reference solution. Figure 6: 1D Allen-Cahn equation: The modified energy and original energy when training the network are shown above. Each iteration step represents one forward step of the PDE's numerical solution with \(\Delta t=10^{-4}\). The modified energy shows the same trends as the original energy. ## 7 Concluding Remarks In this paper, we provide a new neural network architecture to solve parametric PDEs with different initial conditions, while maintaining the energy dissipation of dynamic systems. We first introduce the energy dissipation law of dynamic systems into the DeepONet. We also introduce an adaptive time stepping strategy and a restart strategy. 
With our experiments, both above strategies help keep the modified energy approaching the original energy. To avoid much cost of training the DeepONet, we evolve the neural network based on Euler methods. In this article, we adopt the SAV method to solve gradient flow problems. With this successful attempt, more work could be done. For example, we can consider a general Wasserstein gradient flow problem. We are only adopting the basic architecture of the DeepONet. The more advanced architecture is compatible to our work. It may further improve the accuracy of EDE-DeepONet. ## Acknowledgments SJ and SZ gratefully acknowledge the support of NSF DMS-1720442 and AFOSR FA9550-20-1-0309. GL and ZZ gratefully acknowledge the support of the National Science Foundation (DMS-1555072, DMS-2053746, and DMS \begin{table} \begin{tabular}{||c|c c c||} \hline Error & \(T=0.01\) & \(T=0.02\) & \(T=0.03\) \\ \hline \hline \(a=0.15\) & \(1.23\times 10^{-4}\) & \(6.53\times 10^{-4}\) & \(2.75\times 10^{-3}\) \\ \hline \(a=0.2\) & \(2.24\times 10^{-4}\) & \(1.10\times 10^{-3}\) & \(4.04\times 10^{-3}\) \\ \hline \(a=0.3\) & \(4.28\times 10^{-4}\) & \(1.84\times 10^{-3}\) & \(5.76\times 10^{-3}\) \\ \hline \(a=0.35\) & \(5.31\times 10^{-4}\) & \(2.17\times 10^{-3}\) & \(6.25\times 10^{-3}\) \\ \hline \(a=0.4\) & \(6.94\times 10^{-4}\) & \(2.71\times 10^{-3}\) & \(7.22\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 4: 2D Allen-Cahn equation: The initial condition of the 2D Allen-Cahn equation is \(f(x,y)=a\sin\left(\pi x\right)\sin\left(\pi y\right)\). The error is defined by \(\frac{1}{N_{x}N_{y}}\sum_{k=1}^{N_{x}}\sum_{j=1}^{N_{y}}\sum_{j=1}^{N_{y}}(u(x _{k},y_{j})-\hat{u}(x_{k},y_{j}))^{2}\), where \(N_{x}=N_{y}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the reference solution. Figure 7: 1d Allen-Cahn equation: The solution for 1d Allen-Cahn equation with 4 different initial conditions \(f(x)=a\sin\pi x\). The curve represents the solution obtained by our model, and xxx represents the reference solution. We draw the figure for every 100 steps. The range of \(a\) is \([0.1,0.5]\). We also compare the solution with \(a\notin[0.1,0.5]\). All the figures show the trends of the phase separation. 2134209), Brookhaven National Laboratory Subcontract 382247, and U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research program DE-SC0021142 and DE-SC0023161. Figure 8: 1d Allen-Cahn equation: Solutions with different thickness of the interface at the same final time. The curve represents the solution obtained by EDE-DeepONet. xxx represents the reference solution. Figure 10: 2D Allen-Cahn equation: The solution of 2D Allen-Cahn equation with 4 different initial conditions \(f(x,y)=a\sin\pi x\sin\pi y\). The training parameter \(a\in[0.1,0.4]\). We draw three figures where \(a\) is in the training range and one figure where \(a\) is out of the training range. All the figures show the phase separation trends according to the reference solution. As \(a\) is further away from the training range, the error tends to be larger. Figure 9: 2D Allen-Cahn equation: (a)-(d) represents the reference solution of the 2D Allen-Cahn equation with initial condition \(f(x,y)=0.3\sin\left(\pi x\right)\sin\left(\pi y\right)\). (e)-(h) is the solution obtained by the EDE-DeepONet.
2301.02040
Temperature of Solar Orbiter/EUI quiet Sun small scale brightenings: evidence for a cooler component
Context: On 2020 May 30, small and short-lived EUV brightenings were observed in the Quiet Sun (QS) during a four minutes sequence by EUI/HRIEUV on board Solar Orbiter. Their physical origin and possible impact on coronal or Transition Region (TR) heating are still to be determined. Aims: Our aim is to derive the statistical thermal evolution of these events in order to establish their coronal or TR origin. Methods. Our thermal analysis takes advantage of the multithermal sensitivity of the Atmospheric Imaging Assembly (AIA) imager on board the Solar Dynamics Observatory (SDO). We first identified these HRIEUV events in the six coronal bands of AIA. We then performed a statistical time lag analysis, which quantifies the delays between the light curves from different bands. These time lags can give significant insights into the temperature evolution of these events. The analysis is performed taking into account the possible contribution to the results from the background and foreground emissions. Results: The events are characterized by time lags inferior to the AIA cadence of 12 s, for all nine couples of AIA bands analyzed. Our interpretation is the possible co-presence of events which reach or do not reach coronal temperatures ($\approx$ 1MK). We believe that the cool population dominates the events analyzed in this work.
A. Dolliou, S. Parenti, F. Auchère, K. Bocchialini, G. Pelouze, P. Antolin, D. Berghmans, L. Harra, D. M. Long, U. Schühle, E. Kraaikamp, K. Stegen, C. Verbeeck, S. Gissot, R. Aznar Cuadrado, E. Buchlin, M. Mierla, L. Teriaca, A. N. Zhukov
2023-01-05T12:24:18Z
http://arxiv.org/abs/2301.02040v2
Temperature of quiet Sun small scale brightenings observed by EUI on board Solar Orbiter: Evidence for a cooler component ###### Abstract Context:On May 30, 2020, small and short-lived extreme-UV (EUV) brightenings in the quiet Sun were observed over a four-minute sequence by the EUV channel of the Extreme Ultraviolet Imager - High Resolution Imager (EUHRI\({}_{\rm EUV}\)) on board the Solar Orbiter. The brightenings' physical origin and possible impact on coronal or transition region (TR) heating are still to be determined. Aims:Our aim is to derive the statistical thermal evolution of these events in order to establish their coronal or TR origin. Methods:Our thermal analysis took advantage of the multithermal sensitivity of the Atmospheric Imaging Assembly (AIA) imager on board the Solar Dynamics Observatory. We first identified the HRI\({}_{\rm EUV}\) events in the six coronal bands of AIA. We then performed a statistical time lag analysis that quantified the delays between the light curves from different bands, as these time lags can give significant insight into the temperature evolution of the events. The analysis was performed taking into account the possible contribution of the background and foreground emissions to the results. Results:For all nine couples of AIA bands analyzed, the brightening events are characterized by time lags inferior to the AIA cadence of 12 s. Our interpretation for these short time lags is the possible copresence of events that reach or do not reach coronal temperatures (\(\approx 1\) MK). We believe that the cool population dominates the events analyzed in this work. Conclusions: ## 1 Introduction Decades of investigation suggest that the solar corona is formed and maintained through small-scale processes, even though the mechanisms at the origin of such processes are only partially understood. Wave dissipation and magnetic field reconnection are present in the solar atmosphere and are the main candidate processes for the solar corona's plasma heating. See for instance Reale (2014) and Viall et al. (2021) for a review on the argument. Coronal observations suggest that the dissipation of magnetic energy leading to coronal heating must happen at unresolved spatial scales, and while many dissipation mechanisms are impulsive in nature, it is unclear whether the dissipation has a more continuous or sporadic character on average. The properties of the coronal heating events, such as their amplitude, duration, and frequency, are still a matter of debate. Parker (1988) proposed magnetic reconnection as the origin of these unresolved heating events (which became known as nanoflares). His theory is based on the shuffling and intermixing of the photospheric footpoints of magnetic flux tubes, which would produce reconnection and subsequent formation of tiny current sheets in which the energy is dissipated. This idea has been generalized in recent years, particularly for active region heating where processes other than reconnection (wave propagation) may also be at the origin of the nanoflares energy (Van Doorsselaere et al., 2020; Viall et al., 2021). For instance, small-scale energy dissipation can occur through a turbulent cascade created by the interaction of nonlinear waves (e.g. Buchlin and Velli, 2007) or through shock heating from nonlinear mode conversion (Moriyasu et al., 2004). Studies addressing the heating of the quiet Sun (QS) indicate that waves and reconnections are also present (e.g. 
McIntosh et al., 2011; Hahn and Savin, 2014; Upendran and Tripathi, 2021, 2022). In addition, observations of the corona from the hard X-rays (e.g. Crosby et al., 1993; Shimizu, 1995; Hannah et al., 2010) to the UV bands (e.g. Berghmans et al., 1998; Harra et al., 2000; Aschwanden and Parnell, 2002) also suggest that small-scale impulsive heating may play a role here. These observations have revealed that unresolved small bright transient events increase in number everywhere in the corona any time the spatial and temporal resolutions of instruments are increased. Examples of small and fast phenomena in the corona have been observed during the High-Resolution Coronal Imager (Hi-C) sounding rocket flights (Kobayashi et al., 2014), during which images were recorded in a band centered on 193 A (including the Fe XII 195 A line). These observations were made with a spatial resolution of about 0.3\({}^{\prime\prime}\) (\(\approx\) 220 km, Winebarger et al., 2014). The Hi-C instrument resolved small cool loops (Winebarger et al., 2013) and extreme-UV (EUV) bright dots with characteristic lengths of 680 km, durations of 25 s, and temperatures ranging between 0.5 and 1.5 MK (Regnier et al., 2014). The Interface Region Imaging Spectrograph (IRIS; De Pontieu et al., 2014) reaches a resolution of \(\approx 0.33^{\prime\prime}-0.4^{\prime\prime}\) (\(\approx\) 240 - 290 km in the corona) but is mostly sensitive to the transition region (TR) and chromospheric temperatures. With IRIS and the Atmospheric Imaging Assembly (Lemen et al., 2012, AIA; ) on board the Solar Dynamics Observatory (SDO; Pesnell et al., 2012), it was possible, for instance, to observe tiny, short-lived, and multithermal "nanojets" (size 1000 - 2000 km, \(\sim\)15 s, with chromospheric to coronal temperatures, Antolin et al., 2021; Sukarmadji et al., 2022) in large cool loops, which were interpreted as the transverse motion of field lines recommecting at small angles. Larger jet-like structures (Innes and Teriaca, 2013) were detected with the Solar Ultraviolet Measurements of Emitted Radiation spectrometer (Wilhelm et al., 1995) on board the Solar and Heliospheric Observatory (SOHO), along with UV (Peter et al., 2014) and EUV (Young et al., 2018) bursts. IRIS has also observed "unresolved fine structures" in TR lines, which have been associated with short (\(\approx 4-12\) Mm) loops or parts of loops. They were seen at the limb in QS regions and shown to be highly variable (a few minutes in lifetime), with strong Doppler shift dynamics (up to 100 km s\({}^{-1}\)). In addition to the aforementioned sporadic and short duration Hi-C rocket flights, SDO/AIA is able to obtain full-Sun images with a resolution of 1.5\({}^{\prime\prime}\) (corresponding to \(\approx\) 1100 km in the corona). Using the AIA 171 and 193 A channels, Raouafi and Stenborg (2014) detected small jets ("jetlets") at the footpoint of coronal plumes. More recently, Chitta et al. (2021) characterized the statistical properties of small EUV bursts detected in AIA 171, 193, and 211 A sequences. Similar and smaller scales brightening are now observed by Solar Orbiter. The Solar Orbiter mission (Muller, D. et al., 2020; Zouganelis et al., 2020) carries, as part of the remote-sensing payload (Auchere et al., 2020), the Extreme Ultraviolet Imager (EUI) suite (Rochus et al., 2020). The High-Resolution Imager (HRI\({}_{\rm EUV}\)) and the Full Sun Imager (FSI) 174 channels are dominated by emission from lines of Fe ix and Fe x. 
They image the plasma emission of the high TR and corona, which is the region of interest for this work. At its closest, the Solar Orbiter approaches the Sun down to 0.28 AU, allowing a two-pixel spatial resolution of \(\approx\) 200 km in the corona, with a maximal cadence of 1.6 s, thus providing the highest spatial and temporal resolution images to date at these wavelengths, for extended periods of time and on a variety of targets. On May 30, 2020, when the Solar Orbiter was at 0.556 AU, HRI\({}_{\rm EUV}\) made its first observation of the QS corona at high spatial (400 km) and temporal (5 s cadence) resolutions. During this four-minute sequence, 1467 small EUV brightenings of variable size (400 to 4000 km) and lifetime (10 to 200 s) were detected, referred to as "campfires" (Berghmans et al., 2021). The HRI\({}_{\rm EUV}\) field of view was also visible to SDO/AIA, but only part of the events detected by HRI\({}_{\rm EUV}\) were also visible in at least one of the AIA coronal bands because of the lower spatial and temporal resolutions of AIA (about 1100 km and 12 s, respectively). Berghmans et al. (2021) used the AIA observations to infer the temperature of the EUV brightenings, applying the differential emission measure diagnostic method of Hannah and Kontar (2012). The resulting distribution was centered around 1.3 MK. These features are yet to be better characterized, but initial investigations suggest that their origin is linked to photospheric magnetic cancellation (Panesar et al., 2021) or magnetic reconnection close to the TR or the chromospheric part of the loops (Kahil et al., 2022). Zhukov et al. (2021) found that these EUV brightenings are low lying (1 Mm to 5 Mm), which indicates that they could be chromospheric or transition region features. The authors noticed that the estimated heights of the features are larger than their apparent lengths. If these events are loops, this implies that HRI\({}_{\rm EUV}\) does not see their full extent. Therefore, if they reach 1 MK, they do so only at their apex. Winebarger et al. (2013), using Hi-C and SDO/AIA data, estimated the temperature of small inter-moss loops to be about \(2.8\times 10^{5}\) K. These had a projected length between about 5 and 7 Mm, and their light curves, from the different AIA bands, peaked at the same time, suggesting the absence of cooling from coronal temperature. These Hi-C loops are larger than the ones observed by Berghmans et al. (2021) and Zhukov et al. (2021). Furthermore, they were observed in active regions. However, it is possible that they share similar physical mechanisms. These results motivated our work to further investigate the thermal properties of the HRI\({}_{\rm EUV}\) events. We performed a statistical study of over 1000 detected events, and the rest of the QS was used as a reference (see Sect. 2). Our analysis is based on the time lag method (see Sect. 3) applied to the AIA light curves from several pairs of channels. This method has been extensively used in active regions to study loops subject to thermal non-equilibrium (Froment et al., 2015; Froment et al., 2017, 2020; Froment, 2016) and to test the nanoflare theory (Viall and Klimchuk, 2011; Viall and Klimchuk, 2015, 2017). The novelty of the present work lies in the application of this technique to QS region data and over short time lags. In Sect. 4, we show that there is little or no sign of a lag between all the chosen AIA bands. The implications of these results are discussed in Sect. 5. 
## 2 Observations and data reduction On May 30, 2020, while the Solar Orbiter mission was still performing commissioning activities, HRI\({}_{\rm EUV}\) observed a QS region at a cadence of 5 seconds for 4 minutes, from 14:54:00 to 14:58:05 UT. The field of view of HRI\({}_{\rm EUV}\) is visible in a full-Sun image taken in the FSI 174 channel (Fig. 1 (a)). Fig. 1 (c) shows the corresponding field of view on a full-Sun image of AIA 171, as observed by SDO, which has a similar temperature response, peaking at 0.9 MK. The apparent difference in position of the HRI\({}_{\rm EUV}\) field of view between Fig. 1 (a) and (c) was caused by the separation angle, equal to 31.5\({}^{\prime}\), between the Solar Orbiter line of sight and the Sun-Earth line. ### Detection of the EUV brightenings by HRI\({}_{\rm EUV}\) The HRI\({}_{\rm EUV}\) data used for the present work1 were taken at 0.556 AU from the Sun, resulting in a spatial resolution of \(\sim 400\) km in the corona. In this sequence, Berghmans et al. (2021) automatically detected and cataloged 1467 brightening events, nicknamed "campfires" and referred to as "events" from here on. The detection was performed after remapping the images on a regular \(2400\times 2400\) Carrington grid spanning from \(248.9^{\circ}\) to \(287.9^{\circ}\) in longitude and from \(-11.5^{\circ}\) to \(27.5^{\circ}\) in latitude (corresponding to a \(0.016\,25^{\circ}\) pitch, \(198\,\mathrm{km}\) on the sphere) and with a projection radius of \(1.004\,\,\mathrm{R}_{\odot}\) (Fig. 1). As the spacecraft jitter had been documented in the FITS headers, it was compensated for in the Carrington remapping, and the absolute pointing values were determined by cross-correlation with AIA. The automated detection scheme (appendix B of Berghmans et al. 2021) defined the events as pixels whose intensity is larger than an arbitrarily defined threshold of five times the local noise level in the first two smallest scales of a spatial à trous wavelet transform. Events overlapping between successive frames were merged to produce the final set of spatio-temporal events. Their surfaces range from \(0.04\,\mathrm{M}\mathrm{m}^{2}\) (the HRI\({}_{\mathrm{EUV}}\) spatial resolution) to \(5\,\mathrm{M}\mathrm{m}^{2}\), the upper limit being partly a consequence of the chosen maximum wavelet scale. No restriction was imposed on their duration. We note that the number of detected events, as well as their properties (surface, lifetime), depended highly upon the detection parameters. For consistency, we used the Berghmans et al. (2021) catalog as is. We however removed the events present in the first or last image of the HRI\({}_{\mathrm{EUV}}\) observation, as their lifetime might not have been fully captured. Fig. 1 (b) shows the location of the 1314 selected events on the first HRI\({}_{\mathrm{EUV}}\) image of the sequence. ### Multichannel observations with AIA A major limitation of HRI\({}_{\mathrm{EUV}}\) is its single passband, which makes it impossible to derive information on the plasma temperature. Therefore, we used data from six channels (94, 131, 171, 193, 211, and 335 A) of the AIA instrument for this purpose. We did not include the 304 band because the He ii 30.4 nm Figure 1: Images captured on May 30, 2020. Upper row: Field of view observed by FSI 174 (a) and the first image of the HRI\({}_{\mathrm{EUV}}\) sequence (b) in Carrington coordinates. The FSI image is the closest available to the HRI\({}_{\mathrm{EUV}}\) sequence. 
Lower row: AIA 171 image (c) and remapped on the same grid as HRI\({}_{\mathrm{EUV}}\) (d). Blue rectangles in the left column correspond to the field of view on the right column, and the blue dots in the right column are the positions of the 1467 detected events. spectral line is optically thick, and the interpretation of its intensity is not straightforward. The selected bands cover a wide range of plasma temperatures (0.2 MK to 8 MK, Fig. 2) but have only less than half the temporal resolution (12 s) of HRI\({}_{\rm EUV}\) (5 s). For our work, we needed to take into account the lower spatial and temporal resolutions of AIA, compared to those of HRI\({}_{\rm EUV}\). This difference meant that small and short-lived events detected by HRI\({}_{\rm EUV}\) may be unresolved when observed with AIA 171. In addition, events might not be sufficiently bright in some of the AIA bands to be detectable. The HRI\({}_{\rm EUV}\) and AIA images were paired taking into account the 229 s difference in light travel time to Solar Orbiter and to the Earth. The AIA images were remapped onto the same Carrington grid as the HRI\({}_{\rm EUV}\) data (Fig. 1, d). On this common grid, the HRI\({}_{\rm EUV}\) images were resampled with at least one grid point per pixel, and the AIA images were resampled with at least two. ## 3 Method In order to characterize the evolution of the thermal structure of these events, we used the time lag method. Because the AIA bands peak at different temperatures (Fig. 2), the time lags between them are a signature of plasma cooling (or heating) over time. For example, the response functions of the AIA 193 and 171 bands peak respectively at 1.5 MK and 0.9 MK. The intensity in the 171 band peaking after the 193 band can be interpreted as a hot plasma cooling, while the opposite behavior can be a signature of plasma heating. We discuss the various possible scenarios in detail in Sect. 5. In the following, we describe the computation and classification of the AIA light curves (Sect. 3.1) and the computation of the time lags (Sect. 3.2). The analysis was performed pixel by pixel to take into account the spatial and the temporal information contained in the data. Several events were spatially resolved in the AIA data so that the thermal behavior in individual pixels of each event would be independently characterized. This avoids the assumption that the event has no thermal substructure. This method could involve the use of low SNR for some of the pixels, but this can be avoided by performing the analysis over the integrated intensity from the whole spatial extension of the event. However, the latter approach would impose the above-mentioned assumption, which we preferred to avoid. We verified in Appendix B that a same time lag analysis, performed over whole events, yields the same results. Also in the following, we use "background" to refer to the total of background and foreground emission superimposed on the events along the line of sight. Background emission can represent a large fraction of the total emission (Sect. 4.3) and has the same properties as the QS emission observed outside the events. Since we wanted to measure the time lags of the events themselves, it was necessary to check the influence of the background (described in Section 3.1). The background intensity was estimated for each pixel and time step. ### Light curves For our analysis, we classified the pixels into two categories: "event" pixels, that is, those containing at least one event from Berghmans et al. 
(2021) during the sequence, and non-event pixels that we refer to as "QS" for simplicity. The QS pixels were used as a reference, and their statistics were compared to those of the event pixels (Sect. 4.3). While the AIA and HRI\({}_{\rm EUV}\) data were reprojected to the same Carrington grid, the location of each event could be different in the two data sets. Indeed, the separation angle between the two vantage points induced a parallax shift for those events located above or below the projection sphere. The contour of each event detected in HRI\({}_{\rm EUV}\) was shifted by the amount measured by Zhukov et al. (2021) to obtain the corresponding contour in AIA. In the case of spatially overlapping events, this can cause the classification mask (the union of the contours at each time step) in AIA to have a different shape than in HRI\({}_{\rm EUV}\). This is the case for the area shown in Fig. 3, in which two successive events peaking at 14:54:30 and 14:55:04 UT are overlapping and do not have the same height and thus do not have the same parallax shift. We estimated the background emission at each pixel using the open-cv implementation of the inpainting method of Bertalmio et al. (2001). This method estimates the intensity inside the mask by matching the intensity and intensity gradients at the boundary of the mask. This operation was performed at each time step. Whenever this background subtraction was applied to the analysis, we mentioned so explicitly in the text. Figure 3 (b) shows an example of the result from this treatment. We have selected a pixel inside the mask (pixel 1 in Fig. 3 (a)), and we plotted the light curves as measured in the HRI\({}_{\rm EUV}\) and AIA channels together with their calculated background emission. For display purposes, original and background-subtracted light curves were normalized to the standard deviation of the original. To plot all the curves on the same panel, we separated the curves from a given channel vertically by an arbitrary value of five. The error bars are the root sum square of the photon shot noise (as computed in Appendix A) and read noise components. Boerner et al. (2012) provides the read noise for all the AIA bands. For HRI\({}_{\rm EUV}\), the read noise is estimated to be equal to 1.5 DN. In Fig. 3 (b), the light curves of all channels but AIA 94 and 335 have a similar behavior. In the AIA 94 and 335 channels, the event in pixel 1 was not detected above the noise. The absence of signal in these two bands is caused by their low response (see Fig. 2) and is common for most of the events. Figure 3 (c) shows, for a comparison, the same as Fig. 3 (b) but for a representative QS pixel (pixel 2 in Fig. 3 (a)). Apparently uncorrelated fluctuations of the intensity can be seen in this figure. Figure 2: HRI\({}_{\rm EUV}\) and AIA temperature response functions computed with CHIANTI 10.0.1 (Dere et al., 1997; Del Zanna et al., 2021), assuming an electron number density \(n_{e}=10^{9}\) cm\({}^{-3}\). ### Time lags In the following subsection, we describe the computation of time lags between couples of AIA light curves. The time lags are defined as the temporal offset between the two light curves that yields the maximum Pearson's cross-correlation coefficient. By design, the images of the six channels are not cotemporal. For this reason, we resampled the light curves on the timeline of the 171 band using linear interpolation before applying the cross-correlation procedure. 
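To make the procedure concrete, the following is a minimal sketch of this pixel-wise time-lag extraction, that is, a maximum of the Pearson cross-correlation over a grid of trial offsets, refined by a parabolic fit around the peak, with the specific parameters given just below. The array names, the test signal, and the default offset grid are purely illustrative and are not the actual pipeline.

```python
import numpy as np

def time_lag(t, lc_a, lc_b, max_offset=120.0, step=12.0):
    """Lag (s) of lc_b relative to lc_a; positive if lc_b peaks later.

    lc_b is evaluated at shifted times by linear interpolation, the
    Pearson correlation with lc_a is computed for each trial offset,
    and the location of the maximum is refined with a parabola through
    the peak and its two neighbours.
    """
    offsets = np.arange(-max_offset, max_offset + step, step)
    corr = np.empty(offsets.size)
    for k, dt in enumerate(offsets):
        shifted = np.interp(t + dt, t, lc_b)          # lc_b advanced by dt
        corr[k] = np.corrcoef(lc_a, shifted)[0, 1]
    i = int(np.argmax(corr))
    if 0 < i < offsets.size - 1:                      # parabolic refinement
        y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            return offsets[i] + 0.5 * step * (y0 - y2) / denom, y1
    return offsets[i], corr[i]

# purely illustrative test: two noisy Gaussian pulses, the second ~18 s later
rng = np.random.default_rng(1)
t = np.arange(0.0, 240.0, 12.0)
a = np.exp(-0.5 * ((t - 120.0) / 30.0) ** 2) + 0.05 * rng.standard_normal(t.size)
b = np.exp(-0.5 * ((t - 138.0) / 30.0) ** 2) + 0.05 * rng.standard_normal(t.size)
print(time_lag(t, a, b))   # lag close to +18 s, correlation close to 1
```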
This cross-correlation was performed on a range of temporal offsets of \(\pm 2\) minutes with steps of 12 s. A finer estimate of the time lag was obtained by parabolic interpolation around the maximum. Figures 3 (e) and (f) show the results of this analysis for the AIA pixels 1 and 2. We plotted the values of the correlation as a function of the time offset applied between the two light curves. For the event pixel, we chose three couples with high SNRs (193 \(-\) 171, 211 \(-\) 171, and 211 \(-\) 131). These couples have a strong correlation peak at near-zero offsets: 0.3 s for 193 \(-\) 171, 0.8 s for 211 \(-\) 171, and \(-\)0.6 s for 211 \(-\) 131. The other two curves (335 \(-\) 171 and 94 \(-\) 171) involve low SNR bands and have a maximum of correlation at a time offset different from zero. Their time offset is positive for 335 \(-\) 171 (7.8 s) and negative for 94 \(-\) 171 (\(-\)3.2 s), with a maximum correlation below 0.5. The SNR is low in the 335 and 94 bands, and the peak correlation is of the order of that found in the QS (Fig. 3 (f)). We discuss the significance of the cross-correlations involving low SNR bands in Sect. 4.1 and 4.2. Figure 3 (f) shows the results for the selected QS pixel. Clearly there is no strong correlation at any time offset and for any pair of AIA channels.

Figure 3: Illustration of time lag extraction method between AIA channels on event and QS pixels. Images of HRI\({}_{\rm EUV}\) (a) (14:54:00 to 14:58:05 UT) and AIA 171 Å (d) (14:57:45 to 15:01:57 UT) averaged in time over their respective sequence on May 30, 2020. Both images are centered around Carrington coordinates (275.00, 9.07)\({}^{\circ}\). The white contours represent the masks that isolate the event pixels from the QS ones. Pixels 1 and 2 were selected, respectively, as an example for the event pixel and QS pixel. Subfigure (b) shows the light curves in pixel 1 for HRI\({}_{\rm EUV}\) and the AIA channels original data (dots) and background data estimated with the "inpainting" algorithm (solid curves). For each channel, both curves are normalized over the standard deviation over time of the original data (dots). Subfigure (c) shows the light curves in pixel 2 for the same channels of (b) normalized to their standard deviation over time. Different couples were separated by an arbitrary value of five. The error bars in subfigures (b) and (c) were computed from the shot and read noises. (e) and (f) show the cross-correlation as a function of the time offset between the AIA light curves for pixels 1 (b) and 2 (c), respectively.

Figure 4 displays the maps of AIA intensity averaged over the sequence, time lag, and maximum of cross-correlation for the area shown in Fig. 3. We noticed that the emission is not cospatial in all bands. The intensity maps show a displacement of the emission peak for AIA 211 and 94 (even though the signal is very low for AIA 94). Since the AIA channels are all coaligned, this could be due to the thermal structure of the observed features. These observations show the importance of analyzing the plasma evolution pixel by pixel, as opposed to averaging the intensity over the event surface. While doing the latter might increase the SNR, it would mix light curves of regions at different temperatures. The bands in the top row of Fig. 4 are ordered by decreasing mean intensity and thus decreasing SNR. In the bottom row, within the mask, we see correspondingly decreasing correlation values. 
Higher correlation values are associated with spatially coherent near-zero time lags, whereas lower correlations show an apparently random distribution of the time lags.

Figure 4: Time lag extraction procedure applied pixel by pixel to event pixels and their surrounding QS pixels. Top row: Intensity maps for five AIA bands (averaged over the temporal sequence) showing the event of Fig. 3 (a) and (d). The "event" region is identified by the black contour. Middle and bottom rows: Time lag and associated maximum correlation maps for five couples of AIA bands. These are the result of the pixel-by-pixel cross-correlation analysis. The maximum correlations of the events decrease as the intensities of the involved AIA channels decrease.

## 4 Results

In Sect. 4.1 we present the statistical analysis over the whole field of view of the data. In Sect. 4.2, we discuss the effect of the SNR on the results. Finally, we estimate the effect of the background on the time-lag analysis in Sect. 4.3.

### Zero time lags

For the event pixels, Fig. 5 displays the time lag and the maximum correlation 2D histograms. We chose nine representative AIA couples covering a wide range of temperature sensitivities. In this part of the work, the estimated background has been subtracted from the event pixel intensity. The 80%, 90%, and 95% confidence levels, displayed in Fig. 5, are computed in Appendix A. The counts above the 95% level are at most 5% likely to occur by chance. For most of the couples, a significant number of pixels are centered around short time lags (below the 12 s cadence), and are above the 95 % confidence level in cross-correlation. This part of the distribution is therefore statistically significant. In contrast, 94 - 335 shows no significant pixel counts above the 95 % confidence level, which matches the contour of the 2D histogram. Given that these bands are largely affected by noise, this validates, a posteriori, the principle of computing confidence levels from uncorrelated light curves (Appendix A). While the time lags are near zero, the distributions are slightly asymmetric. This can be quantified by the parameter \(\nu_{95}\), which represents the average of the time lag values above the 95% confidence level weighted by their maximum correlation. Apart from the 335 - 211 couple, all the asymmetries are below the exposure time of 2 s. For 335 - 211, the positive asymmetry is above the exposure time but below the temporal resolution.

Figure 5: Two-dimensional histograms (shades of red) of time lags and maximum correlations of nine couples of AIA channels for the 4451 event pixels of the HRI\({}_{\rm EUV}\) field of view. The estimated background has been subtracted. The green dashed lines are the confidence levels, derived in Appendix A. The \(\nu_{95}\) parameter quantifies the asymmetry of the time lag distributions. It represents the average of the event time lags above the 95% confidence level weighted by their respective maximum correlations.
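The confidence levels quoted above are derived in Appendix A from uncorrelated light curves. As a rough illustration of that principle (not a reproduction of the Appendix A computation), the sketch below estimates the chance level of the maximum cross-correlation from pairs of independent white-noise light curves with the same number of samples and the same offset grid; the noise model, number of trials, and variable names are assumptions made only for this example.

```python
import numpy as np

def chance_correlation_level(n_samples=20, max_shift=10,
                             n_trials=5000, percentile=95.0, seed=0):
    """Percentile of the maximum correlation expected from pure noise.

    Pairs of independent white-noise series (stand-ins for uncorrelated
    light curves) are correlated over integer sample shifts up to
    +-max_shift, and the requested percentile of the maxima is returned.
    """
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_trials)
    for k in range(n_trials):
        a = rng.standard_normal(n_samples)
        b = rng.standard_normal(n_samples)
        best = -1.0
        for shift in range(-max_shift, max_shift + 1):
            # correlate the overlapping parts of the two shifted series
            if shift >= 0:
                r = np.corrcoef(a[shift:], b[:n_samples - shift])[0, 1]
            else:
                r = np.corrcoef(a[:n_samples + shift], b[-shift:])[0, 1]
            best = max(best, r)
        maxima[k] = best
    return np.percentile(maxima, percentile)

# ~20 samples (4 min at the 12 s AIA cadence), offsets of up to +-2 min
print(chance_correlation_level())
```

Chance levels of this kind are what make the near-zero-lag, high-correlation counts in Fig. 5 statistically meaningful.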
### Influence of the signal level

The main panels of Fig. 6 display the 2D histograms of the average intensity over the time sequence versus the maximum correlations for the two AIA couples 193 - 171 (high-high SNR) and 94 - 171 (low-high SNR). The bottom and top rows, respectively, display the intensity of the first and second band of the pair. Both distributions of event and QS pixels are displayed in Fig. 6. The orange distributions and red contours refer to the event pixels, and the blue contours refer to the QS pixels. For the 193 - 171 couple (Fig. 6 (a) and (c)), the event pixel distribution shows a wide range of possible correlation values, in contrast to the QS pixel distribution. The latter is more compact and centered around lower maximum correlation and intensity values. However, the 171 - 94 case shows both the event and the QS pixel histograms as sharing a similar compact shape. This is mostly due to the lower intensity and thus lower SNR in the 94 band. The intensity distributions, which are displayed in the histograms in the right margins of Fig. 6, peak at higher values for the event pixels than for the QS pixels for every channel. This implies that, on average, the HRI\({}_{\rm EUV}\) events are also visible in the AIA channels. The most significant difference between the two AIA couples shown in the figure is their maximum correlation distributions, displayed in the top-margin histograms. Indeed, while the event pixel distribution peaks at higher correlation values than the QS distribution for 193 - 171, both distributions share a similar shape for 94 - 171. As shown in the intensity distributions of the right-margin histograms, the signal in the 94 band is much lower than in the other two bands. Given the exposure time of 3 s, the 94 band intensity distributions (Fig. 6 (d)) are close to the read noise value of 1.14 DN. The SNR of the median intensity over the QS in the field of view is 13.7, 9.5, and 0.7 for the 171, 193, and 94 bands, respectively. Thus in the 94 band, the noise dominates, and the events, if present in the band, remain undetected for most of the cases (see Fig. 3 (b) as an example). This is why in the 94 - 171 case the maximum correlation distributions of the events and the QS pixels share the same statistical behavior, as most of the signal in this band originates from the noise. Figure 5 mostly shows non-significant time lags resulting from noise for the couples 94 - 171 and 94 - 335.

Figure 6: Main panels: Histograms of the time-averaged intensity, as a function of the maximum cross-correlation, in the whole HRI\({}_{\rm EUV}\) field of view. The left and right columns show the results for the 193 – 171 and the 94 – 171 couples, respectively. The 2D orange histograms are the counts of event pixels. The 2D red and blue contours correspond to the 20, 40, 60, and 80 percentiles of the events and the QS pixel distributions, respectively. The histograms in the margins were normalized by their total number of counts. No histogram is displayed in the right margin of (a), as it would be a repetition of that of (b). Similarly, the top-margin histograms of (c) and (d) have been omitted, as they are the same as those of (a) and (b), respectively.

### Influence of the background subtraction

According to Fig. 6, the maximum of the AIA 171 intensity distribution is only about 1.3 times higher for the event pixels compared with the QS pixels. Therefore, the background largely contributes to the overall signal. This is why it is necessary to evaluate the influence of the background on the cross-correlations, which we illustrate using the couple 193 - 171. Figure 7 displays the time lag and the maximum correlation distributions without and with the subtraction of the estimated background component (as described in Sect. 3.1). Both the event and the QS pixel distributions are included. In the margins, the 1D event pixel distributions are displayed in red and the QS ones in blue. The green histograms correspond to the uncorrelated light curves used to compute the confidence levels (Appendix A). As in Fig. 
5, the event distributions peak at high correlation values and are concentrated around short time lags. The impact of the background intensity on the event distributions is visible when comparing Fig. 7 (a) with (b); the time lags and their asymmetries are mostly unchanged. However, when removing the background, the distribution of the event pixels becomes flattened (most visible in the margin histogram), and the counts are redistributed in the low-correlation random time lag wings. This outcome has two causes. First, the noise from the QS is propagated to the background by the inpainting (Sect. 3.1) and in turn to the background-subtracted light curves. Thus, the correlations are lowered in this case. Second, the QS signal is partly correlated around the zero time lag. This forms the high-correlation tail visible in the blue contours of Fig. 7 (a). Subtracting the background removes this correlated signal and also lowers the correlations. To conclude, removing the background isolates the contribution of the events to the time lags. Thus, the time lags in Figures 7 (b) and 5 are a property of the event pixels and not of the QS. ## 5 Discussion In this work, we have presented the results from the statistical analysis of the time lags measured in the AIA data for the small-scale EUV brightening events (the so-called campfires) cataloged by Berghmans et al. (2021). This catalog has the unique property of collecting the tiniest and most rapid brightening ever observed, which are the manifestation of physical processes probably already known but now observed over shorter temporal and spatial scales. For this reason, we preferred to use the general name of EUV events. Our observational work points to the following result: The EUV events are characterized by short time lags (within \(\pm 12\) s) and high correlations. We verified that these results are statistically significant and are not caused by background variations alone. In comparison, the QS mostly exhibits random time lags with lower correlations. It is possible that the timescales of thermal changes between events and the surrounding areas are different, the latter being much longer than the maximum time lags considered here. To our knowledge, this is the first time that the time lags associated with small-scale EUV brightenings and their surroundings have been statistically characterized. Earlier works, as mentioned in the introduction, that used the time lag technique reported zero time lags in the QS surrounding active region loops without taking into account the possible presence of small-scale brightenings. Concerning the interpretation of the short time lags, there are three possible scenarios that can be raised: (1) the observed events do not reach the peak temperatures of the response function (\(\sim\) one million degrees); (2) The observed events reach coronal temperatures (\(>1\) MK), but their fast cooling, subpixel multithermal structure, and the width of the AIA response function prevent us from detecting significant time lags; and (3) the observed events are the transition region (\(\sim 1\)MK) emission of long and hot (i.e., \(\sim 10-100\) Mm, \(\sim 3\) MK; Reale 2014) loops, which are heated impulsively. Starting with the interpretation given by the first scenario, looking at the most intense bands of AIA (Figure 2), we understand that a time lag of zero arises when the plasma temperature does not reach the peak of the 171 band. 
At the temperatures below this peak, all the bands behave similarly and so do the light curves. Furthermore, the observational properties (low-lying, short time lags) of these events resemble what is observed by Winebarger et al. (2013) for the Hi-C loops (\(T_{e}\sim 10^{5}\) K and \(n_{e}\sim 10^{10}\) cm\({}^{-3}\)) in the inter-moss loop areas. Their time lag analysis on the AIA light curves also displayed near-zero time lags, which led them to conclude that the loops did not reach one million degrees. They interpreted their observations as loops being heated impulsively with low-energy nanoflares. These loops would then cool rapidly because of their short length. Given the similarities of these Hi-C loops with the HRI\({}_{\rm EUV}\) events, we suggest that they may have a similar physical origin, that is, being the result of impulsive heating. For cold events to be visible in the AIA bands and in HRI\({}_{\rm EUV}\), they should be quite dense. We did a first order estimation of density of the HRI\({}_{\rm EUV}\) events using an average value of the background-subtracted event intensity on AIA 171 and assuming an isothermal plasma. We obtained \(n_{e}\sim 10^{9}\) cm\({}^{-3}\) for \(T_{e}=1.3\times 10^{6}\) K and \(n_{e}\sim 10^{10}\) cm\({}^{-3}\) for \(T_{e}=3\times 10^{5}\) K. The latter supports the result of Winebarger et al. (2013). However, we had to consider possible differences between the Hi-C loops and our HRI\({}_{\rm EUV}\) events. First, as mentioned, the observed solar region is not the same. But small low-lying cool loops (\(T_{e}\leq 0.5\) MK) are observed in the QS (Hansteen et al. 2014) and are ubiquitous along the supergranular cell boundaries in the upper solar atmosphere (see for instance, Feldman et al. 1999; Sanchez Almeida et al. 2007, and references therein). And since there is no distinction between supergranular cells in QS and active regions, we expected to observe similar events in both regions. Berghmans et al. (2021) showed with HRI\({}_{\rm Ly\alpha}\) observations that the HRI\({}_{\rm EUV}\) events are organized mostly around the supergranular network. Another difference between the Hi-C and the HRI\({}_{\rm EUV}\) events are their estimated temperature, around \(T_{e}\approx 0.25\pm 0.06\) MK for Hi-C events (Winebarger et al. 2013) and around \(1.3\pm 0.1\) MK for HRI\({}_{\rm EUV}\) events (Berghmans et al. 2021). In case these are similar events, we suggest that the above difference in temperature may be the result of the uncertainties in the data, on the adopted inversion methods (which is different for the two analysis), and the associated assumptions which are applied to relatively broad band instruments, as for these two imagers. Indeed, measuring the temperature of these events is very challenging. For instance, Schonfeld & Klimchuk (2020) showed that the cool plasma emission often dominates the bands even though the hot plasma is present. Assuming the second scenario, a time lag close to zero for AIA bands has been predicted by Viall & Klimchuk (2015) in the TR emission of active region coronal loops heated by nanoflares. These authors showed that the combination of the multi-temperature sensitivity of the AIA bands combined with the almost constant pressure property of the TR and its variable extension along the loop during the heating-cooling phases result in a narrower time lag with respect to the coronal emission part of the loop. 
We emphasize here that the TR of a loop is defined as the region where the thermal conduction acts as a plasma coolant, contrary to the coronal region where it acts as a heater (e.g. Klimchuk et al., 2008). While the presence of short time lags for all the AIA couples in the simulation corroborates our results, the loops modeled by Viall & Klimchuk (2015) are much longer than what we observed (\(L\approx 30-50\) Mm, with respect to \(0.4-4\) Mm). Moreover, in those simulations, a clearly different signature in the time lag exists between TR and coronal emission, while this characteristic is not visible in our data. This could possibly be explained by the short cooling time from coronal temperatures of one of these tiny loops. For instance, for the shorter loops (\(\sim 0.4\) Mm) detected by HRI\({}_{\rm EUV}\) at a temperature of \(\sim 1.3\) MK and density of \(n_{e}=10^{10}\) cm\({}^{-3}\), the cooling time is about 14 s. Due to the AIA cadence of 12 s, it is possible that our time lag method is not sensitive enough to detect both TR and coronal emission populations of short time lags. We propose to further investigate this aspect in the future through numerical simulations. The small asymmetries we have in our time lag distributions are below the cadence of our observation, and we would need data with a higher temporal resolution to corroborate such a result. The cadence should be at least similar to the one of HRI\({}_{\rm EUV}\), where the emission variation of the event is better captured. At present, we verified that the measured time lags are independent of the event's duration.

Concerning the third scenario, if such large loops exist in the QS, they remain undetected by the AIA channels, meaning that they would have a very low density. Without independent evidence that this is the case, we exclude this possibility for now.

In conclusion, in the picture of impulsive heating phenomena acting in the QS region and considering the wide temperature response of the AIA bands, our results appear to also be consistent with predominantly fast cooling plasma from more than 1 MK, satisfying our scenario two interpretation. Consistent with this picture are the results from a 3D magneto-hydrodynamics (MHD) simulation using the Max Planck Institute for Solar System Research/University of Chicago Radiative MHD (MURaM) code by Chen et al. (2021). In those simulations, magnetic reconnections in the coronal part of small QS loops produced events with properties similar to what was observed in HRI\({}_{\rm EUV}\). The authors noticed that the simulated HRI\({}_{\rm EUV}\) emission only showed the apex of the heated loop, where the lower density allows the available stored energy to heat the plasma up to \(\approx 1.3\) MK, even though some hotter temperatures could also be reached.

To summarize, our results are consistent with two possible scenarios: Either the events do not reach coronal temperatures or they do but they cool faster than the AIA temporal resolution. It is possible that the two scenarios coexist, as the HRI\({}_{\rm EUV}\) catalog does not separate events produced by different physical processes. The AIA cadence and the multithermal nature of the bands do not allow separating the emissions from the possible cool and hot plasma along the line of sight.

Figure 7: Margin and 2D histograms of time lags and associated maximum correlation values for couple 193 – 171. Subfigure (a) in red shows the original distribution for the event pixels. Subfigure (b) shows the background subtracted event pixels. 
The blue contours in the central panel of subfigure (a) are the 20, 40, 60, and 80 percentiles of the QS pixel distribution. The green colors in the main panels are the confidence levels, and the distributions of the light curves used to compute them are plotted with the same color in the margin histograms. The margin histograms were normalized by their total number of pixels. The parameter \(\nu_{\rm-95}\) is defined as in Fig. 5. To solve the ambiguity in the temperature, we need to use spectroscopic data. This has been done recently by using the Spectral Imaging of the Coronal Environement (SPICE) instrument on board Solar Orbiter (Huang et al. submitted to this issue). The authors investigated a few HRIEUV events and came to the conclusion that the studied events do not show significant emission at temperatures higher than that of Ne viii (0.63 MK). Although such spectroscopic analysis needs to be extended to a larger sample to better quantify the fraction of events not reaching high temperatures, we find that this analysis supports our conclusion that QS small-scale EUI brightenings are in most cases largely dominated by cool emission. Further investigations are needed to confirm this idea. For these reasons, we plan to extend our methodology to forward modeling constrained by spectroscopic data. ###### Acknowledgements. The authors gratefully thank J.A. Klimchuk for the fruitful discussions and suggestions. The authors thank the referee for the constructive comments that helped to improve the manuscript. A.D. acknowledges the funding by CNES and EDOM. S.P. acknowledges the funding by CNES through the MEDOC data and operations center. G.P. was supported by a CNES postdoctoral allocation. P.A. and D.M.L. are grateful to the Science Technology and Facilities Council for the award of Ernest Rutherford Fellowships (ST/RO40285/2 and ST/RO303246/1, respectively). The ROB team thanks the Belgian Federal Science Policy Office (BEL SPO) for the provision of financial support in the framework of the PRODEX Programme of the European Space Agency (ESA) under contract numbers 4000134474 and 4000136424. This paper uses the Solar Orbiter/EUI data release 1.0 [https://doi.org/10.2/1444/WJC-MH32](https://doi.org/10.2/1444/WJC-MH32). Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The EUI instrument was built by CSL, IAS, MPS, MSSUI/UCL, PMO/WRC, ROB, LCF/IO with funding from the Belgian Federal Science Policy Office for ELISP/PRODEX PEA 4000134808, 400011292, 4000117262, and 400013447); the Centre National of Etudes Spatiales (CNES); the UK Space Agency (UKSA); the Bundesministerium fur Luft- und Raumfahrt Energie (BMW) through the Deutsches Zentrum fur Luft- und Raumfahrt (DLR); and the Swiss Space Office (SSO). This work used data provided by the MEDOC data and operations centre (CNES / CNRS / Univ. Paris-Saclay), [http://medoc.ias.us-u.psu.de/](http://medoc.ias.us-u.psu.de/). This research used version 0.6.4 (Barnes et al. 2021) of the a major source software package (Barnes et al. 2020).
2306.00828
Gravitational corrections to the Einstein-Scalar-QCD model
This study employs the effective field theory approach to quantum gravity to investigate a non-Abelian gauge theory involving scalar particles coupled to gravity. The study demonstrates explicitly that the Slavnov-Taylor identities are maintained at one-loop order, which indicates that the universality of the color charge is preserved. Additionally, the graviton corrections to the two-loop gluon self-energy and its renormalization are computed.
Huan Souza, L. Ibiapina Bevilaqua, A. C. Lehum
2023-06-01T15:49:21Z
http://arxiv.org/abs/2306.00828v1
# Gravitational corrections to the Einstein-Scalar-QCD model

###### Abstract

This study employs the effective field theory approach to quantum gravity to investigate a non-Abelian gauge theory involving scalar particles coupled to gravity. The study demonstrates explicitly that the Slavnov-Taylor identities are maintained at one-loop order, which indicates that the universality of the color charge is preserved. Additionally, the graviton corrections to the two-loop gluon self-energy and its renormalization are computed.

## I Introduction

Although we are still in need of a consistent and generally accepted description of quantum gravity at high energies, if we restrict ourselves to low energies compared to the Planck scale, we can nevertheless draw some trustworthy conclusions about gravitational phenomena at the quantum level using the viewpoint and methods of effective field theories [1; 2; 3]. Thus, the well known nonrenormalizability of Einstein's theory coupled to other fields [4; 5; 6] is not an impediment to studying the influence of gravity on the renormalization of other fields and parameters in a meaningful way. The central idea is that we add to the action the higher-order terms needed to renormalize the parameters of the lower-order terms, and the new parameters introduced will be irrelevant to the low-energy behavior of the theory.

As is well known, the renormalized quantities of a theory depend on an arbitrary scale, and the renormalization group is the theoretical tool to study this dependence; it allows us to describe how the coupling constants change with this scale, establishing the so-called running of the coupling constants [7]. If this dependence is such that the coupling constant gets weaker as we go to higher energies, the theory is said to be asymptotically free [8; 9; 10]. The possibility that gravitational corrections could render all gauge coupling constants asymptotically free was suggested by Robinson and Wilczek, who used the effective field theory approach of quantum gravity to reach this conclusion [11]. However, this result was soon contested by Pietrykowski [12], who showed that the result was gauge dependent. Subsequently, many works investigated the use of the renormalization group in quantum gravity as an effective field theory (see for instance Refs. [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]).

In a previous work [21], we used dimensional regularization to compute gravitational effects on the beta function of scalar quantum electrodynamics at one-loop order and found that all gravitational contributions cancel out. The situation is different at two-loop order, in which we do find nonzero gravitational corrections to the beta function for both scalar and fermionic QED, as shown in a later work [22]. However, those corrections give a positive contribution to the beta function, and thus the electrical charge is neither asymptotically free nor has a nontrivial fixed point.

The use of the renormalization group in the context of non-renormalizable field theories raises some subtle questions. The universality of the coupling constants in effective field theories was discussed by Anber _et al._ in [20], where it was suggested that an operator mixing could make the coupling constants dependent on the process under consideration and therefore non-universal. That would imply that, unlike renormalizable field theories, the concept of running coupling may not be useful in the effective field theory approach to quantum gravity. 
This is indeed the case for the quartic self-interaction of scalars in scalar-QED, as discussed in [21]; but, as shown in [21] for scalar-QED and in [23] for fermionic-QED, it seems not to be the case for the gauge coupling because of the Ward identity. The central role of the gauge symmetry in the universality of the gauge coupling for QED led us to explore this issue in the non-Abelian case. Using dimensional regularization, we showed that the Slavnov-Taylor identities are satisfied in a non-Abelian gauge theory coupled to fermions and gravity [24]. In the same work, we have also calculated the gravitational correction to the beta function at one loop, thus verifying directly the absence of contributions from the gravitational sector.

In previous studies, the coupling of non-Abelian gauge theories to gravity has been investigated [24; 25; 26; 27]. In this research, we extend our previous analysis by investigating the asymptotic behavior of a non-Abelian gauge theory coupled to complex scalars and gravity. This exploration is motivated by the significant role scalar theories play in the advancement of high-energy theory. Over the years, scalar models have been proposed to tackle issues such as renormalization group theory for non-renormalizable theories [28], the study of dilatons [29], and potential candidates for dark matter [30; 31]. In fact, Ref. [32] argues that quantum gravity might have crucial implications in a theory of dark matter. Additionally, a recent study [33] investigated the interaction between SU(2) Yang-Mills waves and gravitational waves. The results revealed that while the problem can be perturbatively studied in the symmetric phase, non-perturbative approaches are necessary in the broken phase. Hence, the examination of a non-Abelian gauge theory coupled to complex scalars and gravity is of particular interest due to the fundamental role scalar theories have played in addressing diverse problems in high-energy theory.

The paper is structured as follows. Section II introduces the Lagrangian and propagators of the model. In Section III, the one-loop renormalization of the model is presented, highlighting the preservation of gauge invariance of the gravitational interaction and respect for the Slavnov-Taylor identities. Section IV utilizes the Tarasov algorithm to compute the two-loop counterterm for the gluon wave-function. Finally, concluding remarks are provided in Section V. The minimal subtraction (MS) scheme is used throughout this work to handle the UV divergences, with \((+---)\) being the spacetime signature, and natural units \(c=\hbar=1\) are adopted.

## II The Einstein-scalar-QCD model

To get an effective field theory description for our model, we add higher order terms to the Lagrangian of a non-Abelian gauge theory with complex scalars coupled to gravity: \[\mathcal{L}= \sqrt{-g}\sum_{f}\Big{\{}\frac{2}{\kappa^{2}}R-\frac{1}{4}g^{\mu\alpha}g^{\nu\beta}G^{a}_{\mu\nu}G^{a}_{\alpha\beta}+g^{\mu\nu}(D_{\mu}\phi^{i})^{\dagger}D_{\nu}\phi^{i}-m_{i}^{2}(\phi^{i})^{\dagger}\phi^{i}-\frac{\lambda}{4}((\phi^{i})^{\dagger}\phi^{i})^{2}+\mathcal{L}_{HO}\Big{\}},\] where the index \(i=1,2,\cdots,N_{s}\) runs over the scalar flavors, \(G^{a}_{\mu\nu}=\nabla_{\mu}A^{a}_{\nu}-\nabla_{\nu}A^{a}_{\mu}+gf^{abc}A^{b}_{\mu}A^{c}_{\nu}\) is the non-Abelian field-strength with \(f^{abc}\) being the structure constants of the \(SU(N)\) group, and \(D_{\mu}=\partial_{\mu}-igt^{a}A^{a}_{\mu}\) is the covariant derivative. 
The higher order terms \(\mathcal{L}_{HO}\) are written as \[\mathcal{L}_{HO}=\frac{\tilde{\lambda}_{1}}{M_{P}^{2}}\left[\text{Re}((\phi^{i })^{\dagger}\partial_{\mu}\phi^{i})\right]^{2}+\frac{\tilde{\lambda}_{2}}{M_{ P}^{2}}\left[\text{Im}((\phi^{i})^{\dagger}\partial_{\mu}\phi^{i})\right]^{2}- \frac{\tilde{e}_{3}}{4}G^{\mu\nu}_{a}\frac{\Box}{M_{P}^{2}}G^{a}_{\mu\nu}. \tag{2}\] To obtain the usual quadratic term for the gravitational field, we need to expand \(g_{\mu\nu}\) around the flat metric as \[g_{\mu\nu}=\eta_{\mu\nu}+\kappa h_{\mu\nu}, \tag{3}\] such that \[g^{\mu\nu}=\eta^{\mu\nu}-\kappa h^{\mu\nu}+\cdots\qquad\text{and}\qquad\sqrt{ -g}=1+\frac{\kappa}{2}h+\cdots, \tag{4}\] where \(h=\eta^{\mu\nu}h_{\mu\nu}\). The affine connection is written as \[\Gamma^{\lambda}_{\ \mu\nu}=\frac{1}{2}\kappa(\eta^{\lambda\sigma}-\kappa h^{ \lambda\sigma})(\partial_{\mu}h_{\sigma\nu}+\partial_{\nu}h_{\sigma\mu}- \partial_{\sigma}h_{\mu\nu}). \tag{5}\] Organizing the Lagrangian as, \[\mathcal{L} = \mathcal{L}_{h}+\mathcal{L}_{f}+\mathcal{L}_{A}; \tag{6a}\] \[\mathcal{L}_{h} = \frac{2}{\kappa^{2}}\sqrt{-g}R;\] (6b) \[\mathcal{L}_{s} = \sqrt{-g}[g^{\mu\nu}(D_{\mu}\phi^{i})^{\dagger}D_{\nu}\phi^{i}-m_ {i}(\phi^{i})^{\dagger}\phi^{i}+\lambda((\phi^{i})^{\dagger}\phi^{i})^{2}];\] (6c) \[\mathcal{L}_{A} = -\frac{\sqrt{-g}}{4}g^{\mu\alpha}g^{\nu\beta}G^{a}_{\mu\nu}G^{a}_ {\alpha\beta}. \tag{6d}\] Using Eqs. (3)-(5), we write the pure gravity sector (6b) in terms of \(h_{\mu\nu}\). Moreover, it is convinient to organize \(\mathcal{L}_{h}\) in powers of \(h\) as follows: \[\mathcal{L}_{h} = \mathcal{L}_{h}^{0}+\kappa\mathcal{L}_{h}^{1}+\cdots \tag{7a}\] \[\mathcal{L}_{h}^{0} = -\frac{1}{4}\partial_{\mu}h\partial^{\mu}h+\frac{1}{2}\partial_{ \mu}h^{\sigma\nu}\partial^{\mu}h_{\sigma\nu};\] (7b) \[\mathcal{L}_{h}^{1} = \frac{1}{2}h^{\alpha}_{\ \beta}\partial^{\mu}h^{\beta}_{\ \alpha}\partial_{\mu}h-\frac{1}{2}h^{\alpha}_{\ \beta}\partial_{\alpha}h^{\mu}_{\ \nu}\partial^{\beta}h^{\nu}_{\ \mu}-h^{\alpha}_{\ \beta}\partial_{\mu}h^{\nu}_{\ \alpha} \partial^{\mu}h^{\beta}_{\ \nu}\] (7c) \[+\frac{1}{4}h\partial^{\beta}h^{\mu}_{\ \nu}\partial_{\beta}h^{\nu}_{\ \mu}+h^{\beta}_{\ \mu}\partial_{\nu}h^{\alpha}_{\ \beta}\partial^{\mu}h^{\nu}_{\ \alpha}-\frac{1}{8}h \partial^{\nu}h\partial_{\nu}h,\] where the indices are raised and lowered with the flat metric (here and henceforth, we are following the results in Ref. [34]). 
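Equation (4) can be checked quickly with a computer algebra system. The following sympy sketch (an illustration only, not part of the calculation in the paper) verifies, for a generic symmetric \(h_{\mu\nu}\), that \(\eta^{\mu\nu}-\kappa h^{\mu\nu}\) inverts \(g_{\mu\nu}=\eta_{\mu\nu}+\kappa h_{\mu\nu}\) up to terms of order \(\kappa^{2}\), and that \(-\det g=1+\kappa\,h+O(\kappa^{2})\) with \(h=\eta^{\mu\nu}h_{\mu\nu}\), from which \(\sqrt{-g}=1+\frac{\kappa}{2}h+\cdots\) follows.

```python
import sympy as sp

kappa = sp.symbols('kappa')
# generic symmetric perturbation h_{mu nu}
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h{min(i, j)}{max(i, j)}'))
eta = sp.diag(1, -1, -1, -1)          # mostly-minus flat metric
g = eta + kappa * h                   # Eq. (3)

# inverse metric to first order: g^{mu nu} = eta^{mu nu} - kappa h^{mu nu}
h_up = eta.inv() * h * eta.inv()      # indices raised with eta
g_inv_1 = eta.inv() - kappa * h_up
residual = (g * g_inv_1 - sp.eye(4)).applyfunc(sp.expand)
# every entry of the residual starts at order kappa^2
print(all(sp.simplify(e.coeff(kappa, 0)) == 0 and
          sp.simplify(e.coeff(kappa, 1)) == 0 for e in residual))

# determinant: -det(g) = 1 + kappa * eta^{mu nu} h_{mu nu} + O(kappa^2),
# hence sqrt(-g) = 1 + (kappa/2) h + ..., as in Eq. (4)
det_minus_g = sp.expand(-g.det())
trace_h = (eta.inv() * h).trace()
print(sp.simplify(det_minus_g.coeff(kappa, 0) - 1),
      sp.simplify(det_minus_g.coeff(kappa, 1) - trace_h))
```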
For the matter sector (6c), the expansion around the flat metric give us \[\mathcal{L}_{s} = (D^{\mu}\phi^{i})^{\dagger}D_{\mu}\phi^{i}-m_{i}^{2}((\phi^{i})^ {\dagger}\phi^{i})-\frac{\lambda}{4}((\phi^{i})^{\dagger}\phi^{i})^{2}-\kappa h ^{\mu\nu}(D_{\mu}\phi^{i})^{\dagger}D_{\nu}\phi^{i} \tag{8a}\] \[+\frac{\kappa}{2}h\left[(D^{\mu}\phi^{i})^{\dagger}D_{\mu}\phi^{i }-m_{i}^{2}(\phi^{i})^{\dagger}\phi^{i}-\frac{\lambda}{4}((\phi^{i})^{\dagger }\phi^{i})^{2}\right],\] which we organize as follows \[\mathcal{L}_{s} = \mathcal{L}_{s}^{0}+\kappa\mathcal{L}_{s}^{1}+\cdots \tag{9a}\] \[\mathcal{L}_{s}^{0} = (D^{\mu}\phi^{i})^{\dagger}D_{\mu}\phi^{i}-m_{i}^{2}((\phi^{i})^ {\dagger}\phi^{i})-\frac{\lambda}{4}((\phi^{i})^{\dagger}\phi^{i})^{2}\] (9b) \[\mathcal{L}_{s}^{1} = -h^{\mu\nu}(D_{\mu}\phi^{i})^{\dagger}D_{\nu}\phi^{i}+\frac{1}{2} h\left[(D^{\mu}\phi^{i})^{\dagger}D_{\mu}\phi^{i}-m_{i}^{2}(\phi^{i})^{ \dagger}\phi^{i}-\frac{\lambda}{4}((\phi^{i})^{\dagger}\phi^{i})^{2}\right]; \tag{9c}\] and finally, for the gauge sector, \[\mathcal{L}_{A} = \mathcal{L}_{A}^{0}+\kappa\mathcal{L}_{A}^{1}+\cdots \tag{10a}\] \[\mathcal{L}_{A}^{0} = -\frac{1}{4}G^{a}_{\mu\nu}G^{\mu\nu}_{a}\] (10b) \[\mathcal{L}_{A}^{1} = \frac{1}{2}h^{\tau}_{\ \nu}G^{\mu\nu}_{a}G^{a}_{\mu\tau}+\frac{1}{2} h\mathcal{L}_{A}^{0}. \tag{10c}\] As usual for gauge theories, in order to quantize this model, we have to deal with the excess of degrees of freedom in \(A^{a}_{\mu}\) and \(h_{\mu\nu}\) due to their symmetries. In our calculations, we have followed the Faddeev-Popov procedure that introduces gauge-fixing terms in the action that will modify the propagators of both \(A^{a}_{\mu}\) and \(h_{\mu\nu}\). Moreover, we must also introduce ghost fields for both vector and tensor fields. However, the ghost field associated with the graviton will not appear in this text because, since we are working with the one-graviton exchange approximation, the new term containing the ghosts added to the action will not contribute to the renormalization of the gauge coupling constant. Therefore, whenever we refer to ghost field in what follows, we mean the one associated with \(A^{a}_{\mu}\). The propagators for scalars, ghosts, gluons and gravitons are given, respectively, by \[\Delta_{s}(p) = \frac{i}{p^{2}-m_{a}^{2}}; \tag{11a}\] \[\Delta_{ab}(p) = \frac{i}{p^{2}}\delta_{ab};\] (11b) \[\Delta^{\mu\nu}_{ab}(p) = \frac{i}{p^{2}}\left(\eta^{\mu\nu}-(1-\xi_{A})\frac{p^{\mu}p^{\nu }}{p^{2}}\right)\delta_{ab};\] (11c) \[\Delta^{\alpha\beta\mu\nu}(p) = \frac{i}{p^{2}}\left(P^{\alpha\beta\mu\nu}-(1-\xi_{h})\frac{Q^{ \alpha\beta\mu\nu}}{p^{2}}\right). \tag{11d}\] The gauge-fixing parameters \(\xi_{A}\) and \(\xi_{h}\) will be carried out through the whole calculation, since we do not want to choose any specific gauge. The projectors \(P^{\alpha\beta\mu\nu}\) and \(Q^{\alpha\beta\mu\nu}\) in the graviton propagator are given by \[P^{\alpha\beta\mu\nu} = \frac{1}{2}\left(\eta^{\alpha\mu}\eta^{\beta\nu}+\eta^{\alpha\nu} \eta^{\beta\mu}-\eta^{\alpha\beta}\eta^{\mu\nu}\right);\] \[Q^{\alpha\beta\mu\nu} = (\eta^{\alpha\mu}p^{\beta}p^{\nu}+\eta^{\alpha\nu}p^{\beta}p^{ \mu}+\eta^{\beta\mu}p^{\alpha}p^{\nu}+\eta^{\beta\nu}p^{\alpha}p^{\mu}). \tag{12}\] ## III The one-loop renormalization The Slavnov-Taylor identities are a set of relations that must be satisfied by the n-point functions to ensure the gauge independence of the observables of the theory. 
In this section we want to explicitly show that the Slavnov-Taylor identities are respected at one-loop order for our model. To simplify our computations, we will consider here that all the masses are the same, so we drop the index \(i\). As we will see, this will not affect our final result. We start by computing the n-point functions. Namely, the self-energy of scalar, vector and ghost fields (\(\Sigma_{s},\Pi^{\mu\nu}_{ab}\) and \(\Sigma_{ab}\), respectively), also the scalar-gluon, ghost-gluon and gluon-gluon three-point functions (\(\Gamma^{\mu}_{a}\), \(\Gamma^{\mu}_{abc}\) and \(\Pi^{\mu\nu\alpha}_{abc}\), respectively), the gluon four-point function (\(\Gamma^{\mu\nu\rho\sigma}_{abcd}\)), and finally the scalar-gluon four-point function (\(\Pi^{\mu\nu}_{abcd}\)). All the computations were done using the _Mathematica_ packages: _FeynRules_ to generate the models [35], _FeynArts_ to draw the diagrams [36], and _FeynCalc_ to simplify and compute the amplitudes [37]. At one-loop, the self-energy of the scalar field, Fig. 1, results in \[-i\Sigma_{s}(p) =ip^{2}\left(\frac{C_{A}\left(\xi_{A}-3\right)g^{2}-\left(\xi_{h}-2 \right)\kappa^{2}m^{2}}{16\pi^{2}\epsilon}+Z_{2s}^{(1)}\right) \tag{13}\] \[+im^{2}\left(\frac{-C_{A}\xi_{A}g^{2}+4\lambda N_{s}-\left(\xi_{h }-2\right)\kappa^{2}m^{2}}{16\pi^{2}\epsilon}-Z_{m_{s}}^{(1)}\right)+\text{ finite},\] where \(C_{A}=N\) for the \(SU(N)\) group. By imposing finiteness to \(\Sigma_{s}(p)\), we find the following one-loop counterterms: \[Z_{2s}^{(1)} = \frac{\kappa^{2}m^{2}\left(\xi_{h}-2\right)-C_{A}\left(\xi_{A}-3 \right)g^{2}}{16\pi^{2}\epsilon}, \tag{14a}\] \[Z_{m}^{(1)} = \frac{-C_{A}\xi_{A}g^{2}+4\lambda N_{s}-\left(\xi_{h}-2\right) \kappa^{2}m^{2}}{16\pi^{2}\epsilon}. \tag{14b}\] For the gluon self-energy, it is convenient to write the one-loop correction (corresponding to the diagrams in Fig. 2) as \[\Pi_{ab}^{\mu\nu}(p)=\left(p^{2}\eta^{\mu\nu}-p^{\mu}p^{\nu}\right)\Pi(p)\delta _{ab}, \tag{15}\] where the function \(\Pi(p)\) is found to be \[\Pi(p)=-iZ_{3}^{(1)}-ip^{2}\tilde{Z}_{3}^{(1)}+\frac{i\kappa^{2}p^{2}\left(2- 3\xi_{h}\right)}{96\pi^{2}\epsilon}-\frac{iC_{A}g^{2}\left(2N_{s}+3\xi_{A}-13 \right)}{96\pi^{2}\epsilon}+\text{finite}, \tag{16}\] and, imposing the finiteness on \(\Pi(p)\), we find \[Z_{3}^{(1)} = -\frac{C_{A}g^{2}\left(2N_{s}+3\xi_{A}-13\right)}{96\pi^{2} \epsilon}, \tag{17a}\] \[\tilde{Z}_{3}^{(1)} = -\frac{\kappa^{2}(3\xi_{h}-2)}{96\pi^{2}\epsilon}. \tag{17b}\] We can see from Eq. (16) that \(Z_{3}\) is the relevant counterterm to the beta function of the color charge, since it is the renormalizing factor for the quadratic term \(G_{a}^{\mu\nu}G_{\mu\nu}^{a}\), while \(\tilde{Z}_{3}\) renormalizes Figure 1: Feynman diagrams for the scalar self-energy. Continuous, wiggly, dotted, and dashed lines represent the scalar, gluon, ghost, and graviton propagators, respectively. a higher derivative term like \(G_{a}^{\mu\nu}\Box G_{\mu\nu}^{a}\). Notice also that the UV divergent part of Eq. (16) is not dependent on the masses of the scalars. Contributions to the ghost self-energy up to one-loop order are depicted in Fig. 3. The resulting expression is \[-i\Sigma_{ab} = \left(\frac{ip^{2}C_{A}\left(\xi_{A}-3\right)g^{2}}{64\pi^{2} \epsilon}+ip^{2}Z_{2c}^{(1)}\right)\delta_{ab}+\mbox{finite}, \tag{18}\] and, imposing finiteness, we find \[Z_{2c}^{(1)} = -\frac{C_{A}g^{2}\left(\xi_{A}-3\right)}{64\pi^{2}\epsilon}. \tag{19}\] Notice that in Fig. 3 the gravitational interactions are not shown. 
Although in the action there is a coupling of \(h^{\mu\nu}\) to the kinetic term of the ghosts associated with the gluons, the gravitational contributions to the ghost self-energy will be renormalized by a higher-order term and are therefore irrelevant for our purposes here. One way to see why this happens is to observe that both the ghosts and the graviton are massless, so the only contribution proportional to \(\kappa^{2}\) must be of the order \(p^{4}\).

Figure 3: Feynman diagrams for the ghost self-energy.

Figure 2: Feynman diagrams for the gluon self-energy.

For the 3-point functions, let us first consider the ghost-ghost-gluon vertex (Fig. 4), where again all the gravitational corrections are renormalized by higher-order terms and are therefore omitted here. Also, in the following expressions, we will use \(p_{1}\) and \(p_{2}\) to represent incoming external momenta, and \(p_{3}\) and \(p_{4}\) for outgoing momenta. The expression obtained for these diagrams is \[\Gamma^{\mu}_{abc}=-gp^{\mu}_{3}f_{abc}\left(\frac{C_{A}g^{2}\xi_{A}}{32\pi^{2}\epsilon}+Z^{(1)}_{1c}\right)+\mbox{finite}, \tag{20}\] and the subtraction of the UV pole will give us \[Z^{(1)}_{1c}=-\frac{C_{A}g^{2}\xi_{A}}{32\pi^{2}\epsilon}. \tag{21}\] For the other 3-point function, the scalar-scalar-gluon vertex, the gravitational interaction will be present in some diagrams, as we can see in Fig. 5, where the relevant contributions to this function up to one-loop order are shown. The resulting expression is \[-i\Gamma_{abc}^{\mu} = gf_{abc}(p_{2}^{\mu}-p_{3}^{\mu})\left(\frac{C_{A}\left(9-5\xi_{A}\right)g^{2}+4\kappa^{2}m^{2}\left(\xi_{h}-2\right)}{64\pi^{2}\epsilon}-Z_{1}^{(1)}\right)+O(p^{3})+\mbox{finite}, \tag{22}\] from which, through MS, we find \[Z_{1}^{(1)}=\frac{C_{A}\left(9-5\xi_{A}\right)g^{2}+4\kappa^{2}m^{2}\left(\xi_{h}-2\right)}{64\pi^{2}\epsilon}. \tag{23}\]

Figure 4: Feynman diagrams for the vertex interaction between gluons and ghosts up to one-loop order.

Figure 5: Feynman diagrams for the vertex interaction between scalars and gluons up to one-loop order.

The 3-point function describing the vertex with three gluons is shown in Fig. 6. We have used the projection \[\Pi_{abc}^{\mu\nu\alpha}=\eta^{\mu\nu}\Pi_{abc}^{\alpha}\qquad\Rightarrow\qquad\Pi_{abc}^{\alpha}=\frac{1}{4}\eta_{\mu\nu}\Pi_{abc}^{\mu\nu\alpha} \tag{24}\] and used the fact that \(p_{3}=p_{1}+p_{2}\), to get \[-i\Pi_{abc}^{\alpha} = \frac{g^{3}f_{abc}C_{A}\left(-9\xi_{A}-4N_{s}+17\right)(p_{1}-p_{2})^{\alpha}}{256\pi^{2}\epsilon}-\frac{3}{4}Z_{3g}^{(1)}g\left(p_{1}-p_{2}\right){}^{\alpha}f_{abc}+O(p^{2})+\mbox{finite}, \tag{25}\] Through MS, we impose finiteness and find \[Z_{3g}^{(1)}=-\frac{g^{2}C_{A}\left(9\xi_{A}+4N_{s}-17\right)}{192\pi^{2}\epsilon}. \tag{26}\]

Figure 6: Feynman diagrams for the three-gluon vertex interaction at one-loop order.

Now, we consider the scattering of four gluons (Fig. 7, shown at the end of the paper for convenience). Since the interaction of four gluons has no derivatives, the \(Z_{4g}\) counterterm will renormalize terms proportional to \(p^{0}\), and therefore we can set the external momentum equal to zero if we restrict ourselves to the computation of this counterterm. 
Also, for simplicity, we have used the scalar projection \[\Gamma_{abcd}=\frac{1}{16}\eta_{\mu\nu}\eta_{\rho\sigma}\Gamma_{abcd}^{\mu\nu \rho\sigma}, \tag{27}\] to obtain the expression for the gluon 4-point function \[-i\Gamma_{abcd} = -\bigg{(}\frac{iC_{A}g^{4}\left(N_{s}+3\xi_{A}-2\right)}{32\pi^{2 }\epsilon}+\frac{3}{2}iZ_{4g}^{(1)}g^{2}\bigg{)}\Big{(}\text{tr}(t_{a}t_{b}t_{ c}t_{d})-2\text{tr}(t_{a}t_{c}t_{b}t_{d})-2\text{tr}(t_{b}t_{c}t_{a}t_{d}) \tag{28}\] \[+\text{tr}(t_{b}t_{a}t_{c}t_{d})+\text{tr}(t_{c}t_{a}t_{b}t_{d}) +\text{tr}(t_{c}t_{b}t_{a}t_{d})\Big{)},\] Then, again imposing finiteness through MS, we have \[Z_{1_{4g}}^{(1)}=-\frac{C_{A}g^{2}\left(N_{s}+3\xi_{A}-2\right)}{48\pi^{2} \epsilon}. \tag{29}\] The other 4-point function involves two scalars and two gluons (Fig. 8, again showed at the end of the paper for convenience). For this vertex, we use the following projection \[\Pi_{abcd}^{\mu\nu}=\eta^{\mu\nu}\Pi_{abcd}\qquad\Rightarrow\qquad\Pi_{abcd}= \frac{1}{4}\eta_{\mu\nu}\Pi_{abcd}^{\mu\nu} \tag{30}\] and then we have \[\Pi_{abcd} =\bigg{(}\frac{ig^{2}-3C_{A}\left(\xi_{A}-1\right)g^{2}-2(\xi_{h} -2)\kappa^{2}m^{2}}{16\pi^{2}\epsilon}-2iZ_{2g}^{(1)}g^{2}\bigg{)}\Big{(}2 \text{tr}(t_{a}t_{b}t_{c}t_{d})-\text{tr}(t_{a}t_{c}t_{b}t_{d}) \tag{31}\] \[-\text{tr}(t_{b}t_{a}t_{c}t_{d})-\text{tr}(t_{b}t_{c}t_{a}t_{d}) -\text{tr}(t_{c}t_{a}t_{b}t_{d})+2\text{tr}(t_{c}t_{b}t_{a}t_{d})\Big{)}.\] and the counterterm is found to be \[Z_{2g}^{(1)}=-\frac{3C_{A}\left(\xi_{A}-1\right)g^{2}-2(\xi_{h}-2)\kappa^{2}m ^{2}}{32\pi^{2}\epsilon}. \tag{32}\] From Eqs. (14a), (17a), (19), (21), (23), (26), (29) we conclude that \[Z_{1}^{(1)}-Z_{2s}^{(1)}=Z_{3g}^{(1)}-Z_{3}^{(1)}=\frac{1}{2} \left(Z_{4g}^{(1)}-Z_{3}^{(1)}\right)=\frac{1}{2}\left(Z_{2g}^{(1)}-Z_{2s}^{( 1)}\right)=Z_{1c}^{(1)}-Z_{2c}^{(1)}=-\frac{C_{A}g^{2}(3+\xi_{A})}{64\pi^{2} \epsilon} \tag{33}\] so the Slavnov-Taylor identities [38; 39] are indeed respected and thus gravitational interaction does not spoil the gauge symmetry. This result allows us to define a global color charge. Moreover, we can show that the beta function is independent of \(\kappa\) and \(m\), as the expression the one-loop beta function of the color charge can be found through the relations between the renormalized coupling constants and the counterterms given by \[g = \mu^{-2\epsilon}\frac{Z_{2s}Z_{3}^{1/2}}{Z_{1}}g_{0}; \tag{34a}\] \[g = \mu^{-2\epsilon}\frac{Z_{3}^{3/2}}{Z_{3g}}g_{0};\] (34b) \[g = \mu^{-2\epsilon}\frac{Z_{3}}{Z_{4g}^{1/2}}g_{0};\] (34c) \[g = \mu^{-2\epsilon}\frac{Z_{2c}Z_{3}^{1/2}}{Z_{1c}}g_{0};\] (34d) \[g = \mu^{-2\epsilon}\frac{Z_{2}^{1/2}Z_{3}^{1/2}}{Z_{2g}^{1/2}}g_{0}. \tag{34e}\] Therefore, the beta function for the color charge is \[\beta(g) = \lim_{\epsilon\to 0}\mu\frac{dg}{d\mu}=\lim_{\epsilon\to 0}\mu\frac{d}{d\mu}\left[g_{0} \left(1-Z_{1}^{(1)}+Z_{2s}^{(1)}+\frac{Z_{3}^{(1)}}{2}\right)\mu^{-2\epsilon}\right] \tag{35}\] \[= -\frac{g^{3}}{(4\pi)^{2}}\left(\frac{11}{3}C_{A}-\frac{2}{6}N_{s }\right).\] The observed outcome is gauge-independent, a characteristic that was previously established via a functional approach in Ref.[40]. This property has also been verified in the context of the Effective Field Theory of gravity when coupled with fermionic QCD in [24]. As we can see, it does not depend on the mass, so our choice to make all masses the same does not affect our result for the beta function at one-loop order. On the other hand, as discussed in [23], at two-loop we would expect a \(\sum_{i}\kappa^{2}m_{i}^{2}\) term. 
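Because the gravitational sector drops out of Eq. (35), the one-loop running of the color charge integrates to the usual closed form \(1/g^{2}(\mu)=1/g^{2}(\mu_{0})+\frac{b_{0}}{8\pi^{2}}\ln(\mu/\mu_{0})\), with \(b_{0}\) the coefficient in parentheses in Eq. (35). The short sketch below evaluates this running numerically; it is purely illustrative, the input values of \(C_{A}\), \(N_{s}\), and the reference coupling are assumptions, and \(b_{0}\) is taken exactly as printed in Eq. (35).

```python
import numpy as np

def g_running(mu, mu0, g0, C_A=3.0, N_s=1.0):
    """One-loop running of the color charge.

    Integrates beta(g) = -g^3 b0 / (16 pi^2) with the coefficient
    b0 = 11/3 * C_A - 2/6 * N_s taken as printed in Eq. (35); the
    gravitational sector does not contribute at this order.
    """
    b0 = 11.0 / 3.0 * C_A - 2.0 / 6.0 * N_s
    inv_g2 = 1.0 / g0**2 + b0 / (8.0 * np.pi**2) * np.log(mu / mu0)
    return 1.0 / np.sqrt(inv_g2)

# illustrative numbers only: g = 1 at mu0, evaluated one decade higher in scale
print(g_running(mu=10.0, mu0=1.0, g0=1.0))   # smaller than 1: asymptotically free
```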
It is needed to stress here the importance of a regularization scheme that preserves the symmetries of the model. In fact, the authors in Ref.[40] showed that in the weak-gravity limit there is no gravitational contribution at one-loop order if the regularization scheme preserves the symmetries of the model, such as dimensional regularization. On the other hand, if the regularization scheme does not preserve all the symmetries, there will be a negative contribution to the beta function (as seen in [11]). ## IV Two-loop gluon self-energy This section presents the computation of the two-loop gluon self-energy and its renormalization. TARCER [41], in combination with previously cited _Mathematica_ packages, is utilized for this computation. TARCER implements the Tarasov algorithm for the reduction of two-loop scalar propagator type integrals with external momentum and arbitrary masses [42]. The Feynman and harmonic gauges (\(\xi_{A}=\xi_{h}=1\)) are used for simplicity, and the analysis is limited to the case in which there is only one scalar particle (\(N_{s}=1\)). The Feynman diagrams we need to compute are showed in Fig. 9. Due to gauge invariance, our result can be expressed as \[\Pi_{\mu\nu}^{(2)}=\left(p^{2}g_{\mu\nu}-p_{\mu}p_{\nu}\right)\Pi^{(2)}, \tag{36}\] where the function \(\Pi^{(2)}\) is a scalar function that can be expressed in terms of a set of basic integrals. To present the results in a simplified manner, we will adopt a notation similar to the one used in the original TARCER paper [41] for the basic integrals that will be utilized, \[{\bf A}_{\nu}(m)=\frac{1}{\pi^{D/2}}\int\frac{d^{D}k}{[k^{2}-m^{2 }]^{\nu}} \tag{37a}\] \[{\bf B}_{\nu_{1},\nu_{2}}(m_{1},m_{2})=\frac{1}{\pi^{D/2}}\int \frac{d^{D}k}{[k^{2}-m_{1}^{2}]^{\nu_{1}}[(k-p)^{2}-m_{2}^{2}]^{\nu_{2}}}\] (37b) \[{\bf J}_{\nu_{1},\nu_{2},\nu_{3}}(m_{1},m_{2},m_{3})=\frac{1}{ \pi^{D}}\int\frac{d^{D}k_{1}d^{D}k_{2}}{[k_{1}^{2}-m_{1}^{2}]^{\nu_{1}}[k_{5}^ {2}-m_{2}^{2}]^{\nu_{2}}[k_{4}^{2}-m_{3}^{2}]^{\nu_{3}}}\] (37c) \[{\bf F}_{\nu_{1},...,\nu_{5}}(m_{1},...,m_{5})=\frac{1}{\pi^{D}} \int\frac{d^{D}k_{1}d^{D}k_{2}}{[k_{1}^{2}-m_{1}^{2}]^{\nu_{1}}[k_{2}^{2}-m_{ 2}^{2}]^{\nu_{2}}[k_{3}^{2}-m_{3}^{2}]^{\nu_{3}}[k_{4}^{2}-m_{4}^{2}]^{\nu_{4} }[k_{5}^{2}-m_{5}^{2}]^{\nu_{5}}}, \tag{37d}\] in which \(p\) is the external momentum and we introduced \(k_{3}=k_{1}-p\), \(k_{4}=k_{2}-p\), and \(k_{5}=k_{1}-k_{2}\). Therefore, we can write \[\Pi^{(2)} = c_{1}\ {\bf A}_{1}(m)\ {\bf B}_{1,1}(0,0)+c_{2}\ {\bf A}_{1}(m)\ {\bf B}_{1,1}(m,m)+c_{3}\ {\bf B}_{1,1}(0,0)\ {\bf B}_{1,1}(m,m)+c_{4}\left({\bf A}_{1}(m)\right)^{2} \tag{38}\] \[c_{5}\left({\bf B}_{1,1}(0,0)\right)^{2}+c_{6}\left({\bf B}_{1,1} (m,m)\right)^{2}+c_{7}\ {\bf J}_{1,1,1}(0,0,0)+c_{8}\ {\bf J}_{1,1,1}(m,m,0)+c_{9}\ {\bf J}_{2,1,1}(m,m,0)\] \[c_{10}\ {\bf F}_{1,1,1,1,1}(0,m,0,m,m)+c_{11}\ {\bf F}_{1,1,1,1,1}(m,0,m,0,m).\] All of the aforementioned integrals are established and can be found in Refs.[43; 44], and the coefficients \(c_{i}\) are presented in appendix A. As we are only concerned with the renormalization of the gluon wave-function, we expand Eq.(38) around \(p=0\) and retain only terms proportional to \(p^{0}\). Higher powers in the external momentum will be renormalized by higher-order terms. 
Thus, we obtain: \[\Pi^{(2)} = -\frac{i\lambda C_{A}\ g^{2}}{384\pi^{4}\epsilon}-\frac{i\kappa^ {2}m^{2}C_{A}\ g^{2}}{256\pi^{4}\epsilon}+\frac{iC_{A}^{2}\ g^{4}\log\left(m^{2 }\right)}{384\pi^{4}\epsilon}-\frac{iC_{A}^{2}\ g^{4}\log\left(-p^{2}\right)}{6 4\pi^{4}\epsilon}-\frac{i\lambda C_{A}\ g^{2}}{384\pi^{4}\epsilon}+\frac{5i \gamma C_{A}^{2}\ g^{4}}{384\pi^{4}\epsilon} \tag{39}\] \[+\frac{17iC_{A}^{2}\ g^{4}}{576\pi^{4}\epsilon}+\frac{5i\log(4 \pi)C_{A}^{2}\ g^{4}}{384\pi^{4}\epsilon}+\frac{5iC_{A}^{2}\ g^{4}}{768\pi^{4} \epsilon^{2}}+O(p)+\mbox{finite}.\] Now, we should compute the 1-loop diagrams with counterterms insertion in Fig. 10. By doing so, we obtain \[\Pi^{(2)}_{\mu\nu CT}=(p^{2}g_{\mu\nu}-p_{\mu}p_{\nu})\Pi^{(2)}_{CT}, \tag{40}\] where \[\Pi^{(2)}_{CT} = -\frac{iC_{A}^{2}\ g^{4}\log\left(m^{2}\right)}{384\pi^{4}\epsilon }+\frac{iC_{A}^{2}\ g^{4}\log\left(-p^{2}\right)}{64\pi^{4}\epsilon}-\frac{5iC _{A}^{2}\ g^{4}}{384\pi^{4}\epsilon^{2}}+\frac{i\lambda C_{A}\ g^{2}}{192\pi^{ 4}\epsilon}-\frac{5i\gamma C_{A}^{2}\ g^{4}}{384\pi^{4}\epsilon} \tag{41}\] \[-\frac{59iC_{A}^{2}\ g^{4}}{2304\pi^{4}\epsilon}-\frac{5i\log(4 \pi)C_{A}^{2}\ g^{4}}{384\pi^{4}\epsilon}+O(p)+\text{finite}.\] Therefore, we obtain that the two-loop gluon wave-function counterterm is given by \[Z^{(2)}_{3}=\frac{C_{A}^{2}g^{4}}{256\pi^{4}\epsilon}-\frac{5C_{A}^{2}g^{4}}{7 68\pi^{4}\epsilon^{2}}-\frac{\kappa^{2}m^{2}C_{A}g^{2}}{256\pi^{4}\epsilon}. \tag{42}\] ## V Concluding remarks In summary, we have evaluated the n-point functions for the Einstein-Scalar-QCD model and demonstrated that there are no gravitational corrections to the beta function of the color charge at one-loop order. Additionally, we have explicitly verified that the Slavnov-Taylor identities are preserved at this order of perturbation theory, indicating that the universality of the color charge is maintained. Lastly, we have computed the counterterm for the gluon wave-function at two-loop order. It is important to contextualize our results and compare them with previous research. To this end, we will follow the discussion in [45] and highlight some distinctions between our findings and theirs. One such difference lies in the adoption of a distinct regularization scheme. In reference [26], it is argued that there are three primary concerns that should be considered when working with quantum gravity: gauge invariance, gauge conditions introduced in the quantization process, and the ability of the method to regulate any type of divergence. It was further argued that although dimensional regularization (DR) satisfies the first two requirements, it cannot handle more than logarithmic divergences. Therefore, Tang and Wu employed the Loop Regularization method (LP) in their studies [26; 27] to regulate the divergences. This method is capable of dealing with the quadratic divergences that appear in the Feynman diagrams. The authors used LP to compute the beta functions of the Einstein-Yang-Mills theory and compared the results with those obtained using DR. They found that while using DR leads to no gravitational contribution at one-loop, the use of LP leads to a contribution that is proportional to \(\mu^{2}\). It is a fundamental requirement that physical results should not depend on the choice of the regularization scheme. Anber pointed out in [20] that the quadratic divergences are not relevant when using the S-matrix, which is a physical quantity. 
Moreover, Toms demonstrated in [46] that it is possible to define the electrical charge in quantum gravity using the background field method in a physically meaningful way that is not influenced by the quadratic divergences. Therefore, such contributions should be regarded as unphysical and should not be included in the evaluation of the running coupling. An intriguing avenue for further investigation pertains to the existence of a non-Abelian scalar particle serving as a potential dark matter candidate, as well as the implications of quantum gravity for dark matter. In the study conducted in Ref.[32], the potential ramifications of quantum gravity on dark matter models were explored. It was demonstrated that quantum gravity would give rise to a fifth force-like interaction, setting a lower limit on the masses of bosonic dark matter candidates. The authors also argued that, due to the influence of quantum gravity, these potential candidates would decay. However, given the ongoing observation of dark matter in the present universe, the authors were able to calculate an upper bound on the mass of a scalar singlet dark matter particle. In our future work, we intend to investigate the mass range for a non-Abelian scalar dark matter candidate, as presented in our study. In such a scenario, the fifth force-like interaction would also be non-Abelian in nature. This particular scenario was discussed in [31]. In our future endeavors, we plan to investigate the dynamics of the renormalized coupling constant in non-Abelian gauge theories, considering the presence of fermions and scalars coupled to gravity at the two-loop level. This investigation will involve an expansion of our research to incorporate modified theories of gravity, such as quadratic gravity [48; 49; 50; 51]. Drawing on the qualitative analysis presented in [24], we expect that modified theories of gravity, characterized by unconventional properties such as repulsive gravity under specific regimes, could potentially impact the behavior of the beta function. These modified gravity theories introduce additional gravitational interactions and might influence the running of the coupling constant in non-Abelian gauge theories, leading to intriguing and novel phenomena. ###### Acknowledgements. The work of HS is partially supported by Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES). ## Appendix A Two-loop coefficients In this section we present the two-loop coefficients for the two-loop gluon self-energy from Eq. (38). 
\[c_{1} = -\frac{i\left(D^{4}-10D^{3}+35D^{2}-50D+24\right)C_{A}g_{s}^{2}}{960 (D-4)(D-3)(D-1)^{2}m^{4}}(-4C_{A}g_{s}^{2}(20\left(2D^{2}-3D-11\right)m^{2} \tag{43a}\] \[+\left(2D^{2}-11D+12\right)p^{2})-5\left(D^{2}-8D+12\right) \kappa^{2}m^{2}\left((D-8)p^{2}-48m^{2}\right));\] \[c_{2} = -\frac{iC_{A}g_{s}^{2}}{16(D-4)(D-3)(D-1)^{2}m^{2}p^{2}}(-64(D-1) ^{2}\left(D^{2}-7D+12\right)\lambda m^{2}\] (43b) \[+8(D-1)C_{A}g_{s}^{2}\left(4\left(D^{3}-8D^{2}+19D-16\right)m^{2} +(D-2)Dp^{2}\right)+2D^{6}\kappa^{2}m^{4}-18D^{5}\kappa^{2}m^{4}\] \[-D^{5}\kappa^{2}m^{2}p^{2}+22D^{4}\kappa^{2}m^{4}-64D^{4}\lambda m ^{2}+23D^{4}\kappa^{2}m^{2}p^{2}+262D^{3}\kappa^{2}m^{4}+576D^{3}\lambda m^{2}\] \[-196D^{3}\kappa^{2}m^{2}p^{2}-1124D^{2}\kappa^{2}m^{4}-1728D^{2} \lambda m^{2}+696D^{2}\kappa^{2}m^{2}p^{2}+8D^{2}\kappa^{2}p^{4}+1712D\kappa^{ 2}m^{4}\] \[+1984D\lambda m^{2}-1048D\kappa^{2}m^{2}p^{2}-24D\kappa^{2}p^{4}- 928\kappa^{2}m^{4}-768\lambda m^{2}+544\kappa^{2}m^{2}p^{2}+16\kappa^{2}p^{4});\] \[c_{3} = -\frac{i\left(D^{3}-8D^{2}+19D-12\right)C_{A}g_{s}^{2}\left(2C_{A }g_{s}^{2}+\kappa^{2}\left(2(D-2)m^{2}-(D-4)p^{2}\right)\right)}{2(D-4)(D-3)(D- 1)^{2}};\] (43b) \[c_{4} = \frac{i\left(3D^{4}-40D^{3}+180D^{2}-320D+192\right)C_{A}g_{s}^{2} }{960(D-6)(D-5)(D-4)^{2}(D-3)(D-2)(D-1)^{2}(3D-4)m^{4}p^{4}}(-1920(D-1)^{2}(D^ {4}-14D^{3}\] (44c) \[+71D^{2}-154D+120)\lambda m^{2}p^{2}+4\left(D^{2}-3D+2\right)C_{A }g_{s}^{2}\big{(}\big{(}2D^{3}-19D^{2}+54D-45\big{)}\left(D-4\right)^{2}p^{4}\] \[+32\left(4D^{5}-48D^{4}+113D^{3}+616D^{2}-3099D+3470\right)m^{4}+ 4(8D^{5}-40D^{4}-281D^{3}+2224D^{2}\] \[-4899D+3924)m^{2}p^{2})+5(D-5)m^{2}p^{2}\big{(}\big{(}D^{2}-3D+2 \big{)}\left((D^{5}-23D^{4}+200D^{3}-820D^{2}+1584D\right.\] \[\left.-1056\right)\kappa^{2}p^{2}-384\left(D^{3}-8D^{2}+19D-12 \right)\lambda)+4(5D^{7}-113D^{6}+1052D^{5}-5122D^{4}+13896D^{3}\] \[-20896D^{2}+16032D-4800)\kappa^{2}m^{2}));\] \[c_{5} = \frac{iC_{A}g_{s}^{2}}{128(D-4)(D-1)^{2}}(64\left(D^{3}-5D^{2}+2D+ 2\right)C_{A}g_{s}^{2}+(-24D^{5}+497D^{4}-3680D^{3}+12984D^{2}\] (44d) \[-21560D+11840);\kappa^{2}p^{2})\] \[c_{6} = \frac{iC_{A}g_{s}^{2}}{64(D-4)(D-1)^{2}p^{2}}(\kappa^{2}(16\left( D^{3}-10D^{2}+36D-36\right)m^{4}-8\left(D^{3}-10D^{2}+48D-48\right)m^{2}p^{2}\] (45e) \[+\left(D^{3}-10D^{2}+64D-64\right)p^{4})-128(D-1)C_{A}g_{s}^{2} \left(2m^{2}-p^{2}\right));\] \[c_{7} = -\frac{iC_{A}g_{s}^{2}}{48(D-6)(D-4)^{2}(D-1)(3D-4)p^{2}}(24(9D^{6 }-189D^{5}+1364D^{4}-4756D^{3}+9280D^{2}\] (46f) \[-10336D+4992)C_{A}g_{s}^{2}+(6D^{8}-35D^{7}-2454D^{6}+39327D^{5}-2 40012D^{4}+695044D^{3}\] \[-915664D^{2}+366464D+98304)\kappa^{2}p^{2});\] \[c_{8} = -\frac{iC_{A}g_{s}^{2}}{480(D-4)(D-2)(D-1)m^{2}p^{4}}(4(D-2)C_{A} g_{s}^{2}(32\left(12D^{4}-92D^{3}-41D^{2}+1577D-2776\right)m^{4}\] (47g) \[+4\left(24D^{4}-172D^{3}+273D^{2}+193D-516\right)m^{2}p^{2}+ \left(6D^{4}-67D^{3}+271D^{2}-468D+288\right)p^{4})\] \[+5\kappa^{2}m^{2}p^{2}(4\left(6D^{6}-213D^{5}+2417D^{4}-12716D^{3}+ 34112D^{2}-45272D+23616\right)m^{2}\] \[+\left(3D^{6}-63D^{5}+518D^{4}-2092D^{3}+4296D^{2}-3968D+1024 \right)p^{2}));\] \[c_{9} = \frac{iC_{A}g_{s}^{2}}{480(D-4)(D-3)(D-2)(D-1)m^{2}p^{4}}(4(D-2)C_{A} g_{s}^{2}(240\left(7D^{2}-57D+100\right)m^{4}p^{2} \tag{41}\] \[-(D-4)^{2}\left(2D^{2}-9D+9\right)p^{6}+128\left(4D^{4}-32D^{3}-7D ^{2}+548D-1041\right)m^{6}\] \[-4\left(6D^{4}-39D^{3}-22D^{2}+517D-876\right)m^{2}p^{4})+5\kappa^ {2}m^{2}p^{2}(16(2D^{6}-69D^{5}+789D^{4}-4236D^{3}\] \[+11684D^{2}-16012D+8664)m^{4}-4(D^{6}-44D^{5}+543D^{4}-3040D^{3}+ 8736D^{2}-12616D\] 
\[+7296)m^{2}p^{2}-\left(D^{6}-25D^{5}+246D^{4}-1220D^{3}+3224D^{2} -4416D+2496\right)p^{4}));\] \[c_{10} = \frac{i\kappa^{2}m^{2}C_{A}g_{s}^{2}\left(\left(D^{2}-6D+4\right) p^{2}-4(D-2)m^{2}\right)}{2(D-1)};\] (42) \[c_{11} = -\frac{iC_{A}g_{s}^{2}\left(C_{A}g_{s}^{2}\left(8m^{2}-p^{2} \right)+(D-2)\kappa^{2}m^{2}\left((D-4)p^{2}-8m^{2}\right)\right)}{2(D-1)}. \tag{43}\]
2302.04161
Masking Kernel for Learning Energy-Efficient Representations for Speaker Recognition and Mobile Health
Modern smartphones possess hardware for audio acquisition and to perform speech processing tasks such as speaker recognition and health assessment. However, energy consumption remains a concern, especially for resource-intensive DNNs. Prior work has improved the DNN energy efficiency by utilizing a compact model or reducing the dimensions of speech features. Both approaches reduced energy consumption during DNN inference but not during speech acquisition. This paper proposes using a masking kernel integrated into gradient descent during DNN training to learn the most energy-efficient speech length and sampling rate for windowing, a common step for sample construction. To determine the most energy-optimal parameters, a masking function with non-zero derivatives was combined with a low-pass filter. The proposed approach minimizes the energy consumption of both data collection and inference by 57%, and is competitive with speaker recognition and traumatic brain injury detection baselines.
Apiwat Ditthapron, Emmanuel O. Agu, Adam C. Lammert
2023-02-08T16:13:28Z
http://arxiv.org/abs/2302.04161v2
# Masking Kernel for Learning Energy-Efficient Speech Representation ###### Abstract Modern smartphones are equipped with powerful audio hardware and processors, allowing them to acquire and perform on-device speech processing at high sampling rates. However, energy consumption remains a concern, especially for resource-intensive DNNs. Prior mobile speech processing reduced computational complexity by compacting the model or reducing input dimensions via hyperparameter tuning, which reduced accuracy or required more training iterations. This paper proposes gradient descent for optimizing energy-efficient speech recording format (length and sampling rate). The goal is to reduce the input size, which reduces data collection and inference energy. For a backward pass, a masking function with non-zero derivatives (Gaussian, Hann, and Hamming) is used as a windowing function and a lowpass filter. An energy-efficient penalty is introduced to incentivize the reduction of the input size. The proposed masking outperformed baselines by 8.7% in speaker recognition and traumatic brain injury detection using 49% shorter duration, sampled at a lower frequency. Apiwat Ditthapron, Emmanuel O. Agu and Adam C. Lammert+Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609,USA window function, energy efficiency, deep learning, speaker recognition, TBI detection Footnote †: This material is based on research funded by DARPA under agreement number FA8750-18-2-0077. ## 1 Introduction Speech processing hardware embedded into smartphones facilitates various speech tasks such as voice authentication, automatic speech recognition, and health assessment on the device either as a short session or continuously. Most state-of-the-art speech processing utilizes Deep Neural Networks (DNNs) for accurate analyses, with inference typically done either on a mobile device or a remote computing server. While on-device computing protects speaker privacy more than cloud computing, it utilizes more computing resources. Although computing resources on smartphones can perform real-time DNN inference, energy consumption remains an issue for deploying such high-performance DNNs, especially those that perform continuous processing. Previous work in mobile speech processing addressed limited energy problems by using energy-efficient hardware [1] including using a low-power processor with optimizations done on window size and model complexity [2], or using compact DNN models as in MobileNet [3]. Instead of the model and hardware-specific optimizations, our approach is to optimize the size of the speech input, which in turn reduces the energy consumption of data acquisition and DNN inference. DNNs for speech processing typically operate on a sequence of discrete signals referred to as chunks, containing \(n\times s\) frames from \(n\) seconds of audio sampled at \(s\) Hz. A chunk of speech ranges from 200 milliseconds (ms), in speech recognition, to 15 seconds, in depression detection [4, 5]. While longer speech sampled at a higher sampling frequency demonstrates improved recognition performance [2], its practical application is often bounded by energy consumption, since recording and processing 1) longer speech durations at 2) higher sampling rates consume more energy [6]. The size of a chunk is typically optimized as hyperparameters across multiple trainings [2], usually over a finite set of values in grid-search or using Bayesian optimization, which learns an optimal hyper-parameter on an open set of values [7]. 
To optimize the shape of the input signal as energy-efficient parameters, this study minimizes the speech duration \(m\) and sampling rate \(s\) via masking during DNN training, while simultaneously learning the other DNN parameters (\(\theta\)). Additional windowing and down-sampling layers are placed at the beginning of the DNN so that \(m\) and \(s\) are optimized together with \(\theta\) on a computing server, with inference then running natively on a mobile device. Gaussian, Hamming, Hann, and Tukey windows [8] were considered as masks during back-propagation to facilitate learning of \(m\) in the windowing layer. In contrast to the traditional use of masking (referred to as soft-masking), we apply a binary step as hard-masking to construct a discrete window. In the down-sampling layer, the binary step is used as a masking function to learn the appropriate signal bandwidth in the discrete Fourier transform. A learning approach that discovers the DNN architecture in an end-to-end model was previously proposed in Automated Machine Learning (AutoML). Utilizing only training data, AutoML transforms the architecture of each DNN layer into differentiable functions, such as masks, which can be back-propagated via gradient descent. Flexconv [9] proposed using a Gaussian function as a mask on the convolution weights to learn the kernel size for the image recognition task. Diff-Stride [10] proposed masking during back-propagation to learn the scale factor of the pooling layer. Searching for optimal architectures using AutoML achieves performance superior to hyper-parameter tuning. The uses of masking in previous work are similar to ours, but the main distinctions are in the learning objective and methodology: our method applies masking to the input to optimize the input format, not to the weights to optimize the model architecture as in AutoML. An energy-efficient penalty is introduced to prevent \(m\) and \(s\) from expanding, thereby reducing, on a linear scale, the amount of energy required for inference and data recording [6]. Our proposed method is able to reduce the energy used for inference while minimizing performance loss. We evaluated the windowing layer, down-sampling layer, and energy-efficient penalty at the window level (a session of one speech chunk) and the sentence level (a session of multiple speech chunks) for the speaker recognition task, and for the continuous TBI detection task (continuous processing). The energy used for DNN inference is significantly reduced in all three scenarios, whereas the energy expenditure during data acquisition is reduced only in the first scenario. Moreover, we show that our proposed method outperforms or is competitive with conventional hyperparameter tuning methods in terms of accuracy on a significantly reduced size of speech input. ## 2 Proposed Method The optimization of the window size and the sampling rate is accomplished via back-propagation through the windowing layer \(\mathcal{W}_{m}\) and the down-sampling layer \(\mathcal{D}_{s}\). The parameters of these two layers (\(m,s\)) are learned jointly with the other parameters (\(\theta\)) of the DNN but are controlled by the energy-efficient penalty \(\mathcal{J}\) in order to minimize the size of the speech sample.
Given a speech model \(\mathcal{F}_{\theta}(x)\) with a loss function \(\mathcal{L}(x,y)\), the parameters are optimized on a dataset \(\{x_{i},y_{i}\}_{i=0}^{P}\) by \(\operatorname*{argmin}_{\theta,m,s}\sum_{i=0}^{P}\mathcal{L}\big{(}\mathcal{F}_{\theta}(\mathcal{W}_{m}(\mathcal{D}_{s}(x_{i}))),y_{i}\big{)}+\mathcal{J}(m,s)\). **Windowing layer**: Let \(x_{i}\in\mathbb{R}^{N}\) be a speech sample composed of \(N\) frames, where \(N\) is the upper bound of the window length. Windowing should pass only a segment of length \(m\) (\(\lfloor\frac{N-m}{2}\rfloor\leq n\leq\lfloor\frac{N+m}{2}\rfloor\)) into the DNN. A rectangular window, the standard method for segmenting the signal for a DNN, has value 1 within the length \(m\) and 0 everywhere else, resulting in zero derivatives. To learn \(m\) via gradient descent, the derivatives of the masking function must be non-zero. This study proposes hard-masking, a learnable rectangular window that uses functions peaked at the window center during back-propagation. The functions considered are the Gaussian, Hamming, Hann and Tukey windows [8], which are well known in signal analysis. The Gaussian window function has a mean of \(\lfloor\frac{N-1}{2}\rfloor\) and a learnable variance \(\sigma^{2}\). This study defines \(\sigma^{2}\) in terms of the window length \(m\) at which the function approaches zero (\(m^{2}=-8\log(\epsilon)\sigma^{2}\), \(\epsilon=10^{-5}\)), giving \(w_{G}(n;m)=\exp(4\log(\epsilon)(n-\lfloor\frac{N-1}{2}\rfloor)^{2}/m^{2})\). The Hamming and Hann windows are defined as \(w_{HM}(n;m)=0.54-0.46\cos(2\pi(n-\lfloor\frac{N-m}{2}\rfloor)/(m-1))\) and \(w_{HN}(n;m)=0.5-0.5\cos(2\pi(n-\lfloor\frac{N-m}{2}\rfloor)/(m-1))\), respectively. The Tukey window is also included as a tapered-cosine variant of \(w_{HN}\). A window \(w(n;m)\) is applied to \(x(n)\) to attenuate values outside the window. We call the output of this operation soft-masking and consider it a baseline for hard-masking. To create the hard-masking \(\mathcal{W}_{m}\), a value of 1 is assigned to the non-zero values of the soft mask. The hard-masking derivative \(\delta\mathcal{L}/\delta m\) is computed by applying a straight-through estimator [11] to \(w\). **Down-sampling layer:** The down-sampling layer \(\mathcal{D}_{s}\) applies masking in the frequency domain, resampling \(x\) to \(2s\) Hz. The discrete Fourier transform \(X(\hat{n})=\mathrm{FFT}(x(n))\) is computed with the Fast Fourier Transform (FFT); due to Hermitian symmetry, the negative-frequency terms can be disregarded. A rectangular mask is used as a low-pass filter to zero the frequency bins above \(s\). To reduce artifacts from the rectangular mask and to allow back-propagation, a linear transition of width \(r\) is applied around the cutoff frequency, so the mask is \(w_{r}(n;s,r)=\min(1,\max(-\frac{n-s}{r},0))\) for \(0\leq n\leq N\), as visualized in Fig. 2. After applying \(w_{r}(\hat{n};s,r)\) to \(X(\hat{n})\), \(x(n)\) is band-limited and resampled by taking the inverse FFT over the DFT bins between \(0\) and \(s\) Hz only, i.e., \(\mathcal{D}_{s}=\mathrm{iFFT}(X(\hat{n})\odot w_{r}(\hat{n};s,r))\), where \(0<\hat{n}\leq s\). **Energy-efficient penalty:** A penalty term is introduced to encourage the minimization of the window length and sampling rate, which, in turn, reduces the amount of energy required for data acquisition and inference.
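For concreteness, the two layers defined above can be sketched as follows before the energy-efficient penalty is specified next. This is a minimal PyTorch-style illustration of our own, not the authors' implementation; the class names `HardMaskWindow` and `SpectralDownsample`, the initial values, and the roll-off default are placeholders.

```python
# Minimal sketch of the windowing and down-sampling layers described above.
# Forward passes use the hard/binary masks; gradients flow through the soft masks
# (straight-through estimator), so m and s can be learned jointly with theta.
import math

import torch
import torch.nn as nn


class HardMaskWindow(nn.Module):
    """Learnable window length m: binary mask forward, Gaussian soft mask backward."""

    def __init__(self, max_len: int, init_len: float, eps: float = 1e-5):
        super().__init__()
        self.max_len, self.eps = max_len, eps
        self.m = nn.Parameter(torch.tensor(float(init_len)))  # window length in frames

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, max_len)
        n = torch.arange(self.max_len, device=x.device, dtype=x.dtype)
        center = (self.max_len - 1) // 2
        # Gaussian soft mask w_G(n; m), decaying to eps at +- m/2 around the center
        soft = torch.exp(4.0 * math.log(self.eps) * (n - center) ** 2 / self.m ** 2)
        hard = (soft > self.eps).to(x.dtype)
        mask = hard + soft - soft.detach()  # value: hard mask, gradient: soft mask
        return x * mask


class SpectralDownsample(nn.Module):
    """Learnable cutoff s (in DFT bins): low-pass mask with a linear roll-off r."""

    def __init__(self, n_fft: int, init_cut: float, roll_off: float = 16.0):
        super().__init__()
        self.n_fft, self.r = n_fft, roll_off
        self.s = nn.Parameter(torch.tensor(float(init_cut)))  # cutoff bin

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, n_fft)
        X = torch.fft.rfft(x, n=self.n_fft)   # Hermitian symmetry: positive bins only
        k = torch.arange(X.shape[-1], device=x.device)
        w = torch.clamp(-(k - self.s) / self.r, min=0.0, max=1.0)  # w_r(n; s, r)
        # For simplicity this sketch only band-limits the signal; the paper additionally
        # keeps just the bins up to s, which shortens the sequence (true down-sampling).
        return torch.fft.irfft(X * w, n=self.n_fft)
```

During training, \(m\) and \(s\) receive gradients from the task loss through the soft masks; at deployment, only about \(m\) frames sampled at \(2s\) Hz would need to be recorded and processed, which is where the savings in acquisition and inference energy come from.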
The energy-efficient penalty \(\mathcal{J}(m,s)=\lambda\big{[}\frac{\max(m-\mu_{m},0)}{\mu_{m}}+\frac{\max(s- \mu_{s},0)}{\mu_{s}^{2}}\big{]}\tilde{\mathcal{L}}\) is incorporated into the loss function to penalize \(\tilde{\mathcal{L}}\) if m or s are increasing from their average values (\(\mu_{m}\),\(\mu_{s}\)) in the previous epoch. The penalty values are normalized and added proportionally to the value of \(\tilde{\mathcal{L}}\) (no gradient). \(\lambda\) is adjustable to control the penalty term. \(\mathcal{J}\) is clipped at zero to prevent exploding gradient. Figure 1: Windowing layer using hard-masking Figure 2: Masking \(w_{r}\) in down-sampling layer ## 3 Evaluation The proposed method was evaluated using a state-of-the-art DNN previously proposed for speaker recognition (short) and TBI detection (continuous, passive health assessment) tasks. Our implementation is publicly available 1. Footnote 1: [https://github.com/adithapron/windowMasking](https://github.com/adithapron/windowMasking) **1) Speaker recognition task:** The speaker recognition speech processing task tries to identify a speaker based on their voice characteristics. In smartphones, speaker identification is frequently performed as a snippet or as continuous authentication, which consumes significant energy [12]. _Dataset_: Text-independent speech from the TIMIT corpus was used to train and evaluate the model. Read speech in English was collected from 462 speakers at 16-bit with a sampling rate of 16kHz. All data preprocessing steps, including removing non-speech segments at the beginning and end, removing calibration sentences, and normalizing the amplitude, were performed similarly to [4]. The space between each window center was fixed to 10ms. \(M\) was set to 500ms. The split between training and testing was the same as in [4]. _Evaluation Metric:_ Classification Error Rate (CER) is reported at both the window and sentence levels. At the window level, the speaker with the highest negative log-likelihood is predicted, whereas, the negative log-likelihood from all windows is summed to make the prediction at the sentence level. The reduction of \(m\) in window-level speaker recognition means that the duration of speech necessary to collect was reduced. To assess the training's consistency, all evaluations were repeated ten times with random seeds of varying values. _Baseline:_ SincNet [4] was the baseline for speaker recognition. It replaces traditional convolutional weights with the Sinc function as the kernel in CNN layers. The model consists of one CNN layer with Sinc filters and two conventional CNN layers. After the CNN, the tensor is transformed into a one-dimensional tensor to classify the speaker. _Experiment:_ We extended SincNet [4] to learn energy-efficient parameters by including windowing and \(\mathcal{D}_{s}\) layers prior to SincNet layers. As the input shape changes throughout the learning process, the layers following the CNN were modified to only apply weights to the signal's valid length. **2) TBI detection task:** Frequently, impaired speech is considered a TBI biomarker that can be observed using smartphones, preventing fatalities and facilitating the recovery of TBI [13]. Even though smartphones enable passive, non-invasive monitoring of TBI, continuous speech assessment using a DNN is extremely energy-intensive. To enable TBI detection to run on smartphones, energy-efficient speech representations were learned using the proposed methods. 
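For completeness, the penalty entering the training loss in these experiments can be sketched as follows. This is our own minimal illustration rather than the released code; the function name, the default \(\lambda\), and the treatment of \(\mu_{m}\), \(\mu_{s}\) as plain floats are assumptions.

```python
# Minimal sketch (ours) of the energy-efficient penalty J(m, s) from Section 2,
# assuming PyTorch scalars for m and s and running averages mu_m, mu_s taken from
# the previous epoch.  lam is the adjustable weight; 0.5 is only a placeholder default.
import torch


def energy_penalty(task_loss: torch.Tensor, m: torch.Tensor, s: torch.Tensor,
                   mu_m: float, mu_s: float, lam: float = 0.5) -> torch.Tensor:
    over_m = torch.clamp(m - mu_m, min=0.0) / mu_m
    over_s = torch.clamp(s - mu_s, min=0.0) / mu_s ** 2  # normalization as written above
    # The loss factor is detached ("no gradient"), so the penalty only pushes m and s down.
    return lam * (over_m + over_s) * task_loss.detach()


# Usage: total_loss = task_loss + energy_penalty(task_loss, window.m, downsample.s, mu_m, mu_s)
```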
_Dataset:_ Speech collected during discourses following TBI, taken from the Coelho corpus [14], was used for evaluation. The Coelho corpus contains speech from story-retelling, story-generation, and conversation discourses from 55 subjects with non-penetrating head injuries and 52 subjects without head injuries. This evaluation considered the speech recorded in the conversation section. The pre-processing was done following [13], including 1) removing noisy audio from four subjects, 2) normalizing audio magnitude, and 3) vocal-tract length normalization. The speech was recorded with a sampling rate of 44.1 kHz, whereas [13] down-sampled the signal to 16 kHz. This study initialized \(s\) in the down-sampling layer to 22 kHz. _Metric:_ Balanced Accuracy (BA), \((\text{Sensitivity}+\text{Specificity})/2\), is reported using subject-level-split 10-fold cross-validation. _Baseline:_ The cascading Gated Recurrent Unit (cGRU) previously proposed for TBI detection from speech [13] was the baseline model. cGRU is a two-step DNN in which the first model extracts TBI features from 200 ms of speech using five CNN layers, and the second model applies a GRU to the stacked features from the first model for binary TBI classification. The CNNs were applied to 200 ms windows with an interval of 25 ms, and the GRU makes a TBI prediction over 4 s of speech. _Experiment:_ We integrated the proposed method into the cascading Gated Recurrent Unit (cGRU) model [13] to learn the energy-efficient input for the TBI detection task. Windowing and down-sampling layers were applied at the instance level (\(M=8\) s). Due to space constraints, ablation results are only reported for speaker recognition. **Model optimization baselines:** Grid-search and Bayesian model-based optimization were used to tune the input format. In grid-search, \(m\) and \(s\) were searched in the ranges [100, 300] ms and [6000, 8000] Hz for speaker recognition, and [2, 8] s and [6000, 22050] Hz for TBI detection, with ten evenly spaced values for each parameter interval. For Bayesian model-based optimization, the Tree-structured Parzen Estimator (TPE) [15] was utilized; TPE uses past evaluations of the hyperparameters to construct a probabilistic model over multiple iterations. We considered a low-pass filter with a learnable Sinc filter [4] as a baseline for the \(\mathcal{D}_{s}\) layer. **Energy-efficient metrics:** The number of parameters and the number of Multiply-Accumulates (MACs) have previously been shown to be effective estimates of DNN energy consumption at inference [16]. Our proposed method adds the window length \(m\) and sampling rate \(s\) as parameters to the model, but the MACs change significantly depending on the size of the input (\(m\times s\)). Energy consumption at inference can be reduced by lowering the MACs, whereas power consumption during speech recording can be reduced by lowering \(s\). We report the normalized MAC and training time as measures of the energy used at inference and of the time needed to obtain the energy-efficient parameters; both are normalized as ratios to grid-search. ## 4 Result **Speaker recognition:** Windowing functions are compared as hard-masking and soft-masking for speaker recognition in Fig. 3 (top). Only the hard mask is able to optimize \(m\), at approximately 120-200 ms. The Window-CER is significantly lower than that of the baseline trained on 200 ms of speech. From the plot of CER against window length (Fig. 4), the Hamming window is the most efficient at reducing the window length while maintaining the same error range as the Gaussian and Tukey windows.
The CER of the Hann window is lower than that of the other windows; however, this comes at the cost of \(m\) above 200 ms. Fig. 3 (bottom) compares the \(\mathcal{D}_{s}\) layer to the Sinc filter. \(\mathcal{D}_{s}\) is competitive with the Sinc filter but with a significantly lower sampling rate of 7.2 kHz. Together, the two proposed masking layers can be optimized using the energy-efficient penalty, as shown in Fig. 5. \(\lambda=0.5\) and \(1\) provide the most energy-efficient performance, reducing the window length to 118 ms and the sampling rate to 7.2 kHz while maintaining a low CER of 0.49. The comparison between the proposed method, grid-search, and TPE is shown in Table 1. The proposed method is capable of reducing the MAC (an indicator of the energy consumed for inference) by 49%, with superior performance at the window level and comparable performance at the sentence level. The energy used for data acquisition is also reduced by 40% for window-level speaker recognition. The improved CER may be due to the optimization mechanism of the windowing layer, which allows the other parameters in the model to learn on various receptive fields over the training epochs. This conjecture is supported by the fixed-values baseline, which trains the model with the same final \(m\) and \(s\) values as the proposed method yet yields inferior results. **TBI detection:** The TBI detection results are reported in Table 1. The best TBI BA is obtained using 3.14 s of speech sampled at 12.4 kHz, improving the grid-search result by 3.9%. The training time used to tune the model is significantly lower than that of TPE and is competitive with grid-search, which trained the DNNs in parallel. Energy consumption at inference is expected to be reduced by 26% compared to the baseline. Similar to the speaker recognition results, the windowing and \(\mathcal{D}_{s}\) layers allow the DNN to learn from different lengths and samplings of speech, which provides a better detection BA. ## 5 Conclusion DNN-based speech processing has the potential for impact but currently has high energy consumption, limiting the mobile deployment of state-of-the-art models. This study proposed optimizing the length and sampling rate of speech using a masking function during DNN back-propagation, which consequently reduces the energy consumption of speech acquisition and DNN inference. Our evaluation demonstrates that learning the speech format in an end-to-end model provides better performance than tuning the window length and sampling rate as hyperparameters. The power consumed for inference, as estimated from MAC operations, is reduced by up to 49% and 26% in the speaker recognition and TBI detection tasks, respectively, while maintaining high accuracy. Our proposed method has the limitation of requiring the subsequent DNN layers to operate on a tensor with a dynamic temporal dimension. This study modified the DNN fully-connected layer to have a flexible input size, which may create inconsistent loss across training epochs.
\begin{table} \begin{tabular}{l|c c|c c c} \hline \hline **Speaker** & \multicolumn{2}{c|}{**CER (\(\%_{k\to d}\))**} & \multicolumn{3}{c}{**Energy-efficient metrics**} \\ **classification** & Window & Sentence & \(m\)(ms) & s(Hz) & MAC & Training Time \\ \hline Hamming & **48.6\({}_{4}\)** & 1.02\({}_{1}\) & **118** & 8k & 0.58 & **0.96** \\ Hamming + \(\mathcal{D}_{s}\) & **49.1\({}_{3}\)** & 1.08\({}_{1}\) & 120 & 7.2k & **0.51** & 1.05 \\ Grid-search & 53.8\({}_{1}\) & **0.94\({}_{1}\)** & 200 & 8k & 1 & 1 \\ Fixed values & 56.0\({}_{1}\) & 1.24\({}_{2}\) & 120 & 7.2k & 0.51 & 0.64 \\ TPE & 52.80\({}_{7}\) & 1.29\({}_{1}\) & 272 & 7.6k & 0.87 & 22 \\ \hline **TBI detection** & **BA (\(\%_{k\to d}\))** & & & & & \\ \hline Hamming & 86.53\({}_{1,3}\) & **2.89\({}_{8}\)** & 8k & 0.79 & 1.05 \\ Hamming + \(\mathcal{D}_{s}\) & **87.12\({}_{1,4}\)** & 3.14s & **6.2k** & **0.74** & 1.02 \\ Grid-search & 83.82\({}_{1,4}\) & 4s & 8k & 1 & 1 \\ Fixed values & 82.62\({}_{1,1}\) & 3.14s & 6.2k & 0.74 & 0.78 \\ TPE & 81.90\({}_{1}\) & 3.94s & 8k & 1 & 18 \\ \hline MAC and training time are reported as a ratio to Grid-search. & & & & \\ \end{tabular} \end{table} Table 1: Speaker recognition and TBI detection results Figure 4: Trade-off between CER and window length Figure 5: Effect of energy-efficient penalty Figure 3: **TOP**: Window-CER between hard-masking and soft-masing, **BOTOM**: Window-CER using Down-sampling layer (DS) and Sinc filter
2308.11251
Twist-3 Contributions in Semi-Inclusive DIS in the Target Fragmentation Region
We present the complete results up to twist-3 for hadron production in the target fragmentation region of semi-inclusive deep inelastic scattering with a polarized lepton beam and polarized nucleon target. The nonperturbative effects are factorized into fracture functions. The calculation up to twist-3 is nontrivial since one has to keep gauge invariance. By applying collinear expansion, we show that the hadronic tensor can be expressed by gauge-invariant fracture functions. We also present the results for the structure functions and azimuthal asymmetries.
K. B. Chen, J. P. Ma, X. B. Tong
2023-08-22T07:48:27Z
http://arxiv.org/abs/2308.11251v2
# Twist-3 Contributions in Semi-Inclusive DIS in the Target Fragmentation Region ###### Abstract We present the complete results up to twist-3 for hadron production in the target fragmentation region of semi-inclusive deep inelastic scattering with a polarized lepton beam and polarized nucleon target. The non-perturbative effects are factorized into fracture functions. The calculation up to twist-3 is non-trivial since one has to keep gauge invariance. By applying collinear expansion, we show that the hadronic tensor can be expressed by gauge-invariant fracture functions. We also present the results for the structure functions and azimuthal asymmetries. ## I Introduction Semi-Inclusive Deep Inelastic Scattering (SIDIS) is an important process for hadronic physics. It provides a cleaner environment for detecting the inner structure of the initial hadron than inclusive processes in hadron-hadron collisions. The kinematic region of SIDIS can roughly be divided into two parts (see e.g., [1; 2; 3; 4; 5] for more discussions). One is called the Current Fragmentation Region (CFR) where the observed hadron in the final state moves into the forward region of the virtual photon. Another one is the Target Fragmentation Region (TFR), where the measured hadron predominantly travels in the forward direction of the incoming target. Events in both regions can be used to comprehend the internal structure of hadrons and the properties of strong interactions. So far, the bulk of research on SIDIS has focused on the CFR, where hadron production can be understood as the fragmentation of a parton emitted from the target and struck by the virtual photon. This allows us to investigate various parton distributions functions (PDFs) [6; 7; 8; 9] and fragmentation functions (FFs) [6; 10; 11] within the transverse-momentum-dependent (TMD) [12; 13; 14; 15; 16] or collinear factorization formalisms [17; 18; 19; 20; 21] at small or large hadron transverse momentum. While there have been significant developments in recent years for physics in the CFR [16; 22], the physics in the TFR has received less attention. The early analysis of the experimental data at HERA [23; 24] indicates a surprisingly high number of events in the TFR and has stimulated the introduction of fracture functions [25; 26; 27]. Physically, fracture functions describe the distributions of the struck parton inside the target when the remnant spectators fragment inclusively into the detected hadron. They encompass intricate initial-final state correlations and provide a unique perspective into the partonic dynamics and hadronization, complementing PDFs and FFs. Most of our current knowledge about fracture functions comes from the analysis of proton diffraction (see e.g., [28] for a recent review), where the final hadron coincides with the target proton, and the fracture function is conventionally called as diffractive PDF [26]. Phenomenological fittings of diffractive PDFs from HERA data [24; 29; 30; 31; 32; 33; 34; 35] have been conducted in [36; 37; 38; 39; 40; 41]. Fracture functions for other leading baryon production, such as neutron and \(\Lambda\)-hyperon, are also constrained with parameterization assumptions in [42; 43; 44; 45; 46] and [47; 48], respectively. Fracture functions are also utilized in hadron collisions in [49; 50; 51]. From a theoretical point of view, most of the aforementioned investigations about TFR hadron production are based on the factorization at twist-2 in terms of collinear fracture functions. 
This factorization has been proven to hold to all orders of \(\alpha_{s}\)[52] and has been confirmed through the explicit calculations up to \(\mathcal{O}(\alpha_{s}^{2})\)[53; 54; 55; 56]. Initially, the fracture functions in the factorization included an integration over the final-hadron transverse momentum \(P_{h\perp}\)[25], however, without this integration the momentum transfer [26; 27] and azimuthal-angle distribution [57; 58] can be studied. To further probe the spin [54; 57] and parton-transverse-momentum [57; 59; 60] dependence of fracture functions, several observables and factorization assumptions are proposed in [61; 62; 63]. CLAS collaboration at JLab has recently reported the first measurement of these dependences [64]. More detailed discussions on the factorization with TMD fracture functions and relevant evolution is presented in [51]. Furthermore, recent investigations have also addressed the factorization properties of fracture functions in different kinematic regions [51; 58; 65]. The small-\(x\) behavior of fracture functions is studied in [65]. In [58] the large-\(P_{h\perp}\) behavior is explored in detail, aiming to understand the transition of production mechanisms between the TFR and CFR in SIDIS. Despite these progresses, the contributions of SIDIS in the TFR beyond the leading twist are still unknown, and the theoretical framework for embracing higher-twist effects in the TFR has not yet been established. The importance of higher-twist effects in improving the description of the experimental data has already been emphasized by recent phenomenological studies of fracture functions [39; 66; 40]. Moreover, it is known that the absence of higher-twist effects results in the loss of predictions for fourteen SIDIS structure functions in the TFR at the tree level. At the leading twist, only four structure functions are nonzero [57] for the case of unpolarized hadron production and a spin-1/2 target [67; 68]. These higher-twist contributions are responsible for various intriguing azimuthal and spin asymmetries. Especially, some of these asymmetries are already within the reach of the ongoing experimental program by CLAS12 [69] at JLab due to the availability of a longitudinally polarized target [70; 71]. For instance, a preliminary investigation utilizing CLAS12 data has revealed the significance of the beam-spin asymmetry in the TFR, suggesting that its sign and magnitude could serve as a novel indicator for tracking the transition between the TFR and the CFR (Sec. 5.3 in [71]). Furthermore, the potential JLab@22GeV program [71] and the planned Electron-ion colliders in the U.S. [72; 73; 74; 75; 76] and China [77] are poised to provide additional exciting opportunities for exploring new frontiers in TFR physics. Given these experimental progresses, it is important to undertake an evaluation of the higher-twist contributions of SIDIS in the TFR. The objective of this paper is to present a first analysis of twist-3 contributions to SIDIS in the TFR within the framework of collinear factorization at the tree level of quantum chromodynamics (QCD) perturbation theory. Our focus is on the scenario where the target is spin-1/2 and the polarization of the final hadron is unobserved. The framework can be easily extended to the case of a spin-1 target. 
By employing the collinear expansion technique [78; 79; 80; 81; 82; 83; 84; 85; 86], we demonstrate that the hadronic tensor of SIDIS in the TFR can be expressed in terms of three distinct types of twist-3 collinear fracture functions. We discuss the classification of these fracture functions and show that they are not independent due to the constraints imposed by the QCD Equation of Motion (EOM). With the EOM, the twist-3 contributions can be expressed with two-parton fracture functions at the considered order. Our findings also have significant phenomenological implications. The rest of this paper is organized as follows. In section II, we discuss the kinematics for the polarized SIDIS in TFR and present the general form for the cross-section in terms of the structure functions. In section III, we present detailed calculations of the hadronic tensor up to twist-3. In section IV, we give the final results for the structure functions and azimuthal or spin asymmetries expressed by fracture functions. A short summary is given in section V. ## II Kinematics and structure functions of SIDIS in the TFR Through out this paper, we use the light-cone coordinate system, in which a vector \(a^{\mu}\) is expressed as \(a^{\mu}=(a^{+},a^{-},\vec{a}_{\perp})=\big{(}(a^{0}+a^{3})/\sqrt{2},(a^{0}-a^{ 3})/\sqrt{2},a^{1},a^{2}\big{)}\). With the light-cone vectors \(n^{\mu}=(0,1,0,0)\) and \(\bar{n}^{\mu}=(1,0,0,0)\), the transverse metric is defined as \(g^{\mu\nu}_{\perp}=g^{\mu\nu}-\bar{n}^{\mu}n^{\nu}-\bar{n}^{\nu}n^{\mu}\), and the transverse antisymmetric tensor is given as \(\varepsilon^{\mu\nu}_{\perp}=\varepsilon^{\mu\nu\alpha\beta}\bar{n}_{\alpha}n _{\beta}\) with \(\varepsilon^{12}_{\perp}=1\). We also use the notation \(\tilde{a}^{\mu}_{\perp}\equiv\varepsilon^{\mu\nu}_{\perp}a_{\perp\nu}\). We consider the SIDIS process with a polarized electron beam and nucleon target as follows: \[e(l,\lambda_{e})+h_{A}(P,S)\to e(l^{\prime})+h(P_{h})+X, \tag{1}\] where \(l\), \(l^{\prime}\), \(P\) and \(P_{h}\) are the 4-momenta of the incident, the outgoing electron, the nucleon target and the detected final state hadron, respectively. At the leading order of quantum electrodynamics, there is an exchange of one virtual photon between the electron and the nucleon. The momentum of the virtual photon is given by \(q=l-l^{\prime}\). The helicity of the electron is denoted by \(\lambda_{e}\), and \(S\) is the polarization vector of the nucleon. We consider the production of a spin-0 or unpolarized final state hadron \(h\). The Lorentz invariant variables of SIDIS are conventionally defined by \[Q^{2}=-q^{2},\ x_{B}=\frac{Q^{2}}{2P\cdot q},\ y=\frac{P\cdot q}{P\cdot k_{e}},\ z_{h}=\frac{P\cdot P_{h}}{P\cdot q}. \tag{2}\] We are interested in the TFR, where \(P_{h}\) is almost collinear with \(P\) and \(z_{h}\ll 1\). As discussed in [53; 57], \(z_{h}\) is not convenient for us to describe the hadron production in TFR, because one can not differentiate the scenario of TFR considered here from the soft-hadron production. Instead, we will use [4; 57] \[\xi_{h}=\frac{P_{h}\cdot q}{P\cdot q}. \tag{3}\] We work in the reference frame shown in Fig. 1, where the nucleon \(h_{A}\) moves along the \(+z\)-direction and the virtual photon moves in the \(-z\)-direction. 
In this frame, the momenta of the particles are given by \[P^{\mu}\approx(P^{+},0,0,0), \tag{4}\] \[P^{\mu}_{h}=(P^{+}_{h},P^{-}_{h},\vec{P}_{h\perp}),\] (5) \[l^{\mu}=\Big{(}\frac{1-y}{y}x_{B}P^{+},\ \frac{Q^{2}}{2x_{B}yP^{+}},\ \frac{Q\sqrt{1-y}}{y},\ 0\Big{)},\] (6) \[q^{\mu}=\Big{(}-x_{B}P^{+},\ \frac{Q^{2}}{2x_{B}P^{+}},0,0\Big{)}. \tag{7}\] For the case that the produced hadron \(h\) has small transverse momentum and in the TFR, we have \(P^{+}_{h}\gg|\vec{P}_{h\perp}|\gg P^{-}_{h}\) and \(\xi_{h}\approx P^{+}_{h}/P^{+}\), which specifies the longitudinal momentum fraction of the nucleon taken by the final state hadron \(h\). The polarization vector of the nucleon with mass \(M\) can be decomposed by \[S^{\mu}=S_{L}\frac{P^{+}}{M}\bar{n}^{\mu}+S^{\mu}_{\perp}-S_{L}\frac{M}{2P^{+} }n^{\mu}, \tag{8}\] where \(S_{L}\) is the longitudinal polarization of the nucleon and \(S^{\mu}_{\perp}=(0,0,\vec{S}_{\perp})\) the transverse polarization vector. The incoming- and outgoing electron span the lepton plane. We define the azimuthal angle \(\phi_{h}\) for \(\vec{P}_{h\perp}\) with respect to the lepton plane, and \(\phi_{S}\) is that for \(\vec{S}_{\perp}\). The azimuthal angle of the outgoing lepton around the lepton beam with respect to the spin vector is denoted by \(\psi\). In the kinematic region of SIDIS with large \(Q^{2}\), one has \(\psi\approx\phi_{S}\)[67]. With these specifications, the differential cross-section is given by \[\frac{d\sigma}{dx_{B}dyd\xi_{h}d\psi d^{2}P_{h\perp}}=\frac{\alpha^{2}y}{4\xi _{h}Q^{4}}L_{\mu\nu}(l,\lambda_{e},l^{\prime})W^{\mu\nu}(q,P,S,P_{h}), \tag{9}\] where \(\alpha\) is the fine structure constant. The leptonic tensor is \[L^{\mu\nu}(l,\lambda_{e},l^{\prime})=2(l^{\mu}l^{\prime\nu}+l^{\nu}l^{\prime \mu}-l\cdot l^{\prime}g^{\mu\nu})+2i\lambda_{e}\epsilon^{\mu\nu\rho\sigma}l_{ \rho}l^{\prime}_{\sigma}. \tag{10}\] The hadronic tensor is defined by \[W^{\mu\nu}(q,P,S,P_{h})=\sum_{X}\int\frac{d^{4}x}{(2\pi)^{4}}e^{iq\cdot x} \langle S;h_{A}|J^{\mu}(x)|hX\rangle\langle Xh|J^{\nu}(0)|h_{A};S\rangle, \tag{11}\] where \(J^{\mu}(x)=e_{q}\bar{\psi}(x)\gamma^{\mu}\psi(x)\) is the electromagnetic current. A summation over quark favors is implicit in Eq. (11). In general, the hadronic tensor can be decomposed into a sum of basic Lorentz tensors constructed by the kinematic variables of the process. After contracting with the leptonic tensor, one can get the differential cross section in terms of the structure functions. It has been shown that the differential cross section of SIDIS at small transverse momentum with the polarized lepton beam and nucleon target is described by eighteen structure functions [68]. 
We have the same number of structure functions for SIDIS in the TFR, and the general form of the differential cross section can be expressed as \[\frac{d\sigma}{dx_{B}dyd\xi_{h}d\psi d^{2}P_{h\perp}}=\frac{\alpha ^{2}}{x_{B}yQ^{2}}\Big{\{}A(y)F_{UU,T}+E(y)F_{UU,L}^{\rm{cos}\,\phi_{h}}\cos \phi_{h}+E(y)F_{UU}^{\rm{cos}\,2\phi_{h}}\cos 2\phi_{h}\] \[+\lambda_{e}D(y)F_{LU}^{\rm{sin}\,\phi_{h}}\sin\phi_{h}+S_{L} \Big{[}B(y)F_{UL}^{\rm{sin}\,\phi_{h}}\sin\phi_{h}+E(y)F_{UL}^{\rm{sin}\,2 \phi_{h}}\sin 2\phi_{h}\Big{]}+\lambda_{e}S_{L}\Big{[}C(y)F_{LL}+D(y)F_{LL}^{\rm{cos} \,\phi_{h}}\cos\phi_{h}\Big{]}\] Figure 1: The kinematics for SIDIS in the TFR \[+|\vec{S}_{\perp}|\Big{[}\big{(}A(y)F^{\sin(\phi_{h}-\phi_{S})}_{UT,T} +E(y)F^{\sin(\phi_{h}-\phi_{S})}_{UT,L}\big{)}\sin(\phi_{h}-\phi_{S})+E(y)F^{ \sin(\phi_{h}+\phi_{S})}_{UT}\sin(\phi_{h}+\phi_{S})\] \[\qquad+B(y)F^{\sin\phi_{S}}_{UT}\sin\phi_{S}+B(y)F^{\sin(2\phi_{h }-\phi_{S})}_{UT}\sin(2\phi_{h}-\phi_{S})+E(y)F^{\sin(3\phi_{h}-\phi_{S})}_{UT} \sin(3\phi_{h}-\phi_{S})\Big{]}\] \[+\lambda_{e}|\vec{S}_{\perp}|\Big{[}D(y)F^{\cos\phi_{S}}_{LT}\cos \phi_{S}+C(y)F^{\cos(\phi_{h}-\phi_{S})}_{LT}\cos(\phi_{h}-\phi_{S})+D(y)F^{ \cos(2\phi_{h}-\phi_{S})}_{LT}\cos(2\phi_{h}-\phi_{S})\Big{]}\Big{\}}. \tag{12}\] Here we have defined several functions of \(y\) for convenience, i.e., \[A(y) = y^{2}-2y+2,\] \[B(y) = 2(2-y)\sqrt{1-y},\] \[C(y) = y(2-y),\] \[D(y) = 2y\sqrt{1-y},\] \[E(y) = 2(1-y). \tag{13}\] All the structure functions in Eq. (12) are scalar functions depending on \(x_{B}\), \(\xi_{h}\), \(Q^{2}\) and \(\vec{P}^{2}_{h\perp}\). The first and second subscripts of the structure functions denote the polarization of the electron and the nucleon, respectively. The third subscript, if any, specifies the polarization of the virtual photon. Note that the normalization of the structure functions adopted here is different from that in [68] by a Jacobian since we have used \(\xi_{h}\) instead of \(z_{h}\). ## III The hadronic tensor results up to twist-3 ### Collinear expansion for the hadronic tensor Now we perform the collinear expansion for the hadronic tensor in Eq. (11) up to twist-3. At the tree level of QCD perturbation theory, the hadronic tensor in the TFR can be represented by the diagrams in Fig. 2. The gray boxes represent the parton correlation matrices with a hadron \(h\) identified in the final state, which we call fracture matrices in the following. The contributions for each diagram in Fig. 
2 are \[W^{\mu\nu}\Big{|}_{2a}= \int\frac{d^{3}k}{(2\pi)^{3}}\left[\big{(}\gamma^{\mu}(\not{k}+ \not{q})\gamma^{\nu}\big{)}_{ij}\,2\pi\delta\big{(}(k+q)^{2}\big{)}\right] \sum_{X}\int\frac{d^{3}\eta}{(2\pi)^{4}}e^{-ik\cdot\eta}\langle h_{A}|\bar{ \psi}_{i}(\eta)|hX\rangle\langle Xh|\psi_{j}(0)|h_{A}\rangle, \tag{14}\] \[W^{\mu\nu}\Big{|}_{2b}= \int\frac{d^{3}k_{1}d^{3}k_{2}}{(2\pi)^{6}}\left[\bigg{(}\gamma^ {\mu}(\not{k}_{1}+\not{q})\gamma_{\alpha}\frac{i(\not{k}_{2}+\not{q})}{(k_{2}+ q)^{2}+i\epsilon}\gamma^{\nu}\bigg{)}_{ij}\,2\pi\delta\big{(}(k_{1}+q)^{2} \big{)}\right]\] \[\times(-ig_{s})\sum_{X}\int\frac{d^{3}\eta d^{3}\eta_{1}}{(2\pi)^{ 4}}e^{-ik_{1}\cdot\eta}e^{i(k_{1}-k_{2})\cdot\eta_{1}}\langle h_{A}|\bar{ \psi}_{i}(\eta)|hX\rangle\langle Xh|G^{\alpha}(\eta_{1})\psi_{j}(0)|h_{A}\rangle,\] (15) \[W^{\mu\nu}\Big{|}_{2c}= \int\frac{d^{3}k_{1}d^{3}k_{2}}{(2\pi)^{6}}\left[\bigg{(}\gamma^ {\mu}\frac{i(\not{k}_{1}+\not{q})}{(k_{1}+q)^{2}-i\epsilon}\gamma_{\alpha}( \not{k}_{2}+\not{q})\gamma^{\nu}\bigg{)}_{ij}\,2\pi\delta\big{(}(k_{2}+q)^{2} \big{)}\right]\] \[\times(-ig_{s})\sum_{X}\int\frac{d^{3}\eta d^{3}\eta_{1}}{(2\pi)^{ 4}}\,e^{-ik_{1}\cdot\eta}e^{i(k_{1}-k_{2})\cdot\eta_{1}}\langle h_{A}|\bar{ \psi}_{i}(\eta)G^{\alpha}(\eta_{1})|hX\rangle\langle Xh|\psi_{j}(0)|h_{A}\rangle, \tag{16}\] Figure 2: Diagrams for the hadronic tensor in TFR at tree level. where \(ij\) are the Dirac and color indices. The summation over quark flavors \(\sum_{q}e_{q}^{2}\) is implied in the expressions. The integration variables take the following forms: \[k^{\mu}=(k^{+},0,\vec{k}_{\perp}), k^{\mu}_{2}=(k^{+}_{1},0,\vec{k}_{1\perp}), k^{\mu}_{2}=(k^{+}_{2},0,\vec{k}_{2\perp}), \tag{17}\] \[\eta^{\mu}=(0,\eta^{-},\vec{\eta}_{\perp}), \eta^{\mu}_{1}=(0,\eta^{-}_{1},\vec{\eta}_{1\perp}), \eta^{\mu}_{2}=(0,\eta^{-}_{2},\vec{\eta}_{2\perp}). \tag{18}\] \(k\) is the momentum carried by the quark line leaving the box of Fig. 2(a). \(k_{1}\), \(k_{2}\) are the momenta carried by the quark lines flowing into and out of the boxes of Figs. 2(b) or 2(c). These momenta follow the collinear scaling, e.g., \(k^{\mu}\sim Q(1,\lambda^{2},\lambda)\) with \(\lambda=\Lambda_{\rm QCD}/Q\). To obtain the contributions up to twist-3, one has to expand the contributions in Figs. 2(a)-2(c) in powers of \(\lambda\) up to \({\cal O}(\lambda)\). Here we have already neglected the minus components of \(k\), \(k_{1}\) and \(k_{2}\) in \([\cdots]\) of Eqs. (14)-(16), since these components only yield the corrections beyond twist-3. For \(W^{\mu\nu}|_{2a}\), if we further neglect the quark transverse momentum and take \(k^{\mu}\approx(k^{+},0,\vec{0}_{\perp})\) in \([\cdots]\) of Eq. (14), one can obtain the contribution with the collinear fracture matrix involving a non-local operator of quark and anti-quark fields. To obtain a gauge-invariant form, we should sum over the contributions from the \(G^{+}\)-gluon exchange in Fig. 2(b) and Fig. 2(c) as well as those with the exchange of any number of \(G^{+}\)-gluons. Here the gluon field \(G^{\mu}\) scales like \((1,\lambda^{2},\lambda)\), and hence the \(G^{+}\)-gluon does not induce any power suppression. 
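To make the leading-power kinematics explicit (a short step spelled out here for convenience), note that with \(k^{\mu}\approx(k^{+},0,\vec{0}_{\perp})\) and \(q^{\mu}\) as given in Eq. (7), the argument of the on-shell delta function in Eq. (14) reduces to \[(k+q)^{2}\simeq 2\big(k^{+}+q^{+}\big)q^{-}=2q^{-}\big(k^{+}-x_{B}P^{+}\big),\qquad\text{so}\qquad\delta\big((k+q)^{2}\big)\simeq\frac{1}{2q^{-}}\,\delta\big(k^{+}-x_{B}P^{+}\big),\] i.e., at tree level the struck quark carries the light-cone momentum fraction \(x=x_{B}\); this is the origin of the phase \(e^{-ix_{B}P^{+}\eta^{-}}\) accompanying the quark fields in Eq. (19).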
After this summation, we can obtain the following gauge-invariant contribution \[W^{\mu\nu}\Big{|}_{\rm q}= (\gamma^{\mu}\gamma^{+}\gamma^{\nu})_{ij}\sum_{X}\int\frac{d\eta^ {-}}{2(2\pi)^{4}}e^{-ix_{B}P^{+}\eta^{-}}\langle h_{A}|\bar{\psi}_{i}(\eta^{- }){\cal L}^{\dagger}_{n}(\eta^{-})|hX\rangle\langle Xh|{\cal L}_{n}(0)\psi_{j} (0)|h_{A}\rangle, \tag{19}\] where the gauge link is defined as \[{\cal L}_{n}(x)={\cal P}\exp\biggr{\{}-ig_{s}\int_{0}^{\infty}d \lambda\;G^{+}(\lambda n+x)\biggr{\}}. \tag{20}\] The above contribution yields the gauge-invariant collinear quark fracture matrix. As we will present later in Sec. III.2, by parametrization of this matrix up to \({\cal O}(\lambda)\), one can obtain the hadronic tensor in terms of twist-2 and twist-3 quark collinear fracture functions. To derive the other twist-3 contributions from \(W^{\mu\nu}|_{2a}\), we need to take into account the \(k_{\perp}\)-dependence in \([\cdots]\) of Eq. (14) by the collinear expansion to \({\cal O}(\lambda)\). We notice that there is no contribution from the partial derivatives acting on the delta function in the expansion since \(\partial\delta\big{(}(\hat{k}+q)^{2}\big{)}/\partial k_{\perp}^{\alpha}\propto q _{\perp\alpha}=0\). After this expansion, we get the contribution from the fracture matrix with the transverse partial derivative acting on the (anti-)quark fields. Again, after the combination with the relevant gauge-link contributions from \(W^{\mu\nu}|_{2b+2c}\), we obtain, up to \({\cal O}(g_{s})\), \[W^{\mu\nu}\Big{|}_{\partial}= \frac{-i}{2q^{-}}(\gamma^{\mu}\gamma^{+}\gamma_{\perp\alpha} \gamma^{-}\gamma^{\nu})_{ij}\sum_{X}\int\frac{d\eta^{-}}{2(2\pi)^{4}}e^{-ix_{ B}P^{+}\eta^{-}}\langle h_{A}|\bar{\psi}_{i}(\eta^{-}){\cal L}^{\dagger}_{n}( \eta^{-})|hX\rangle\] \[\times\langle Xh|\partial_{\perp}^{\alpha}({\cal L}_{n}\psi_{j} )(0)|h_{A}\rangle+(\mu\leftrightarrow\nu)^{*}\, \tag{21}\] where \((\mu\leftrightarrow\nu)^{*}\) stands for exchanging \(\mu\nu\) indices and taking complex conjugate of the first term. Due to the presence of the transverse derivative, the leading contribution of the fracture matrix in Eq. (21) is at twist-3. In addition, after subtracting the gauge-link contributions to Eqs. (19) and (21) from the collinear expansion of \(W^{\mu\nu}|_{2b+2c}\), we find the remaining part can be expressed by the fracture matrix with the gluon field strength tensor \(g_{s}F^{+\alpha}=g_{s}[\partial^{+}G_{\perp}^{\alpha}-\partial_{\perp}^{ \alpha}G^{+}]+{\cal O}(g_{s}^{2})\). This gives another contribution that starts from twist-3, which up to \({\cal O}(g_{s})\) can be summarized as \[W^{\mu\nu}\Big{|}_{F}= \frac{-i}{2q^{-}}(\gamma^{\mu}\gamma^{+}\gamma_{\perp\alpha} \gamma^{-}\gamma^{\nu})_{ij}\int dx_{2}\Bigl{[}{\rm P}\frac{1}{x_{2}-x_{B}}-i \pi\delta(x_{2}-x_{B})\Bigr{]}\] \[\times\sum_{X}\int\frac{d\eta^{-}d\eta^{-}_{1}}{4\pi(2\pi)^{4}}e ^{-ix_{B}P^{+}\eta^{-}-i(x_{2}-x_{B})P^{+}\eta^{-}_{1}}\langle h_{A}|\bar{\psi }_{i}(\eta^{-})|hX\rangle\langle Xh|g_{s}F^{+\alpha}(\eta^{-}_{1})\psi_{j}(0)|h _{A}\rangle+(\mu\leftrightarrow\nu)^{*}. \tag{22}\] Here \({\rm P}\) in \([\cdots]\) of Eq. (22) stands for the principle-value prescription. The \(\delta\)-function term in Eq. (22) comes from the absorptive part of the quark propagator that connects the electromagnetic current to the quark-gluon vertex in Figs. 2(b) and 2(c). In this term, the gluon has zero momentum and generates the so-called soft-gluon-pole contributions, see e.g., [87] and references therein. 
The total contribution of the hadronic tensor is given by the sum of the results in Eqs. (19), (21) and (22). The following gauge-invariant collinear fracture matrices are relevant: \[{\cal M}_{ij}(x)=\int\frac{d\eta^{-}}{2\xi_{h}(2\pi)^{4}}e^{-ixP^{+} \eta^{-}}\sum_{X}\langle h_{A}|\bar{\psi}_{j}(\eta^{-}){\cal L}^{\dagger}_{n}( \eta^{-})|hX\rangle\langle Xh|{\cal L}_{n}(0)\psi_{i}(0)|h_{A}\rangle, \tag{23}\] \[\mathcal{M}^{\alpha}_{\partial,ij}(x) =\frac{(\gamma^{-})_{ij}}{2N_{c}}i\Big{(}-P^{\alpha}_{h\perp}u^{h}_{ \partial}+M\tilde{S}^{\alpha}_{\perp}u_{\partial T}+S_{L}\tilde{P}^{\alpha}_{h \perp}u^{h}_{\partial L}+\frac{P^{(\alpha}_{h\perp}P^{\beta)}_{h\perp}}{M} \tilde{S}_{\perp\beta}u^{h}_{\partial T}\Big{)}\] \[+\frac{(\gamma^{-}\gamma_{5})_{ij}}{2N_{c}}i\Big{(}\tilde{P}^{ \alpha}_{h\perp}l^{h}_{\partial}+MS^{\alpha}_{\perp}l_{\partial T}+S_{L}P^{ \alpha}_{h\perp}l^{h}_{\partial L}-\frac{P^{(\alpha}_{h\perp}P^{\beta)}_{h \perp}}{M}S_{\perp\beta}l^{h}_{\partial T}\Big{)}+\cdots\,, \tag{28}\] \[\mathcal{M}^{\alpha}_{F,ij}(x_{1},x_{2}) =\frac{(\gamma^{-})_{ij}}{2N_{c}}\Big{(}P^{\alpha}_{h\perp}w^{h} -M\tilde{S}^{\alpha}_{\perp}w_{T}-S_{L}\tilde{P}^{\alpha}_{h\perp}w^{h}_{L}- \frac{P^{(\alpha}_{h\perp}P^{\beta)}_{h\perp}}{M}\tilde{S}_{\perp\beta}w^{h}_ {T}\Big{)}\] \[-\frac{(\gamma^{-}\gamma_{5})_{ij}}{2N_{c}}i\Big{(}\tilde{P}^{ \alpha}_{h\perp}v^{h}+MS^{\alpha}_{\perp}v_{T}+S_{L}P^{\alpha}_{h\perp}v^{h}_ {L}-\frac{P^{(\alpha}_{h\perp}P^{\beta)}_{h\perp}}{M}S_{\perp\beta}v^{h}_{T} \Big{)}+\cdots\,, \tag{29}\] where \(\cdots\) denote the contributions beyond twist-3 or the chirality-odd parts. In the above, we have used the shorthand notations \(P^{(\alpha}_{h\perp}P^{\beta)}_{h\perp}\equiv P^{\alpha}_{h\perp}P^{\beta}_{h \perp}+g^{\alpha\beta}_{\perp}\tilde{P}^{2}_{h\perp}/2\) for simplicity. As pointed out in [57], the fracture matrix is not constrained by time reversal invariance, as it identifies a hadron in the out state. Additionally, we note that the collinear fracture matrices formally share similar parametrization forms with those of the conventional TMD PDFs (see e.g., [88]). The functions \(u\)'s and \(l\)'s in Eqs. (27) and (28) are quark collinear fracture functions, they are functions of \(x\), \(\xi_{h}\) and \(\vec{P}^{2}_{h\perp}\). \(w\)'s and \(v\)'s in Eq. (29) are quark-gluon collinear fracture functions, they depend on \(x_{1}\) and \(x_{2}\) besides of \(\xi_{h}\) and \(\vec{P}^{2}_{h\perp}\). We have suppressed all these arguments for simplicity. From hermiticity, the fracture functions defined in Eqs. (27) and (28) are real, while those in Eq. (29) are complex in general. The naming rules for these fracture functions we have used are as follows: Four fracture functions in Eq. (27) with "1" in the subscript are of twist-2. The remaining is of twist-3. The "\(\partial\)" in the subscript denote that the fracture functions are defined via the fracture matrix with the partial derivative operator. The "\(L\)" or "\(T\)" in the subscript denotes the dependence on the longitudinal or transverse polarization of the nucleon. The superscript "\(h\)" denotes the explicit dependence on the transverse momentum of the final state hadron \(h\) in the decomposition of the matrix elements. We note that the TMD quark fracture functions at twist-2 have been classified for a polarized nucleon target in [57; 61]. After integrating over the transverse momentum of the parton, they are equivalent to the twist-2 collinear quark fracture functions defined in Eq. (27). 
We further note that the twist-3 fracture functions defined above are not independent of each other. From the QCD equation of motion \(i\gamma\cdot D\psi=0\), one can show that their relations can be written in a unified form as follows: \[x[u_{S}^{K}(x)+il_{S}^{K}(x)]=u_{\partial S}^{K}(x)+il_{\partial S}^{K}(x)+i\int dy \Big{[}\text{P}\frac{1}{y-x}-i\pi\delta(y-x)\Big{]}[w_{S}^{K}(x,y)-v_{S}^{K}(x,y )], \tag{30}\] where \((S,\ K)=(\text{null},\ h)\), \((L,\ h)\), \((T,\ \text{null})\) or \((T,\ h)\). I.e., we have four sets of relations in the unified form of Eq. (30). With these relations, we find that the hadronic tensor in Eq. (26) can be expressed only with the fracture functions defined via \(\mathcal{M}_{ij}\) in Eq. (27). We obtain \[W^{\mu\nu} =-2g_{\perp}^{\mu\nu}\Big{(}u_{1}-\frac{P_{h\perp}\cdot\tilde{S}_ {\perp}}{M}u_{1T}^{h}\Big{)}+2i\varepsilon_{\perp}^{\mu\nu}\Big{(}S_{L}l_{1L} -\frac{P_{h\perp}\cdot S_{\perp}}{M}l_{1T}^{h}\Big{)}\] \[+\frac{2}{P\cdot q}P_{h\perp}^{\{\mu}\bar{q}^{\nu\}}\Big{(}u^{h} -\frac{P_{h\perp}\cdot\tilde{S}_{\perp}}{M}u_{T}^{h}\Big{)}+\frac{2i}{P\cdot q }P_{h\perp}^{[\mu}\bar{q}^{\nu]}\Big{(}h^{-}-\frac{P_{h\perp}\cdot\tilde{S}_{ \perp}}{M}l_{T}^{h}\Big{)}-\frac{2M}{P\cdot q}\tilde{S}_{\perp}^{\{\mu}\bar{q} ^{\nu\}}\Big{(}u_{T}-\frac{\vec{P}_{h\perp}^{2}}{2M^{2}}u_{T}^{h}\Big{)}\] \[-\frac{2iM}{P\cdot q}\tilde{S}_{\perp}^{[\mu}\bar{q}^{\nu]}\Big{(} l_{T}-\frac{\vec{P}_{h\perp}^{2}}{2M^{2}}l_{T}^{h}\Big{)}-\frac{2S_{L}}{P \cdot q}\tilde{P}_{h\perp}^{\{\mu}\bar{q}^{\nu\}}u_{L}^{h}-\frac{2iS_{L}}{P \cdot q}\tilde{P}_{h\perp}^{[\mu}\bar{q}^{\nu]}l_{L}^{h}, \tag{31}\] where \(A^{\{\mu}B^{\nu\}}\equiv A^{\mu}B^{\nu}+A^{\nu}B^{\mu}\) and \(A^{[\mu}B^{\nu]}\equiv A^{\mu}B^{\nu}-A^{\nu}B^{\mu}\). We have also used the shorthand notation \(\bar{q}^{\mu}\equiv q^{\mu}+2x_{B}P^{+}\bar{\mu}^{\mu}\). The first line in Eq. (31) is of twist-2 contributions, and the remains are of twist-3 contributions. Because \(q^{\mu}\) has only longitudinal components and also \(q\cdot\bar{q}=0\), we see explicitly that the hadronic tensor of Eq. (31) satisfies the \(U(1)\)-gauge invariance or the current conservation, i.e., \(q_{\mu}W^{\mu\nu}=q_{\nu}W^{\mu\nu}=0\). ## IV The results of structure functions and azimuthal or spin asymmetries ### The results of structure functions Substituting the hadronic tensor result of Eq. (31) into Eq. (9), we obtain the differential cross section. Comparing with the cross section expressed by structure functions in Eq. (12), we obtain the results of structure functions in terms of the collinear fracture functions. Four structure functions are at twist-2, which are expressed in terms of the four twist-2 fracture functions, i.e., \[F_{UU,T}=x_{B}u_{1},\qquad F_{UT,T}^{\sin(\phi_{h}-\phi_{S})}= \frac{|\vec{P}_{h\perp}|}{M}x_{B}u_{1T}^{h}, \tag{32}\] \[F_{LL}=x_{B}l_{1L},\qquad F_{LT}^{\cos(\phi_{h}-\phi_{S})}=\frac{ |\vec{P}_{h\perp}|}{M}x_{B}l_{1T}^{h}. \tag{33}\] The summation over quark flavors, i.e., \(\sum_{q}e_{q}^{2}\cdots\), is implicit on the right-hand side of the equations. This twist-2 result has been obtained in [61]. There are eight structure functions that have contributions starting from twist-3. 
They are expressed with eight different twist-3 fracture functions, i.e., \[F_{UU}^{\cos\phi_{h}} =-\frac{2|\vec{P}_{h\perp}|}{Q}x_{B}^{2}u^{h},\qquad F_{LU}^{\sin \phi_{h}} =\frac{2|\vec{P}_{h\perp}|}{Q}x_{B}^{2}l^{h}, \tag{34}\] \[F_{UL}^{\sin\phi_{h}} =-\frac{2|\vec{P}_{h\perp}|}{Q}x_{B}^{2}u_{L}^{h},\qquad F_{LL}^{ \cos\phi_{h}} =-\frac{2|\vec{P}_{h\perp}|}{Q}x_{B}^{2}l_{L}^{h},\] (35) \[F_{UT}^{\sin\phi_{S}} =-\frac{2M}{Q}x_{B}^{2}u_{T},\qquad\qquad F_{LT}^{\cos\phi_{S}}=- \frac{2M}{Q}x_{B}^{2}l_{T},\] (36) \[F_{UT}^{\sin(2\phi_{h}-\phi_{S})} =-\frac{\vec{P}_{h\perp}^{2}}{QM}x_{B}^{2}u_{T}^{h},\quad F_{LT}^{ \cos(2\phi_{h}-\phi_{S})}=-\frac{\vec{P}_{h\perp}^{2}}{QM}x_{B}^{2}l_{T}^{h}. \tag{37}\] The remaining six structure functions are all zero up to twist-3. We see that half of the eight twist-3 structure functions are related to the transverse polarization-dependent fracture functions. ### Azimuthal or spin asymmetries In addition to structure functions, one can also construct various azimuthal or spin asymmetries by \[\langle\mathcal{F}\rangle_{\mathcal{P}_{x}\mathcal{P}_{N}}\equiv\int\frac{d \sigma}{dxdyd\xi_{h}d\psi d^{2}P_{h\perp}}\mathcal{F}d\phi_{h}d\psi\bigg{/} \int\frac{d\sigma}{dxdyd\xi_{h}d\psi d^{2}P_{h\perp}}d\phi_{h}d\psi, \tag{38}\] where the subscripts \(\mathcal{P}_{e}=U\) or \(L\) and \(\mathcal{P}_{N}=U\), \(L\) or \(T\) denote the polarization states of the electron and the nucleon target. From our results of the structure functions, we see clearly that there are two spin-dependent azimuthal asymmetries at twist-2. They both depend on the nucleon transverse polarization and are given by \[\langle\sin(\phi_{h}-\phi_{S})\rangle_{UT} =\frac{|\vec{P}_{h\perp}|}{2M}\frac{u_{1T}^{h}(x_{B},\xi_{h},P_{h \perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})}, \tag{39}\] \[\langle\cos(\phi_{h}-\phi_{S})\rangle_{LT} =\frac{|\vec{P}_{h\perp}|C(y)}{2MA(y)}\frac{h_{1T}^{h}(x_{B},\xi_ {h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})}. \tag{40}\] Here and in the below, a summation over quark flavors, i.e., \(\sum_{q}e_{q}^{2}\cdots\), is implicit both in the numerators and the denominators. We note that the asymmetry \(\langle\sin(\phi_{h}-\phi_{S})\rangle_{UT}\) is of Sivers-type [89] and it does not depend on \(y\) because of the cancellation of the common \(A(y)\) factors associated with \(F_{UT,T}^{\sin(\phi_{h}-\phi_{S})}\) and \(F_{UU,T}\) in the cross-section. We have in particular eight azimuthal or spin asymmetries at twist-3 associated with the eight twist-3 structure functions in Eqs. 
(34)-(37), i.e., \[\langle\cos\phi_{h}\rangle_{UU} =-\frac{|\vec{P}_{h\perp}|}{Q}\frac{B(y)}{A(y)}\frac{x_{B}u^{h}(x _{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})}, \tag{41}\] \[\langle\sin\phi_{h}\rangle_{LU} =\frac{|\vec{P}_{h\perp}|}{Q}\frac{D(y)}{A(y)}\frac{x_{B}l^{h}(x _{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (42) \[\langle\sin\phi_{h}\rangle_{UL} =-\frac{|\vec{P}_{h\perp}|}{Q}\frac{B(y)}{A(y)}\frac{x_{B}u_{L}^{ h}(x_{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (43) \[\langle\cos\phi_{h}\rangle_{LL} =-\frac{|\vec{P}_{h\perp}|}{Q}\frac{D(y)}{A(y)}\frac{x_{B}l^{h}_{ L}(x_{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (44) \[\langle\sin\phi_{S}\rangle_{UT} =-\frac{M}{Q}\frac{B(y)}{A(y)}\frac{x_{B}u_{T}(x_{B},\xi_{h},P_{h \perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (45) \[\langle\cos\phi_{S}\rangle_{LT} =-\frac{M}{Q}\frac{D(y)}{A(y)}\frac{x_{B}l_{T}(x_{B},\xi_{h},P_{h \perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (46) \[\langle\sin(2\phi_{h}-\phi_{S})\rangle_{UT} =-\frac{\vec{P}_{h\perp}^{2}}{2MQ}\frac{B(y)}{A(y)}\frac{x_{B}u_{T }^{h}(x_{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})},\] (47) \[\langle\cos(2\phi_{h}-\phi_{S})\rangle_{LT} =-\frac{\vec{P}_{h\perp}^{2}}{2MQ}\frac{D(y)}{A(y)}\frac{x_{B}l^{ h}_{T}(x_{B},\xi_{h},P_{h\perp})}{u_{1}(x_{B},\xi_{h},P_{h\perp})}. \tag{48}\] One can see that at the order we are considering, each azimuthal or spin asymmetry in the TFR is only generated by a specific fracture function. This suggests that interpretations of these functions from experimental data may be simpler and more straightforward compared to the CFR at small \(P_{h\perp}\), where multiple TMD PDFs and FFs are typically involved and intertwined in the asymmetry [68]. Some of the twist-3 asymmetries, such as \(\langle\sin\phi_{h}\rangle_{UL}\) and \(\langle\sin\phi_{h}\rangle_{LU}\), have already been measured in the TFR by CLAS12 at JLab [90]. Of particular interest is the beam-spin asymmetry \(\langle\sin\phi_{h}\rangle_{LU}\) in Eq. (42), which is related to a twist-3 longitudinal quark fracture function \(l^{h}\). A preliminary analysis shows that it undergoes a clear sign flip from the TFR to the CFR and could serve as an efficient tool to understand the transition between the production mechanisms (Sec. 5.3 in [71]). Further experimental measurements will provide us with more information about the relevant fracture functions, especially the twist-3 ones. ## V Summary In summary, we have derived the hadronic tensor up to twist-3 level for SIDIS with hadron production in the target fragmentation region. The hadronic tensor at the considered order is shown to be expressed by gauge-invariant fracture functions defined with two-parton correlations. Based on the obtained hadronic tensor, the results for structure functions are derived for both polarized lepton beam and polarized nucleon target. At the tree level, there are four structure functions at twist-2 and eight structure functions at twist-3. Azimuthal or spin asymmetries are given based on the results of the structure functions. These observables are all expressed using twist-2 or twist-3 collinear fracture functions. Possible connections to experimental measurements are discussed. Future SIDIS experiments measuring these azimuthal or spin asymmetries will provide opportunities to extract the corresponding fracture functions. ## Acknowledgements We would like to thank Harut Avakian for bringing our attention to this project and insightful discussions. 
We also thank Timothy Hayward for useful discussions. The work is supported by National Natural Science Foundation of P.R. China (Nos. 12075299, 11821505, 11847612 and 11935017) and by the Strategic Priority Research Program of Chinese Academy of Sciences, Grant No. XDB34000000. K.B. Chen is supported by National Natural Science Foundation of China (Nos. 12005122, 11947055) and Shandong Province Natural Science Foundation (No. ZR2020QA082). X.B. Tong is supported by the CUHK-Shenzhen university development fund (No. UDF01001859) and the China Postdoctoral Science Foundation (No. 2022M723065).
2307.10912
WeakPolyp: You Only Look Bounding Box for Polyp Segmentation
Limited by expensive pixel-level labels, polyp segmentation models are plagued by data shortage and suffer from impaired generalization. In contrast, polyp bounding box annotations are much cheaper and more accessible. Thus, to reduce labeling cost, we propose to learn a weakly supervised polyp segmentation model (i.e., WeakPolyp) completely based on bounding box annotations. However, coarse bounding boxes contain too much noise. To avoid interference, we introduce the mask-to-box (M2B) transformation. By supervising the outer box mask of the prediction instead of the prediction itself, M2B greatly mitigates the mismatch between the coarse label and the precise prediction. But, M2B only provides sparse supervision, leading to non-unique predictions. Therefore, we further propose a scale consistency (SC) loss for dense supervision. By explicitly aligning predictions across the same image at different scales, the SC loss largely reduces the variation of predictions. Note that our WeakPolyp is a plug-and-play model, which can be easily ported to other appealing backbones. Besides, the proposed modules are only used during training, bringing no computation cost to inference. Extensive experiments demonstrate the effectiveness of our proposed WeakPolyp, which surprisingly achieves a comparable performance with a fully supervised model, requiring no mask annotations at all.
Jun Wei, Yiwen Hu, Shuguang Cui, S. Kevin Zhou, Zhen Li
2023-07-20T14:34:08Z
http://arxiv.org/abs/2307.10912v1
# WeakPolyp: You Only Look Bounding Box for Polyp Segmentation ###### Abstract Limited by expensive pixel-level labels, polyp segmentation models are plagued by data shortage and suffer from impaired generalization. In contrast, polyp bounding box annotations are much cheaper and more accessible. Thus, to reduce labeling cost, we propose to learn a weakly supervised polyp segmentation model (_i.e._,WeakPolyp) completely based on bounding box annotations. However, coarse bounding boxes contain too much noise. To avoid interference, we introduce the mask-to-box (M2B) transformation. By supervising the outer box mask of the prediction instead of the prediction itself, M2B greatly mitigates the mismatch between the coarse label and the precise prediction. But, M2B only provides sparse supervision, leading to non-unique predictions. Therefore, we further propose a scale consistency (SC) loss for dense supervision. By explicitly aligning predictions across the same image at different scales, the SC loss largely reduces the variation of predictions. Note that our WeakPolyp is a plug-and-play model, which can be easily ported to other appealing backbones. Besides, the proposed modules are only used during training, bringing no computation cost to inference. Extensive experiments demonstrate the effectiveness of our proposed WeakPolyp, which surprisingly achieves a comparable performance with a fully supervised model, requiring no mask annotations at all. Codes are available at [https://github.com/weijun88/WeakPolyp](https://github.com/weijun88/WeakPolyp). Keywords:Poly segmentation Weak Supervision Colorectal cancer ## 1 Introduction Colorectal Cancer (CRC) has become a major threat to health worldwide. Since most CRCs originate from colorectal polyps, early screening for polyps is necessary. Given its significance, automatic polyp segmentation models [5, 8, 16, 18] have been designed to aid in screening. For example, ACSNet [21], HRENet [14], LDNet [20] and CCBANet [11] propose to use convolutional neural networks to extract multi-scale contexts for robust predictions. LODNet [2], PraNet [5], and MSNet [23] aim to improve the model's discrimination of polyp boundaries. SANet [19] eliminates the distribution gap between the training set and the testing set, thus improving the model generalization. Recently, TGANet [15] introduces text embeddings to enhance the model's discrimination. Furthermore, Transfuse [22], PPFormer [1], and Polyp-Pvt [3] introduce the Transformer [4] backbone to extract global contexts, achieving a significant performance gain. All above models are fully supervised and require pixel-level annotations. However, pixel-by-pixel labeling is time-consuming and expensive, which hampers practical clinical usage. Besides, many polyps do not have well-defined boundaries. Pixel-level labeling inevitably introduces subjective noise. To address the above limitations, a generalized polyp segmentation model is urgently needed. In this paper, we achieve this goal by a weakly supervised polyp segmentation model (named **WeakPolyp**) that only uses coarse bounding box annotations. Fig. 1(a) shows the differences between our WeakPolyp and fully supervised models. Compared with fully supervised ones, WeakPolyp requires only a bounding box for each polyp, thus dramatically reducing the labeling cost. More meaningfully, WeakPolyp can take existing large-scale polyp detection datasets to assist the polyp segmentation task. 
Finally, WeakPolyp does not require the labeling for polyp boundaries, avoiding the subjective noise at source. All these advantages make WeakPolyp more clinically practical. However, bounding box annotations are much coarser than pixel-level ones, which can not describe the shape of polyps. Simply adopting these box annotations as supervision introduces too much background noise, thereby leading to suboptimal models. As a solution, BoxPolyp [18] only supervises the pixels with high certainty. However, it requires a fully supervised model to predict the uncertainty map. Unlike BoxPolyp, our WeakPolyp completely follows the weakly supervised form that requires no additional models or annotations. Surprisingly, just by redesigning the supervision loss without any changes to the model structure, WeakPolyp achieves comparable performance to its fully supervised counterpart. Fig. 1(b) visualizes some predicted results by WeakPolyp. Figure 1: (a) Comparison between the fully supervised model and our proposed WeakPolyp using box mask only. (b) Visualization of prediction from WeakPolyp. WeakPolyp is mainly enabled by two novel components: mask-to-box (M2B) transformation and scale consistency (SC) loss. In practice, M2B is applied to transform the predicted mask into a box-like mask by projection and back-projection. Then, this transformed mask is supervised by the bounding box annotation. This indirect supervision avoids the misleading of box-shape bias of annotations. However, many regions in the predicted mask are lost in the projection and therefore get no supervision. To fully explore these regions, we propose the SC loss to provide a pixel-level self-supervision while requiring no annotations at all. Specifically, the SC loss explicitly reduces the distance between predictions of the same image at different scales. By forcing feature alignment, it inhibits the excessive diversity of predictions, thus improving the model generalization. In summary, our contributions are three-fold: (1) We build the WeakPolyp model completely based on bounding box annotations, which largely reduces the labeling cost and achieves a comparable performance to full supervision. (2) We propose the M2B transformation to mitigate the mismatch between the prediction and the supervision, and design the SC loss to improve the robustness of the model against the variability of the predictions. (3) Our proposed WeakPolyp is a plug-and-play option, which can boost the performances of polyp segmentation models under different backbones. ## 2 Method **Model Components.** Fig. 2 depicts the components of WeakPolyp, including the segmentation phase and the supervision phase. For the segmentation phase, we adopt Res2Net [6] as the backbone. For input image \(I\in R^{H\times W}\), Res2Net extracts four scales of features \(\{f_{i}|i=1,...,4\}\) with the resolutions \([\frac{H}{2^{i+1}},\frac{W}{2^{i+1}}]\). Considering the computational cost, only \(f_{2},f_{3}\) and \(f_{4}\) are utilized. To fuse them, we first apply a \(1\times 1\) convolutional layer to unify the channels of \(f_{2},f_{3},f_{4}\) and then use the bilinear upsampling to unify their resolutions. After being transformed to the same size, \(f_{2},f_{3},f_{4}\) are added together and fed into one \(1\times 1\) convolutional layer for final prediction. Instead of the segmentation phase, our contributions primarily lie in the supervision phase, including mask-to-box (M2B) transformation and scale consistency (SC) loss. 
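For completeness, the segmentation phase just described amounts to only a few lines on top of the backbone. The sketch below is ours rather than the released code: the 512/1024/2048 input widths are assumed for a Res2Net-50-style backbone, and the intermediate width of 64 is an arbitrary choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegHead(nn.Module):
    """Fuse f2, f3, f4 as described above: 1x1 convs unify channels,
    bilinear upsampling unifies resolutions, and the sum is mapped to
    a single logit map by a final 1x1 conv."""

    def __init__(self, in_channels=(512, 1024, 2048), width=64):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, width, kernel_size=1) for c in in_channels)
        self.predict = nn.Conv2d(width, 1, kernel_size=1)

    def forward(self, f2, f3, f4):
        feats = [conv(f) for conv, f in zip(self.reduce, (f2, f3, f4))]
        size = feats[0].shape[-2:]  # resolution of f2, i.e. H/8 x W/8
        feats = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                 for f in feats]
        return self.predict(sum(feats))  # polyp logits
```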
Notably, both M2B and SC are independent of the specific model structure. **Model Pipeline.** For each input image \(I\), we first resize it into two different scales: \(I_{1}\in R^{s_{1}\times s_{1}}\) and \(I_{2}\in R^{s_{2}\times s_{2}}\). Then, \(I_{1}\) and \(I_{2}\) are sent to the segmentation model and get two predicted masks \(P_{1}\) and \(P_{2}\), both of which have been resized to the same size. Next, an SC loss is proposed to reduce the distance between \(P_{1}\) and \(P_{2}\), which helps suppress the variation of the prediction. Finally, to fit the bounding box annotations (\(B\)), \(P_{1}\) and \(P_{2}\) are sent to M2B and converted into box-like masks \(T_{1}\) and \(T_{2}\). With \(T_{1}/T_{2}\) and \(B\), we calculate the binary cross entropy (BCE) loss and Dice loss, without worrying about noise interference. ### Mask-to-Box (M2B) Transformation One naive method to achieve the weakly supervised polyp segmentation is to use the bounding box annotation \(B\) to supervise the predicted mask \(P_{1}/P_{2}\). Unfortunately, models trained in this way show poor generalization. Because there is a strong box-shape bias in \(B\). Training with this bias, the model is forced to predict the box-shape mask, unable to maintain the polyp's contours. To solve this, we innovatively use \(B\) to supervise the bounding box mask (_i.e.,\(T_{1}/T_{2}\)_) of \(P_{1}/P_{2}\), rather than \(P_{1}/P_{2}\) itself. This indirect supervision separates \(P_{1}/P_{2}\) from \(B\) so that \(P_{1}/P_{2}\) is not affected by the shape bias of \(B\) while obtaining the position and extent of polyps. But how to implement the transformation from \(P_{1}/P_{2}\) to \(T_{1}/T_{2}\)? We design the M2B module, which consists of two steps: projection and back-projection, as shown in Fig. 2. **Projection.** As shown in Eq. 1, given a predicted mask \(P\in[0,1]^{H\times W}\), we project it horizontally and vertically into two vectors \(P_{w}\in[0,1]^{1\times W}\) and \(P_{h}\in[0,1]^{H\times 1}\). In this projection, instead of using mean pooling, we use max pooling to pick the maximum value for each row/column in \(P\). Because max pooling can completely remove the shape information of the polyp. After projection, only the position and scope of the polyp are stored in \(P_{w}\) and \(P_{h}\). \[P_{w}=\max(P,\text{axis}=0)\in[0,1]^{1\times W},\quad P_{h}=\max(P,\text{axis }=1)\in[0,1]^{H\times 1} \tag{1}\] Figure 2: The framework of our proposed WeakPolyp model, which consists of the segmentation phase and the supervision phase. The segmentation phase predicts the polyp mask for each input firstly, and the supervision phase uses the coarse box annotation to guide previous predicted mask. Note that our contributions mainly lie in the supervision phase, where the proposed M2B transformation converts the predicted mask into a box mask to accommodate the bounding box annotation. Besides, another proposed SC loss is introduced to provide dense supervision from multi-scales, which improves the consistency of predictions. **Back-projection.** Based on \(P_{w}\) and \(P_{h}\), we construct the bounding box mask of the polyp by back-projection. As shown in Eq. 2, \(P_{w}\) and \(P_{h}\) are first repeated into \(P_{w}^{{}^{\prime}}\) and \(P_{h}^{{}^{\prime}}\) with the same size as \(P\). Then, we element-wisely take the minimum of \(P_{w}^{{}^{\prime}}\) and \(P_{h}^{{}^{\prime}}\) to achieve the bounding box mask \(T\). As shown in Fig. 2, \(T\) no longer contains the contours of the polyp. 
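Concretely, both the projection and the back-projection (written out formally in Eq. (2) below) reduce to a handful of differentiable tensor operations. The following is a minimal PyTorch sketch; the helper name and the NCHW mask shape are our own illustrative choices, not the authors' released code.

```python
import torch

def mask_to_box(pred: torch.Tensor) -> torch.Tensor:
    """Differentiable mask-to-box (M2B): turn a soft mask of shape
    (B, 1, H, W) with values in [0, 1] into a box-like mask T."""
    # Projection (Eq. 1): max over rows/columns keeps only the position
    # and extent of the polyp and discards its shape.
    p_w = pred.max(dim=2, keepdim=True).values  # (B, 1, 1, W)
    p_h = pred.max(dim=3, keepdim=True).values  # (B, 1, H, 1)
    # Back-projection: broadcasting the two profiles over (H, W) and
    # taking the element-wise minimum yields the bounding-box mask.
    return torch.minimum(p_w, p_h)              # (B, 1, H, W)
```

Supervising `mask_to_box(pred)` against the annotated box mask then realizes the indirect supervision described above, without exposing the prediction itself to the box-shape bias of the label.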
\[\begin{split} P_{w}^{{}^{\prime}}&=\text{repeat}(P_ {w},H,\text{axis}=0)\in[0,1]^{H\times W}\\ P_{h}^{{}^{\prime}}&=\text{repeat}(P_{h},W, \text{axis}=1)\in[0,1]^{H\times W}\\ T&=\text{min}(P_{w}^{{}^{\prime}},P_{h}^{{}^{ \prime}})\in[0,1]^{H\times W}\end{split} \tag{2}\] **Supervision.** By M2B, \(P_{1}\) and \(P_{2}\) are transformed into \(T_{1}\) and \(T_{2}\), respectively. Because both \(T_{1}/T_{2}\) and \(B\) are box-like masks, we directly calculate the supervision loss between them without worrying about the misguidance of box-shape bias. Specifically, we follow [19, 5] to adopt BCE loss \(\mathcal{L}_{BCE}\) and Dice loss \(\mathcal{L}_{Dice}\) for model supervision, as shown in Eq. 3. \[\mathcal{L}_{Sum}=\frac{\mathcal{L}_{BCE}(T_{1},B)+\mathcal{L}_{BCE}(T_{2},B)} {2}+\frac{\mathcal{L}_{Dice}(T_{1},B)+\mathcal{L}_{Dice}(T_{2},B)}{2} \tag{3}\] **Priority.** By simple transformation, M2B turns the noisy supervision into a noise-free one, so that the predicted mask is able to preserve the contours of the polyp. Notably, M2B is differentiable, which can be easily implemented with PyTorch and plugged into the model to participate in gradient backpropagation. ### Scale Consistency (SC) Loss In M2B, most pixels in \(P\) are ignored in the projection, thus only a few pixels with high response values are involved in the supervision loss. This sparse supervision may lead to non-unique predictions. As shown in Fig. 3, after M2B projection, five predicted masks with different response values can be transformed into the same bounding box mask. Therefore, we consider introducing the SC loss to achieve dense supervision without annotations, which reduces the degree of freedom of predictions. **Method.** As shown in Fig. 2, due to the non-uniqueness of the prediction and the scale difference between \(I_{1}\) and \(I_{2}\), \(P_{1}\) and \(P_{2}\) differ in response values. But \(P_{1}\) and \(P_{2}\) come from the same image \(I_{1}\). They should be exactly the same. Given this, as shown in Eq. 4, we build the dense supervision \(\mathcal{L}_{SC}\) by explicitly Figure 3: Different predictions may correspond to the same bounding box mask. reducing the distance between \(P_{1}\) and \(P_{2}\), where \((i,j)\) is the pixel coordinates. Note that only pixels inside bounding box are involved in \(\mathcal{L}_{SC}\) to emphasize more on polyp regions. Despite its simplicity, \(\mathcal{L}_{SC}\) brings pixel-level constraints to compensate for the sparsity of \(\mathcal{L}_{Sum}\), thus reducing the variety of predictions. \[\mathcal{L}_{SC}=\frac{\sum_{(i,j)\in box}|P_{1}^{i,j}-P_{2}^{i,j}|}{\sum_{(i,j) \in box}1} \tag{4}\] ### Total Loss As shown in Eq. 5, combining \(\mathcal{L}_{Sum}\) and \(\mathcal{L}_{SC}\) together, we get WeakPolyp model. Note that WeakPolyp simply replaces the supervision loss without making any changes to the model structure. Therefore, it is general and can be ported to other models. Besides, \(\mathcal{L}_{Sum}\) and \(\mathcal{L}_{SC}\) are only used during training. In inference, they will be removed, thus having no effect on the speed of the model. \[\mathcal{L}_{Total}=\mathcal{L}_{Sum}+\mathcal{L}_{SC} \tag{5}\] ## 3 Experiments **Datasets.** Two large polyp datasets are adopted to evaluate the model performance, including SUN-SEG [9] and POLYP-SEG. SUN-SEG originates from [7, 10], which consists of 19,544 training images, 17,070 easy testing images, and 12,522 hard testing images. 
POLYP-SEG is our private polyp segmentation dataset, \begin{table} \begin{tabular}{c|c|c c c c c c c c c} \hline \hline \multirow{3}{*}{**Bac.**} & \multirow{3}{*}{**Sup.**} & \multicolumn{8}{c}{**SUN-SEG**} & \multicolumn{4}{c}{**POLYP-SEG**} \\ \cline{3-11} & & & Easy & Testing & Hard & Testing & Training & Testing & Training \\ \cline{3-11} & & & Dice & IoU & Dice & IoU & Dice & IoU & Dice & IoU \\ \hline \multirow{4}{*}{Res.} & gt &.772 &.693 &.798 &.716 &.931 &.876 &.761 &.684 &.936 &.884 \\ & grabcut &.595 &.514 &.617 &.530 &.706 &.608 &.660 &.579 &.778 &.687 \\ & box &.715 &.601 &.718 &.599 &.806 &.685 &.686 &.566 &.804 &.683 \\ & **Ours** &.792 &.715 &.807 &.727 &.899 &.826 &.760 &.680 &.909 &.842 \\ \hline \multirow{4}{*}{Pvt.} & gt &.851 &.780 &.858 &.784 &.932 &.878 &.793 &.715 &.936 &.883 \\ & grabcut &.741 &.648 &.747 &.649 &.766 &.670 &.644 &.559 &.780 &.683 \\ \cline{1-1} & box &.769 &.652 &.770 &.648 &.804 &.681 &.734 &.611 &.824 &.705 \\ \cline{1-1} & **Ours** &.853 &.781 &.854 &.777 &.907 &.839 &.792 &.707 &.922 &.859 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison between different baselines and our WeakPolyp, involving two datasets (SUN-SEG and POLYP-SEG) and two backbones (Res2Net-50 [6] and PVTv2-B2 [17]). The **gt** row is the performance upper bound. The **box** row is the performance lower bound. **’Bac.’** means backbone. **’Sup.’** means supervision. The highest and second-highest scores are marked in red and blue, respectively which contains 15,916 training images and 4,040 testing images. Note that, during training, only bounding box annotations are adopted in our WeakPolyp. **Training Settings.** WeakPolyp is implemented using PyTorch. All input images are uniformly resized to 352\(\times\)352. For data augmentation, random flip, random rotation, and multi-scale training are adopted. The whole network is trained in an end-to-end way with an AdamW optimizer. Initial learning rate and batch size are set to 1e-4 and 16, respectively. We train the entire model for 16 epochs. **Quantitative Comparison.** Table. 1 compares the model performance under different supervisions, backbones, and datasets. The overall performance order is _gt> WeakPolyp>box>grabcut_. The model supervised by grabcut [13] masks performs the worst, because the foreground and background of polyp images are similar. Grabcut can not well distinguish between them, resulting in poor masks. Our WeakPolyp predictably outperforms the model supervised by box masks because it is not affected by the box-shape bias of the annotations. Interestingly, WeakPolyp even surpasses the fully supervised model on SUN-SEG, which indicates that there is a lot of noise in the pixel-level annotations. But WeakPolyp does not require pixel-level annotations so it avoids noise interference. **Visual Comparison.** Fig. 4 visualizes some predictions based on different supervisions. 
Compared with other counterparts, WeakPolyp not only highlights \begin{table} \begin{tabular}{l|c c c c c c c c} \hline \hline \multirow{2}{*}{**Modules**} & \multicolumn{4}{c}{**Res2Net-50**} & \multicolumn{4}{c}{**PVTv2-B2**} \\ \cline{2-10} & Easy & Testing & Hard & Testing & Easy & Testing & Hard & Testing \\ \cline{2-10} & Dice & IoU & Dice & IoU & Dice & IoU & Dice & IoU \\ \hline Base &.715 &.601 &.718 &.599 &.769 &.652 &.770 &.648 \\ Base+M2B &.748 &.654 &.768 &.673 &.822 &.738 &.822 &.735 \\ Base+M2B+SC &.792 &.715 &.807 &.727 &.853 &.781 &.854 &.777 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation studies on the SUN-SEG testing set under different backbones. Figure 4: Visualization comparison between predictions based on different supervisions. the polyp shapes but also suppresses the background noise. Even for challenging scenarios, WeakPolyp still handles well and generates accurate masks. **Ablation Study.** To investigate the importance of each component in WeakPolyp, we evaluate the model on both Res2Net-50 and PVTv2-B2 for ablation studies. As shown in Table 2, all proposed modules are beneficial for the final predictions. Combining all these modules, our model achieves the highest performance. **Compared with Fully Supervised Methods.** Table. 3 shows our WeakPolyp is even superior to many previous fully supervised methods: PraNet [5], SANet [19], 2/3D [12] and PNS+ [9], which shows the excellent application prospect of weakly supervised learning in the polyp field. ## 4 Conclusion Limited by expensive labeling cost, pixel-level annotations are not readily available, which hinders the development of the polyp segmentation field. In this paper, we propose the WeakPolyp model completely based on bounding box annotations. WeakPolyp requires no pixel-level annotations, thus avoiding the interference of subjective noise labels. More importantly, WeakPolyp even achieves a comparable performance to the fully supervised models, showing the great potential of weakly supervised learning in the polyp segmentation field. In future, we will introduce temporal information into weakly supervised polyp segmentation to further reduce the model's dependence on labeling. ## 5 Acknowledgement This work was supported in part by Shenzhen General Program No. JCYJ20220530143600001, by the Basic Research Project No. HZQB-KCZYZ-2021067 of Hetao Shenzhen HK S&T Cooperation Zone, by Shenzhen-Hong Kong Joint Funding No. SGDX20211123112401002, by Shenzhen Outstanding Talents Training Fund, by Guangdong Research Project No. 2017ZT07X152 and No. 2019CX01X104, by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No. 
2022B1212010001), by the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong \begin{table} \begin{tabular}{l|c|c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Conference**} & \multirow{2}{*}{**Backbone**} & \multicolumn{2}{c}{**Easy Testing**} & \multicolumn{2}{c}{**Hard Testing**} \\ \cline{3-7} & & & Dice & IoU & Dice & IoU \\ \hline PraNet [5] & MICCAI 2020 & Res2Net-50 &.689 &.608 &.660 &.569 \\ 2/3D [12] & MICCAI 2020 & ResNet-101 &.755 &.668 &.737 &.643 \\ SANet [19] & MICCAI 2021 & Res2Net-50 &.693 &.595 &.640 &.543 \\ PNS+ [9] & MIR 2022 & Res2Net-50 &.787 &.704 &.770 &.679 \\ \hline **Ours** & & Res2Net-50 & **.792** & **.715** & **.807** & **.727** \\ **Ours** & & PVTv2-B2 & **.853** & **.781** & **.854** & **.777** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison with previous fully supervised models on SUN-SEG. Kong, Shenzhen, by the NSFC 61931024&81922046, by zelixir biotechnology company Fund, by Tencent Open Fund.
2303.04248
TRACT: Denoising Diffusion Models with Transitive Closure Time-Distillation
Denoising Diffusion models have demonstrated their proficiency for generative sampling. However, generating good samples often requires many iterations. Consequently, techniques such as binary time-distillation (BTD) have been proposed to reduce the number of network calls for a fixed architecture. In this paper, we introduce TRAnsitive Closure Time-distillation (TRACT), a new method that extends BTD. For single step diffusion, TRACT improves FID by up to 2.4x on the same architecture, and achieves new single-step Denoising Diffusion Implicit Models (DDIM) state-of-the-art FID (7.4 for ImageNet64, 3.8 for CIFAR10). Finally, we tease apart the method through extended ablations. The PyTorch implementation will be released soon.
David Berthelot, Arnaud Autef, Jierui Lin, Dian Ang Yap, Shuangfei Zhai, Siyuan Hu, Daniel Zheng, Walter Talbott, Eric Gu
2023-03-07T21:46:15Z
http://arxiv.org/abs/2303.04248v1
# TRACT: Denoising Diffusion Models with Transitive Closure Time-Distillation ###### Abstract Denoising Diffusion models have demonstrated their proficiency for generative sampling. However, generating good samples often requires many iterations. Consequently, techniques such as binary time-distillation (BTD) have been proposed to reduce the number of network calls for a fixed architecture. In this paper, we introduce TRAnsitive Closure Time-distillation (TRACT), a new method that extends BTD. For single step diffusion, TRACT improves FID by up to 2.4\(\times\) on the same architecture, and achieves new single-step Denoising Diffusion Implicit Models (DDIM) state-of-the-art FID (7.4 for ImageNet64, 3.8 for CIFAR10). Finally we tease apart the method through extended ablations. The PyTorch [37] implementation will be released soon. ## 1 Introduction Diffusion models [45; 47; 15] represent state-of-the-art generative models for many domains and applications. They work by learning to estimate the score of a given data distribution, which in practice can be implemented with a denoising autoencoder following a noise schedule. Training a diffusion model is arguably much simpler compared to many alternative generative modeling approaches, e.g., GANs [10], normalizing flows [7] and auto-regressive models [3]. The loss is well-defined and stable; there is a large degree of flexibility to design the architecture; and it directly works with continuous inputs without the need for discretization. These properties make diffusion models demonstrate excellent scalability to large models and datasets, as shown in recent works in diverse domains such as image generation [16; 6], image or audio super-resolution [27; 13; 28; 43], audio and music synthesis [31; 35; 25; 4; 36], language models [29; 9; 19; 2], and cross-domain applications such as text-to-image and text-to-speech [40; 42; 22; 39; 41] Despite the empirical success, inference efficiency remains a major challenge for diffusion models. As shown in [48], the inference process of diffusion models can be cast as solving a neural ODE [5], where the sampling quality improves as the discretization error decreases. As a result, up to thousands of denoising steps are used in practice in order to achieve high sampling quality. This dependency on a large number of inference steps makes diffusion models less favorable compared to one-shot sampling methods, e.g., GANs, especially in resource-constrained deployment settings. Existing efforts for speeding up inference of diffusion models can be categorized into three classes: (1) reducing the dimensionality of inputs [41; 50; 12]; (2) improving the ODE solver [24; 32]; and (3) progressively distilling the output of a teacher diffusion model to a student model with fewer steps [44; 34]. Among these, the progressive distillation approach is of special interest to us. It uses the fact that with a Denoising Diffusion Implicit Model (DDIM) inference schedule [46], there is a deterministic mapping between the initial noise and the final generated result. This allows one to learn an efficient student model that approximates a given teacher model. A naive implementation of such distillation would be prohibitive, as for each student update, the teacher network needs to be called \(T\) times (where \(T\) is typically large) for each student network update. Salimans and Ho [44] bypass this issue by performing progressive binary time distillation (BTD). 
In BTD, the distillation is divided into \(\log_{2}(T)\) phases, and in each phase, the student model learns the inference result of two consecutive teacher model inference steps. Experimentally, BTD can reduce the inference steps to four with minor performance loss on CIFAR10 and 64x64 ImageNet. In this paper, we aim to push the inference efficiency of diffusion models to the extreme: one-step inference with high quality samples. We first identify critical drawbacks of BTD that prevent it from achieving this goal: 1) objective degeneracy, where the approximation error accumulates from one distillation phase to the next, and 2) the prevention of using aggressive stochastic weights averaging (SWA) [21] to achieve good generalization, due to the fact that the training course is divided into \(\log_{2}(T)\) distinct phases. Motivated by these observations, we propose a novel diffusion model distillation scheme named TRAnsitive Closure Time-Distillation (TRACT). In a nutshell, TRACT trains a student model to distill the output of a teacher model's inference output from step \(t\) to \(t^{\prime}\) with \(t^{\prime}<t\). The training target is computed by performing one step inference update of the teacher model to get \(t\to t-1\), followed by calling the student model to get \(t-1\to t^{\prime}\), in a bootstrapping fashion. At the end of distillation, one can perform one-step inference with the student model by setting \(t=T\) and \(t^{\prime}=0\). We show that TRACT can be trained with only one or two phases, which avoids BTD's objective degeneracy and incompatibility with SWA. Experimentally, we verify that TRACT drastically improves upon the state-of-the-art results with one and two steps of inference. Notably, it achieves single-step FID scores of 7.4 and 3.8 for 64x64 ImageNet and CIFAR10 respectively. ## 2 Related Work BackgroundDDIMs [46] are a subclass of Denoising Diffusion Probabilistic Models (DDPM) [15] where the original noise is reused at every step \(t\) of the inference process. Typically DDIMs use a \(T\)-steps noise schedule \(\gamma_{t}\in[0,1)\) for \(t\in\{1,\dots,T\}\). By convention, \(t=0\) denotes the noise-free step and therefore \(\gamma_{0}=1\). In the variance preserving (VP) noisification setting, a noisy sample \(x_{t}\) is produced from the original sample \(x_{0}\) and some Gaussian noise \(\epsilon\) as follows: \[x_{t}=x_{0}\sqrt{\gamma_{t}}+\epsilon\sqrt{1-\gamma_{t}} \tag{1}\] A neural network \(f_{\theta}\) is trained to predict either the signal, the noise or both. The estimations of \(x_{0}\) and \(\epsilon\) at step \(t\) are denoted as \(x_{0|t}\) and \(\epsilon_{|t}\). For the sake of conciseness, we only detail the signal prediction case. During the denoisification phase, the predicted \(x_{0|t}\) is used to estimate \(\epsilon_{|t}\) by substitution in Equation (1): \[x_{0|t}\coloneqq f_{\theta}(x_{t},t)\text{ and }\epsilon_{|t}=\frac{x_{t}-x_{0|t} \sqrt{\gamma_{t}}}{\sqrt{1-\gamma_{t}}}\] These estimates allow inference, by substitution in Equation (1), of \(x_{t^{\prime}}\) for any \(t^{\prime}\in\{0,\dots,T\}\): \[x_{t^{\prime}}=\delta(f_{\theta},x_{t},t,t^{\prime})\coloneqq x_{t}\frac{ \sqrt{1-\gamma_{t^{\prime}}}}{\sqrt{1-\gamma_{t}}}+f_{\theta}(x_{t},t)\frac{ \sqrt{\gamma_{t^{\prime}}(1-\gamma_{t})}-\sqrt{\gamma_{t}(1-\gamma_{t^{\prime} })}}{\sqrt{1-\gamma_{t}}} \tag{2}\] Here we introduced the step function \(\delta(f_{\theta},x_{t},t,t^{\prime})\) to denote DDIM inference from \(x_{t}\) to \(x_{t^{\prime}}\). 
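To make the step-function notation concrete, here is a minimal sketch of \(\delta\) for the signal-predicting case of Eq. (2). The function signature and the way the noise schedule is stored are our own illustrative choices, not code from [44].

```python
import torch

def ddim_step(f_theta, x_t: torch.Tensor, t: int, t_prime: int,
              gamma: torch.Tensor) -> torch.Tensor:
    """DDIM update of Eq. (2): move x_t from noise level gamma[t] to
    gamma[t_prime], using the signal estimate x0 = f_theta(x_t, t).
    `gamma` is the VP schedule with gamma[0] = 1 (noise-free step)."""
    g_t, g_s = gamma[t], gamma[t_prime]
    x0 = f_theta(x_t, t)
    denom = (1.0 - g_t) ** 0.5
    skip = (1.0 - g_s) ** 0.5 / denom
    out = ((g_s * (1.0 - g_t)) ** 0.5 - (g_t * (1.0 - g_s)) ** 0.5) / denom
    return skip * x_t + out * x0
```

Full DDIM sampling then amounts to iterating `ddim_step` along a decreasing sequence of steps, which is exactly the chain that time-distillation tries to shorten.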
Advanced ODE solversA common framework in the denoisification process is to use stochastic differential equations (SDEs) that maintain the desired distribution \(p\) as the sample \(x\) evolves over time [24; 48]. Song et. al. presented a corresponding probability flow ordinary differential equation (ODE) with the initial generated noise as the only source of stochasticity. Compared to SDEs, ODEs can be solved with larger step sizes as there is no randomness between steps. Another advantage of solving probability flow ODEs is that we can use existing numerical ODE solvers to accelerate sampling in the denoisification phase. However, solving ODEs numerically approximates the true solution trajectory due to the truncation error from the solver. Popular numerical ODE solvers include first-order Euler's method and higher-order methods such as Runge-Kutta (RK) [49]. Karras et. al. apply Heun's 2nd order method [1] in the family of explicit second-order RK to maintain a tradeoff between truncation error and number of function evaluations (NFEs) [24; 23; 8]. However, existing ODE solvers are unable to generate high-quality samples in the few-step sampling regime (we loosely define few-steps regime in \(\approx 5\) steps). RK methods may suffer from numerical issues with large step sizes [18; 17]. Our work provides an orthogonal direction to these ODE solvers, and TRACT outputs can be further refined with higher-order methods. Diffusion model distillationThe idea of distilling a pretrained diffusion model to a single step student is first introduced in [33]. Despite encouraging results, it suffers from high training costs and sampling quality degradation. This idea is later extended in [44; 20; 34], where one progressively distills a teacher model to a student by reducing its total steps by a factor of two. Specifically, in Binary Time-Distillation (BTD) [44], a student network \(g_{\phi}\) is trained to replace two denoising steps of the teacher \(f_{\theta}\). Using the step function notation, \(g_{\phi}\) is modeled to hold this equality: \[\delta(g_{\phi},x_{t},t,t-2)\approx x_{t-2}\coloneqq\delta(f_{\theta},\delta( f_{\theta},x_{t},t,t-1),t-1,t-2) \tag{3}\] From this definition, we can determine the target \(\hat{x}\) that makes the equality hold (see Appendix A.1): \[\hat{x}=\frac{x_{t-2}\sqrt{1-\gamma_{t}}-x_{t}\sqrt{1-\gamma_{t-2}}}{\sqrt{ \gamma_{t-2}}\sqrt{1-\gamma_{t}}-\sqrt{\gamma_{t}}\sqrt{1-\gamma_{t-2}}} \tag{4}\] The signal loss is inferred by rewriting the noise prediction error (see Appendix A.2), yielding: \[\mathcal{L}(\phi)=\frac{\gamma_{t}}{1-\gamma_{t}}\left\|g_{\phi}(x_{t},t)- \hat{x}\right\|_{2}^{2} \tag{5}\] Once a student has been trained to completion, it becomes the teacher and the process is repeated until the final model has the desired number of steps. \(\log_{2}T\) training phases are required to distill a \(T\)-steps teacher to a single-step model and each trained student requires half the sampling steps of its teacher to generate high-quality samples. ## 3 Method We propose TRAnsitive Closure Time-Distillation (TRACT), an extension of BTD, that reduces the number of distillation phases from \(\log_{2}T\) to a small constant, typically \(1\) or \(2\). We focus on the VP setting used in BTD first, but the method itself is independent of it and we illustrate it in the Variance Exploding (VE) setting at the end of the section. 
While TRACT also works for noise-predicting objectives, we demonstrate it on signal-prediction where the neural network predicts an estimate of \(x_{0}\). ### Motivation We conjecture that the final quality of samples from a distilled model is influenced by the number of distillation phases and the length of each phase. As later validated in the experiments section, we consider two potential explanations as to why it is the case. #### Objective degeneracy In BTD, the student in the previous distillation phase becomes the teacher for the next phase. The student from the previous phase has a positive loss which yields an imperfect teacher for the next phase. These imperfections accumulate over successive generations of students which leads to objective degeneracy. #### Generalization Stochastic Weight Averaging (SWA) has been used to improve the performance of neural networks trained for DDPMs [15]. With Exponential Moving Average (EMA), the momentum parameter is limited by the training length: high momentum yields high-quality results but leads to over-regularized models if the training length is too short. This ties in with the time-distillation problem since the total training length is directly proportional to the number of training phases. TRACT is a multi-phase method where each phase distills \(T\)-steps schedule to \(T^{\prime}<T\) steps, and is repeated until the desired number of steps is reached. In a phase, the \(T\)-steps schedule is partitioned into \(T^{\prime}\) contiguous groups. The partitioning strategy is left open; for example, in our experiments we used equally-sized groups as demonstrated in Algorithm (1). Our method can be seen as an extension of BTD which is not constrained by \(T^{\prime}=T/2\). However, computational implications arise from the relaxation of this constraint, such as the estimation of \(x_{t^{\prime}}\) from \(x_{t}\) for \(t^{\prime}<t\). For a contiguous segment \(\{t_{i},\ldots,t_{j}\}\), we model the student \(g_{\phi}\) to jump to step \(t_{i}\) from any step \(t_{i}<t\leq t_{j}\) as illustrated in Figure (1): \[\delta(g_{\phi},x_{t},t,t_{i})=\delta(f_{\theta},\delta(f_{\theta},\ldots \delta(f_{\theta},x_{t},t,t-1),\ldots),t_{i+1},t_{i}) \tag{6}\] The student \(g\) is specified to encompass \((t_{j}-t_{i})\) denoising steps of \(f\). However, this formulation could require multiple calls of \(f\) during training, leading to prohibitive computational costs. To resolve this issue, we use a self-teacher whose weights are an exponential moving average (EMA) [21] of the student \(g\). This approach is inspired from semi-supervised learning [26], reinforcement learning [30] and representation learning [11]. For a student network \(g\) with weights \(\phi\), we denote the EMA of its weights as \(\tilde{\phi}=\texttt{EMA}(\phi,\mu_{S})\) where \(\mu_{S}\in[0,1]\), the momentum, is an hyper-parameter. 
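In code, maintaining such an EMA copy takes only a few lines. The sketch below uses the bias-corrected form that is spelled out later in Eq. (16); the helper names are ours, and model buffers are ignored for brevity.

```python
import copy
import torch

def make_ema(model):
    """phi_tilde_0 = phi_0: start the EMA as a frozen copy of the student."""
    ema = copy.deepcopy(model)
    for p in ema.parameters():
        p.requires_grad_(False)
    return ema

@torch.no_grad()
def ema_update(ema_model, model, mu, step):
    """Bias-corrected EMA update: ema <- ema + w * (param - ema),
    with w = (1 - mu) / (1 - mu**step) at training step `step` >= 1."""
    w = (1.0 - mu) / (1.0 - mu ** step)
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.lerp_(p, w)
```

The same helper serves both averages used by TRACT: a low-momentum \(\mu_{S}\) copy acting as the self-teacher and a high-momentum \(\mu_{I}\) copy used at inference (Sec. 4.3).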
The transitive closure operator can now be modeled with self-teaching by rewriting the closure in Equation (6) as a recurrence: \[\delta(g_{\phi},x_{t},t,t_{i})\approx x_{t_{i}}\coloneqq\delta(g_{\tilde{\phi }},\delta(f_{\theta},x_{t},t,t-1),t-1,t_{i}) \tag{7}\] From this definition, we can determine the target \(\hat{x}\) that makes the equality hold using the same method as for Equation (4), see Appendix A.1 for details: \[\hat{x}=\frac{x_{t_{i}}\sqrt{1-\gamma_{t}}-x_{t}\sqrt{1-\gamma_{t_{i}}}}{ \sqrt{\gamma_{t_{i}}}\sqrt{1-\gamma_{t}}-\sqrt{\gamma_{t}}\sqrt{1-\gamma_{t_{ i}}}} \tag{8}\] For the special case \(t_{i}=t-1\), we have \(\hat{x}=f_{\theta}(x_{t},t)\). The loss is the standard signal-predicting DDIM distillation training loss, e.g. for a target value \(\hat{x}\): \[\mathcal{L}(\phi)=\frac{\gamma_{t}}{1-\gamma_{t}}\left\|g_{\phi}(x_{t},t)- \hat{x}\right\|_{2}^{2} \tag{9}\] ### Adapting TRACT to a Runge-Kutta teacher and Variance Exploding noise schedule To illustrate its generality, we apply TRACT to teachers from Elucidating the Design space of diffusion Models (EDM) [24] that use a VE noise schedule and an RK sampler. VE noise schedulesA VE noisification process is parameterized by a sequence of noise standard deviations \(\sigma_{t}\geq 0\) for \(t\in\{1,...,T\}\) with \(\sigma_{1}=\sigma_{min}\leq\sigma_{t}\leq\sigma_{T}=\sigma_{max}\), and \(t=0\) denotes the noise-free step \(\sigma_{0}=0\). A noisy sample \(x_{t}\) is produced from an original sample \(x_{0}\) and Gaussian noise \(\epsilon\) as follows: \[x_{t}=x_{0}+\sigma_{t}\epsilon \tag{10}\] Figure 1: Transitive Closure Distillation of a group \(\{t_{i},\ldots,t_{j}\}\). RK step functionFollowing on the EDM approach, we use an RK sampler for the teacher and distill it to a DDIM sampler for the student. The corresponding step functions are \(\delta_{RK}\) and \(\delta_{DDIM-VE}\), respectively. The \(\delta_{RK}\) step function to estimate \(x_{t^{\prime}}\) from \(x_{t}\), \(t>0\), is defined as: \[\delta_{RK}(f_{\theta},x_{t},t,t^{\prime})\coloneqq\begin{cases}x_{t}+(\sigma_ {t^{\prime}}-\sigma_{t})\epsilon(x_{t},t)&\text{if }t^{\prime}=0\\ x_{t}+\frac{1}{2}(\sigma_{t^{\prime}}-\sigma_{t})\left[\epsilon(x_{t},t)+ \epsilon(x_{t}+(\sigma_{t^{\prime}}-\sigma_{t})\epsilon(x_{t},t),t)\right]& \text{otherwise}\end{cases} \tag{11}\] where \(\epsilon(x_{t},t)\coloneqq\frac{x_{t}-f_{\theta}(x_{t},t)}{\sigma_{t}}\). The \(\delta_{DDIM-VE}\) step function to estimate \(x_{t^{\prime}}\) from \(x_{t}\), \(t>0\), is defined as: \[\delta_{DDIM-VE}(f_{\theta},x_{t},t,t^{\prime})\coloneqq f_{\theta}(x_{t},t) \left(1-\frac{\sigma_{t^{\prime}}}{\sigma_{t}}\right)+\frac{\sigma_{t^{\prime }}}{\sigma_{t}}x_{t} \tag{12}\] Then, learning the transitive closure operator via self-teaching requires: \[\delta_{DDIM-VE}(g_{\phi},x_{t},t,t_{i})\approx x_{t_{i}}\coloneqq\delta_{ DDIM-VE}(g_{\tilde{\phi}},\delta_{RK}(f_{\theta},x_{t},t,t-1),t-1,t_{i}) \tag{13}\] From this definition, we can again determine the target \(\hat{x}\) that makes the equality hold: \[\hat{x}=\frac{\sigma_{t}x_{t_{i}}-\sigma_{t_{i}}x_{t}}{\sigma_{t}-\sigma_{t_{i }}} \tag{14}\] The loss is then a weighted loss between the student network prediction and the target. 
We follow the weighting and network preconditioning strategies introduced in the EDM paper [24]: \[\mathcal{L}(\phi)=\lambda(\sigma_{t})\|g_{\phi}(x_{t},t)-\hat{x}\|_{2}^{2} \tag{15}\] The resulting distillation algorithm and details on the derivation of \(\delta_{RK}\), \(\delta_{DDIM-VE}\) as well as the training target \(\hat{x}\) can be found in Appendix A.8. ## 4 Experiments We present results with TRACT on two image generation benchmarks: CIFAR-10 and class-conditional 64x64 ImageNet. On each dataset, we measure the performance of our distilled models using the Frechet Inception Distance (FID)[14], computed from 50,000 generated samples. We run each experiment with three seeds to compute the mean and standard deviation. 1-step TRACT models improve FID from 9.1 to 4.5 on CIFAR-10 and from 17.5 to 7.4 on 64x64 ImageNet compared to their BTD [44] counterparts, using the exact same architecture and teacher models. We also present results with TRACT when distilling EDM teacher models [24] using a RK sampler and VE noise schedule: they further improve our FID results to 3.8 on CIFAR-10, see Table (1). We follow up with ablations of the key components of our method: momentums for self-teaching and inference EMAs, and distillation schedules. ### Image generation results with BTD teachers The teacher model in each TRACT distillation experiment is initialized from teacher checkpoints of the BTD paper [44]2 so as to be directly comparable to them. Footnote 2: [https://github.com/google-research/google-research/tree/master/diffusion_distillation](https://github.com/google-research/google-research/tree/master/diffusion_distillation) We use a two-phase \(T:1024\to 32\to 1\) distillation schedule. At the start of each phase, the student's weights are initialized from the current teacher being distilled. In the first phase, the teacher model uses a 1024-step sampling schedule and the student learns to generate samples in 32 steps. In the second phase, the teacher is initialized as the student from the previous phase, and the student learns to generate images in a single step. Cifar-10We experimented with two training lengths: 96M samples to match the BTD [44] paper, and 256M samples to showcase the benefits of longer training with TRACT. Our 1-step TRACT-96M model obtains an FID of 5.02 that cuts in almost half the previous state-of-the-art of 9.12 [44] with the same architecture and training budget. TRACT-256M further improves our 1-step FID results to 4.45. For both training budgets, we also run distillation experiments ending with a larger number of steps: \(T:1024\to 32\to K\) with \(K\in\{2,4,8\}\) and obtain state-of-the-art models at all steps. 1 and 2 step results are presented on Table 1 while 4 and 8 step results are presented on Table 7. More experimental details can be found in Appendix A.3. 64x64 ImageNetOn class-conditional 64x64 ImageNet, our single-step TRACT-96M student achieves a FID of 7.43, which improves our BTD counterpart by 2.4x. Due to resource constraints, we did not distill a TRACT model with as many training samples (1.2B) as BTD [44]. Therefore, the new state-of-the-art that we set on the same model architecture is obtained with a tenth of the training budget. 1 and 2 step results are presented in Table 2 while 4 and 8 step results are presented on Table 8. More experimental details can be found in Appendix A.3. 
\begin{table} \begin{tabular}{l c c c} \hline \hline Method & NFEs & FID & Parameters \\ \hline TRACT-EDM-256M\({}^{*}\) & 1 & **3.78**\(\pm 0.01\) & 56M \\ DFNO [51]\({}^{\dagger}\) & & 4.12 & 65.8M \\ TRACT-EDM-96M\({}^{*}\) & & 4.17 \(\pm 0.03\) & 56M \\ TRACT-256M & & 4.45 \(\pm 0.05\) & 60M \\ TRACT-96M & & 5.02 \(\pm 0.04\) & 60M \\ BTD-96M [44] & & 9.12 & 60M \\ \hline TRACT-256M & 2 & **3.32**\(\pm 0.02\) & 60M \\ TRACT-96M & & 3.53 \(\pm 0.03\) & 60M \\ TRACT-EDM-256M\({}^{*}\) & & 3.55 \(\pm 0.01\) & 56M \\ TRACT-EDM-96M\({}^{*}\) & & 3.75 \(\pm 0.02\) & 56M \\ BTD-96M [44] & & 4.51 & 60M \\ \hline \hline \end{tabular} \end{table} Table 1: FID results on CIFAR-10. \(\dagger\) Diffusion Fourier Neural Operators (DFNO) use a different model architecture for the student network and require generating a synthetic dataset for training. \(*\) TRACT-EDM models use better teachers. ### Image generation results with EDM teachers EDM models [24] are initialized from checkpoints released with the paper3, which are based off NCSN++ architecture[48] for CIFAR-10, and ADM architecture[6] for 64x64 ImageNet. Results for TRACT-EDM models are presented on Table 1 and 7 for CIFAR-10 as well as Table 2 and 8 for 64x64 ImageNet. Experimental details can be found in Appendix A.4. Footnote 3: [https://nvlabs-fi-cdn.nvidia.com/odm/pretrained/](https://nvlabs-fi-cdn.nvidia.com/odm/pretrained/) ### Stochastic Weight Averaging ablations TRACT uses two different EMAs: one for the self-teacher and one for the student model used at inference time. The self-teacher uses a fast-moving (low momentum) EMA with momentum \(\mu_{S}\) and the inference model uses a slow-moving (high momentum) EMA with momentum \(\mu_{I}\). We study both momentums across ablations on CIFAR-10. ImplementationWe use a bias-corrected EMA for our experiments. We detail this implementation for the self-teacher weights \(\tilde{\phi}=\texttt{EMA}(\phi,\mu_{S})\). At the start of training \(\tilde{\phi}_{0}=\phi_{0}\), and it is updated at each training step \(i>0\) with: \[\tilde{\phi}_{i}=\left(1-\frac{1-\mu_{S}}{1-\mu_{S}^{i}}\right)\tilde{\phi}_{ i-1}+\frac{1-\mu_{S}}{1-\mu_{S}^{i}}\phi_{i}, \tag{16}\] We use the same implementation for the inference model weights \(\texttt{EMA}(\phi,\mu_{I})\) Self-teaching EMAThe momentum parameter \(\mu_{S}\) for the self-teaching EMA strikes a balance between convergence speed and training stability. With low \(\mu_{S}\), the self-teacher weights adapt rapidly to training updates but incorporate noise from the optimization process, leading to unstable self-teaching. On the other hand, higher \(\mu_{S}\) values yield stable self-teaching targets but introduce latency between the student model state and that of its self-teacher. This, in turn, results in outdated self-teacher targets yielding slower convergence. For the ablation study of \(\mu_{S}\), we fixed the distillation schedule to \(T:1024\to 32\to 1\), the training length to 48M samples per phase and \(\mu_{I}\) to 0.99995. Results are presented in Table 34. Performance decreases monotonically as the self-teaching EMA grows above a certain threshold (about \(0.9\) in this setting), which supports the slower convergence hypothesis for high values of this parameter. Results are equally worse for values at or below 0.01 and present a high variance. Similarly to observations made in BYOL [11], we found that a wide range of momentum parameter \(\mu_{S}\in[0.1,0.9]\) values gives good performance. 
In light of this, we set \(\mu_{S}=0.5\) for all other experiments. Footnote 4: The best result in the table does not match our best: throughout ablations, for simplicity and at the cost of performance, we did not allocate a larger share of the training budget to the \(32\to 1\) distillation phase Inference EMAWe use a slow-moving EMA of student weights at inference time, which has been shown empirically to yield better test time performance [21]. For the ablation study of \(\mu_{I}\), we fix the distillation schedule to \(T:1024\to 32\to 1\), training length per phase to 48M samples and \(\mu_{S}=0.5\). Results are presented in Table 4, we observe that values of \(\mu_{I}\) strongly affect performance. In A.7 \begin{table} \begin{tabular}{l c c c} \hline Method & NFEs & FID & Parameters \\ \hline TRACT-96M & 1 & **7.43**\(\pm\) 0.07 & 296M \\ TRACT-EDM-96M\({}^{*}\) & & 7.52 \(\pm\) 0.05 & 296M \\ DFNO [51]\({}^{\dagger}\) & & 8.35 & 329M \\ BTD-1.2B [44] & & 17.5 & 296M \\ \hline TRACT-EDM-96M\({}^{*}\) & 2 & **4.97**\(\pm\) 0.03 & 296M \\ TRACT-96M & & 5.24 \(\pm\) 0.02 & 296M \\ BTD-1.2B [44] & & 7.2 & 296M \\ \hline \end{tabular} \end{table} Table 2: FID results on 64x64 ImageNet. \(\dagger\) Diffusion Fourier Neural Operators (DFNO) use a different model architecture for the student network and require generating a synthetic dataset for training. \(*\) TRACT-EDM models use better teachers. we share a heuristic to compute \(\mu_{I}\) values yielding high quality results across experiments and for varying training lengths. ### Influence of the number of distillation phases In the VP setting, we find that TRACT performs best when using a 2-phase \(T:1024\to 32\to 1\) distillation schedule. Confirming our original conjecture, we observe that schedules with more phases suffer more from _objective degeneracy_. However, we observe the worst results were obtained with a single-phase distillation \(T:1024\to 1\). In that case, we suspect that due to the long chain of time steps, a phenomenon similar to gradient vanishing is happening. We present ablation results on CIFAR-10 with distillation schedules of increasing number of phases from 1 to 5: \(T:1024\to 1,\ T:1024\to 32\to 1,\ T:4096\to 256\to 16\to 1,\ T:4096\to 512\to 64\to 8\to 1,\ T:1025\to 256\to 64\to 16\to 4\to 1\). Fixed overall training lengthWe set \(\mu_{I}=0.99995\), \(\mu_{S}=0.5\) and the overall training length to 96M samples. Single-step FID results are presented in Table 5. Results clearly get worse with more distillation phases, providing support to the objective degeneracy hypothesis. Fixed training length per phaseTRACT with 3, 4 and 5 phase distillation schedules is trained again with an increased training budget, now set to 48M samples _per phase_. 1-step FID results are presented in Table 6. Many-phase schedules improve their performance but FID scores are still worse than with the 2-phase schedule, despite leveraging the same training budget per distillation phase. This suggests that the objective degeneracy problem cannot be fully solved at the cost of a reasonably higher training budget. Meanwhile, as seen in previous experiments (see Table (1)), 2-phase results with 256M samples improved markedly over 96M samples. Therefore, with a fixed training budget, distilling a 2-phase TRACT for longer might be the best choice. 
\begin{table} \begin{tabular}{c c|c|c} Distillation schedule & Phases & Training length & FID \\ \hline 1024, 32, 1 & 2 & 96M & **5.24** \\ 4096, 256, 16, 1 & 3 & 144M & 5.76 \\ 4096, 512, 64, 8, 1 & 4 & 192M & 6.83 \\ 1024, 256, 64, 16, 4, 1 & 5 & 240M & 7.04 \\ \end{tabular} \end{table} Table 6: Time Schedule ablations with fixed training length per phase on CIFAR-10. \begin{table} \begin{tabular}{c c} Self-teaching EMA & 1 step FID \\ \hline 0.0 & 6.32 \\ 0.001 & 6.38 \\ 0.01 & 7.29 \\ 0.1 & 5.34 \\ 0.5 & **5.24** \\ 0.9 & 6.04 \\ 0.99 & 7.61 \\ 0.999 & 8.30 \\ \end{tabular} \begin{tabular}{c c} Inference EMA & 1 step FID \\ \hline 0.999 & 6.91 \\ 0.9999 & 5.5 \\ 0.99995 & **5.24** \\ 0.9999 & 8.73 \\ \end{tabular} \end{table} Table 4: Inference time EMA ablation results on CIFAR-10. \begin{table} \begin{tabular}{c|c|c|c} Distillation schedule & Phases & Training length & 1 step FID \\ \hline 1024, 1 & 1 & 96M & 14.40 \\ 1024, 32, 1 & 2 & 96M & **5.24** \\ 4096, 256, 16, 1 & 3 & 96M & 6.06 \\ 4096, 512, 64, 8, 1 & 4 & 96M & 7.27 \\ 1024, 256, 64, 16, 4, 1 & 5 & 96M & 8.33 \\ \end{tabular} \end{table} Table 5: Time Schedule ablations with fixed overall training length on CIFAR-10. Binary Distillation comparisonTo further confirm that objective degeneracy is the reason why TRACT outperforms BTD [44], we compare BTD to TRACT on the same BTD-compatible schedule: the 10 phases \(T:1024\to 512\to 256\rightarrow...\to 2\to 1\). We set \(\mu_{I}=0.99995\) and 48M training samples per distillation phase for both experiments. In this setting, BTD outperforms TRACT with an FID of 5.95 versus 6.8. This is additional confirmation that BTD's inferior overall performance may come from its inability to leverage 2-phase distillation schedules. Besides the schedule, the other difference between the BTD and TRACT is the use of self-teaching by TRACT. This experiment also suggests that self-teaching may result in less efficient objectives than supervised training. ### Beyond time distillation In addition to reducing quality degradation with fewer sampling steps, TRACT can be used for knowledge distillation to other architectures, in particular smaller ones. Compared to TRACT-96M, we show a degradation from 5.02 to 6.47 FID at 1 sampling step on CIFAR-10 by distilling a model from 60.0M parameters to 19.4M. For more details, refer to A.9. ## 5 Conclusion Generating samples in a single step can greatly improve the tractability of diffusion models. We introduce TRAnsitive Closure Time-distillation (TRACT), a new method that significantly improves the quality of generated samples from a diffusion model in a few steps. This result is achieved by distilling a model in fewer phases and with stronger stochastic weight averaging than prior methods. Experimentally, we show that without architecture changes to prior work, TRACT improves single-step FID by up to 2.4\(\times\). Further experiments demonstrate that TRACT can also effectively distill to other architectures, in particular to smaller student architectures. While demonstrated on images datasets, our method is general and makes no particular assumption about the type of data. It is left to future work to apply it to other types of data. An interesting extension of TRACT could further improve the quality-efficiency trade-off: tpically, distillation steps in DDIMs/DDPMs have maxed out at 8192 due to computational costs of sampling. 
Since TRACT allows arbitrary reductions in steps between training phases, we could feasibly distill from much higher step counts teachers, where prior methods could not. This unexplored avenue could open new research into difficult tasks where diffusion models could not previously be applied. ### Acknowledgements We would like to thank Josh Susskind, Xiaoying Pang, Miguel Angel Bautista Martin and Russ Webb for their feedback and suggestions. ### Contributions Here are the authors contributions to the work: David Berthelot led the research and came up with the transitive closure method and working code prototypes. Arnaud Autef obtained CIFAR-10 results, designed and ran ablation experiments, set up multi-gpu and multi-node training via DDP. Walter Talbot helped with ablation experiments and with writing. Daniel Zheng worked on cloud compute infrastructure, set up multi-gpu and multi-node training via DDP, and ran experiments. Siyuan Hu implemented the FID, integrated the BTD paper's model into transitive closure framework and conducted the experiments of distillation to smaller architectures. Jierui Lin finalized data, training and evaluation pipeline, obtained 64x64 ImageNet results, integrated BTD's teacher models and noise schedule to our pipeline, reproduced binary distillation and its variants for ablation. Dian Ang Yap implemented EDM variants, and designed experiments for TRACT (VE-EDM) on CIFAR-10 and ImageNet. Shuangfei Zhai contributed to the discussions, writing and ablation studies. Eric Gu contributed to writing and conducted experiments for distillation to smaller architectures.
2303.05156
Local Implicit Normalizing Flow for Arbitrary-Scale Image Super-Resolution
Flow-based methods have demonstrated promising results in addressing the ill-posed nature of super-resolution (SR) by learning the distribution of high-resolution (HR) images with the normalizing flow. However, these methods can only perform a predefined fixed-scale SR, limiting their potential in real-world applications. Meanwhile, arbitrary-scale SR has gained more attention and achieved great progress. Nonetheless, previous arbitrary-scale SR methods ignore the ill-posed problem and train the model with per-pixel L1 loss, leading to blurry SR outputs. In this work, we propose "Local Implicit Normalizing Flow" (LINF) as a unified solution to the above problems. LINF models the distribution of texture details under different scaling factors with normalizing flow. Thus, LINF can generate photo-realistic HR images with rich texture details in arbitrary scale factors. We evaluate LINF with extensive experiments and show that LINF achieves the state-of-the-art perceptual quality compared with prior arbitrary-scale SR methods.
Jie-En Yao, Li-Yuan Tsao, Yi-Chen Lo, Roy Tseng, Chia-Che Chang, Chun-Yi Lee
2023-03-09T10:20:07Z
http://arxiv.org/abs/2303.05156v3
# Local Implicit Normalizing Flow for Arbitrary-Scale Image Super-Resolution ###### Abstract Flow-based methods have demonstrated promising results in addressing the ill-posed nature of super-resolution (SR) by learning the distribution of high-resolution (HR) images with the normalizing flow. However, these methods can only perform a predefined fixed-scale SR, limiting their potential in real-world applications. Meanwhile, arbitrary-scale SR has gained more attention and achieved great progress. Nonetheless, previous arbitrary-scale SR methods ignore the ill-posed problem and train the model with per-pixel L1 loss, leading to blurry SR outputs. In this work, we propose "Local Implicit Normalizing Flow" (LINF) as a unified solution to the above problems. LINF models the distribution of texture details under different scaling factors with normalizing flow. Thus, LINF can generate photo-realistic HR images with rich texture details in arbitrary scale factors. We evaluate LINF with extensive experiments and show that LINF achieves the state-of-the-art perceptual quality compared with prior arbitrary-scale SR methods. + Footnote †: * and \(\dagger\) indicate equal contribution. This work was developed during the internship of Jie-En Yao and Li-Yuan Tsao at MediaTek Inc. ## 1 Introduction Arbitrary-scale image super-resolution (SR) has gained increasing attention recently due to its tremendous application potential. However, this field of study suffers from two major challenges. First, SR aims to reconstruct high-resolution (HR) image from a low-resolution (LR) counterpart by recovering the missing high-frequency information. This process is inherently ill-posed since the same LR image can yield many plausible HR solutions. Second, prior deep learning based SR approaches typically apply upsampling with a pre-defined scale in their network architectures, such as squeeze layer [1], transposed convolution [2], and sub-pixel convolution [3]. Once the upsampling scale is determined, they are unable to further adjust the output resolutions without modifying their model architecture. This causes inflexibility in real-world applications. As a result, discovering a way to perform arbitrary-scale SR and produce photo-realistic HR images from an LR image with a single model has become a crucial research direction. A natural approach to addressing the one-to-many inverse problem in SR is to consider the solution as a distribution. Consequently, a number of generative-based SR methods [1, 4, 5, 6, 7, 8] have been proposed to tackle this ill-posed problem. Among them, flow-based SR methods show promise, as normalizing flow [9, 10, 11, 12] offers several advantages over other generative models. For instance, flow does not suffer from the training instability and mode collapse issues present in generative adversarial networks (GANs) [13]. Moreover, flow-based methods are computationally efficient compared to diffusion [14] and autoregressive (AR) [15, 16] models. Representative flow-based models, such as SRFlow [1] and HCFlow [7], are able to generate high-quality SR images and achieve state-of-the-art results on the benchmarks. However, these methods are restricted to fixed-scale SR, limiting their applicability. Another line of research focuses on arbitrary-scale SR. LIIF [17] employs local implicit neural representation to represent images in a continuous domain. 
It achieves arbitrary-scale SR by replacing fixed-scale upsample mod Figure 1: A comparison of the previous arbitrary-scale SR approaches and LINF. LINF models the distribution of texture details in HR images at arbitrary scales. Therefore, unlike the prior methods that tend to produce blurry images, LINF is able to generate arbitrary-scale HR images with rich and photo-realistic textures. ules with an MLP to query the pixel value at any coordinate. LTE [18] further estimates the Fourier information at a given coordinate to make MLP focus on learning high-frequency details. However, these works did not explicitly account for the ill-posed nature of SR. They adopt a per-pixel \(L1\) loss to train the model in a regression fashion. The reconstruction error favors the averaged output of all possible HR images, leading the model to generate blurry results. Based on the observation above, combining flow-based SR model with the local implicit module is a promising direction in which flow can account for the ill-posed nature of SR, and the local implicit module can serve as a solution to the arbitrary-scale challenge. Recently, LAR-SR [8] claimed that details in natural images are locally correlated without long-range dependency. Inspired by this insight, we formulated SR as a problem of learning the distribution of local texture patch. With the learned distribution, we perform super-resolution by generating the local texture separately for each non-overlapping patch in the HR image. With the new problem formulation, we present Local Implicit Normalizing Flow (LINF) as the solution. Specifically, a coordinate conditional normalizing flow models the local texture patch distribution, which is conditioned on the LR image, the central coordinate of local patch, and the scaling factor. To provide the conditional signal for the flow model, we use the local implicit module to estimate Fourier information at each local patch. LINF excels the previous flow-based SR methods with the capability to upscale images with arbitrary scale factors. Different from prior arbitrary-scale SR methods, LINF explicitly addresses the ill-posed issue by learning the distribution of local texture patch. As shown in Fig 1, hence, LINF can generate HR images with rich and reasonable details instead of the over-smoothed ones. Furthermore, LINF can address the issue of unpleasant generative artifacts, a common drawback of generative models, by controlling the sampling temperature. Specifically, the sampling temperature in normalizing flow controls the trade-off between PSNR (fidelity-oriented metric) and LPIPS [19] (perceptual-oriented metric). The contributions of this work can be summarized as follows: * We proposed a novel LINF framework that leverages the advantages of a local implicit module and normalizing flow. To the best of our knowledge, LINF is the first framework that employs normalizing flow to generate photo-realistic HR images at arbitrary scales. * We validate the effectiveness of LINF to serve as a unified solution for the ill-posed and arbitrary-scale challenges in SR via quantitative and qualitative evidences. * We examine the trade-offs between the fidelity- and perceptual-oriented metrics, and show that LINF does yield a better trade-off than the prior SR approaches. ## 2 Related Work In this section, we briefly review the previous deep learning based fixed-scale and arbitrary-scale SR methodologies. 
### Fixed-Scale Super-Resolution A number of previous approaches have been proposed in the literature with an aim to learn mapping functions from given LR images to fixed-scale HR ones. These approaches can be broadly categorized into PSNR-oriented methods [20, 21, 22, 23, 2, 3] and generative model based methods [1, 4, 5, 6, 7, 8, 24, 25, 26, 27, 28]. The former category deterministically maps an LR image to an HR one using the standard L1 or L2 losses as the learning objectives. Despite the promising performance on the PSNR metric, the L1 or L2 losses adopted by such methods usually drives the models to predict the average of all plausible HR images [1, 24, 29, 30], leading to an over-smoothed one. On the other hand, the latter category seeks to address the ill-posed nature of the SR problem by learning the distribution of possible HR images. Such methods include GAN-based SR, diffusion-based SR, flow-based SR, and AR-based SR. GAN-based SR methods [4, 5, 24, 25] train their SR models with adversarial loss, and are able to generate sharp and natural SR images. However, they sometimes suffer from training instability, mode collapse, and over-sharpen artifacts. Diffusion-based SR methods [6, 26] generate an HR image by iteratively refining a Gaussian noise using a denoising model conditioned on the corresponding LR image. These methods are promising and effective, nevertheless, the slow iterative denoise processes limit their practical applications. Flow-based SR methods [1, 7, 27, 28] utilize invertible normalizing flow models to parameterize a distribution. They are promising and achieve state-of-the-art results on the benchmark as they possess several advantages over other generative models, as discussed in Section 1. Among these methods, SRFlow [1] first pioneered the flow-based SR domain. It was then followed by HCFlow [7], which designed a hierarchical conditional mechanism in the flow framework and achieved better performance than SRFlow. Recently, LAR-SR [8] introduced the first AR-based SR model. It divides an image into non-overlapping patches, and learns to generate local textures in these patches using a local autoregressive model. ### Arbitrary-Scale Super-Resolution Despite the successes, the approaches discussed in Section 2.1 are only able to super-resolve LR images with pre-defined upsampling scales, which are usually restricted to certain integer values (e.g., \(2\times\sim 4\times\)). Meta-SR [31] first attempted to address this limitation by introducing a meta-learning based method to adaptively predict the weights of the upscaling filters for each scaling factor. This avenue is then explored by a number of follow-up endeav ors [17, 18, 32, 33, 34, 35, 36, 37]. RSAN [32] proposed a scale attention module to learn informative features according to the specified scaling factor. ArbSR [33] employed a plug-in module to perform scale-aware feature adaptation and scale-aware upsampling. Recently, LIIF [17] introduced the concept of local implicit neural representation. Given necessary feature embeddings and a coordinate in the real coordinate space \(\mathbb{R}^{2}\), LIIF enables the RGB value of that pixel coordinate to be decoded by a multilayer perceptron (MLP). Inspired by [38, 39, 40, 41], UltraSR [34] and IPE [35] enhanced LIIF by introducing positional encoding to the framework, allowing it to focus more on high-frequency details. 
The authors of LTE [18] further introduced the use of Fourier features in their local texture estimator for estimating the dominant frequencies of an image. ## 3 Methodology In this section, we first formally define the SR problem concerned by this paper, and provide an overview of the proposed framework. Then, we elaborate on the details of its modules, followed by a discussion of our training scheme. **Problem definition.** Given an LR image \(I^{LR}\in\mathbb{R}^{H\times W\times 3}\) and an arbitrary scaling factor \(s\), the objective of this work is to generate an HR image \(I^{HR}\in\mathbb{R}^{sH\times sW\times 3}\), where \(H\) and \(W\) represent the height and width of the LR image. Different from previous works, we formulate SR as a problem of learning the distributions of _local texture patches_ by normalizing flow, where '_texture_' is defined as the residual between an HR image and the bilinearly un-sampled LR counterpart. These local texture patches are constructed by grouping \(sH\times sW\) pixels of \(I^{HR}\) into \(h\times w\) non-overlapping patches of size \(n\times n\) pixels, where \(h=\lceil sH/n\rceil,w=\lceil sW/n\rceil\). The target distribution of a local texture patch \(m_{i,j}\) to be learned can be formulated as a conditional probability distribution \(p(m_{i,j}|I^{LR},x_{i,j},s)\), where \((i,j)\) represent the patch index, and \(x_{i,j}\in\mathbb{R}^{2}\) denotes the center coordinate of \(m_{i,j}\). The predicted local texture patches are aggregated together to form \(I^{HR}_{texture}\in\mathbb{R}^{sH\times sW\times 3}\), which is then combined with a bilinearly un-sampled image \(I^{LR}\in\mathbb{R}^{sH\times sW\times 3}\) via element-wise addition to derive the final HR image \(I^{HR}\). **Overview.** Fig. 2 provides an overview of the LINF framework, which consists of two modules: (1) a local implicit module, and (2) a coordinate conditional normalizing flow (or simply "_the flow model_" hereafter). The former generates the conditional parameters for the latter, enabling LINF to take advantages of both local implicit neural representation and normalizing flow. Specifically, the former first derives the local Fourier features [18] from \(I^{LR}\), \(x_{i,j}\), and \(s\). The proposed Fourier feature ensemble is then applied on the extracted features. Finally, given the ensembled feature, the latter utilizes an MLP to generate the parameters for the flow model to approximate \(p(m_{i,j}|I^{LR},x_{i,j},s)\). We next elaborate on their details and the training strategy. ### Coordinate Conditional Normalizing Flow Normalizing flow approximates a target distribution by learning a bijective mapping \(\boldsymbol{f}_{\theta}=f_{1}\circ f_{2}\circ...\circ f_{l}\) between a target space and a latent space, where \(\boldsymbol{f}_{\theta}\) denotes a flow model parameterized by \(\theta\), and \(f_{1}\) to \(f_{l}\) represent \(l\) invertible flow layers. 
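Before describing the flow layers in detail, the problem setup above can be made concrete with a short sketch: the texture is the residual between the HR image and the bilinearly upsampled LR image, grouped into non-overlapping \(n\times n\) patches together with their center coordinates. The function below is a minimal PyTorch illustration; the function name, the \([-1,1]\) coordinate convention, and the divisibility assumption are ours rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def texture_and_patches(hr: torch.Tensor, lr: torch.Tensor, n: int = 3):
    """Sketch of the problem setup in Sec. 3.

    hr: (3, sH, sW) HR image, lr: (3, H, W) LR image.  For brevity this sketch
    assumes sH and sW are multiples of n; the paper uses ceil(sH/n) x ceil(sW/n)
    patches in general.
    """
    sh, sw = hr.shape[-2:]
    up = F.interpolate(lr.unsqueeze(0), size=(sh, sw),
                       mode="bilinear", align_corners=False).squeeze(0)
    texture = hr - up                                   # I^HR_texture

    # Group the texture into non-overlapping n x n patches: (num_patches, 3, n, n).
    patches = texture.unfold(1, n, n).unfold(2, n, n)   # (3, h, w, n, n)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, n, n)

    # Patch-center coordinates, normalized to [-1, 1] (a common convention for
    # local implicit models; treat the exact normalization as an assumption).
    ys = torch.arange(sh // n) * n + (n - 1) / 2
    xs = torch.arange(sw // n) * n + (n - 1) / 2
    ys, xs = 2 * (ys + 0.5) / sh - 1, 2 * (xs + 0.5) / sw - 1
    centers = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1).reshape(-1, 2)

    # The final HR prediction adds the (predicted) texture back onto the
    # bilinearly upsampled LR image.
    return texture, patches, centers
```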
In LINF, the flow model approximates such a mapping between a local texture patch distribution \(p(m_{i,j}|I^{LR},x_{i,j},s)\) and a Gaussian distribution \(p_{z}(z)\) as: \[m_{i,j}=h_{0}\underset{f_{1}^{-1}}{\overset{f_{1}}{\rightleftarrows}}\ h_{1} \underset{f_{2}^{-1}}{\overset{f_{2}}{\rightleftarrows}}...\ h_{k-1} \underset{f_{k}^{-1}}{\overset{f_{k}}{\rightleftarrows}}h_{k}\...\ \underset{f_{l}^{-1}}{\overset{f_{l}}{\rightleftarrows}}h_{l}=z, \tag{1}\] where \(z\sim\mathcal{N}(0,\tau)\) is a Gaussian random variable, \(\tau\) is a temperature coefficient, \(h_{k}=f_{k}(h_{k-1})\), \(k\in[1,...,l]\), denotes a latent variable in the transformation process, and \(f_{k}^{-1}\) is the inverse of \(f_{k}\). By applying the change of variable technique, the mapping of the two distributions \(p(m_{i,j}|I^{LR},x_{i,j},s)\) and \(p_{z}(z)\) can be expressed as follows: \[\begin{split} log\ p_{\theta}(m_{i,j}|I^{LR},x_{i,j},s)& =log\ p_{z}(z)\\ &+\sum_{k=1}^{l}log\left|det\frac{\partial f_{k}(h_{k-1})}{ \partial h_{k-1}}\right|\end{split} \tag{2}\] The term \(log\ |det\frac{\partial f_{k}(h_{k-1})}{\partial h_{k-1}}|\) is the logarithm of the absolute Jacobian determinant of \(f_{k}\). As \(I^{HR}_{texture}\) (and hence, the local texture patches) can be directly derived from \(I^{HR}\), \(I^{LR}\), and \(s\) during the training phase, the flow model can be optimized by minimizing the negative log-likelihood loss. Figure 2: An illustration of the proposed LINF framework. LINF consists of two parts. The local implicit model first encodes an LR image, a local coordinate and a cell into Fourier features, which is followed by an MLP for generating the conditional parameters. The flow model then leverages these parameters to learn a bijective mapping between a local texture patch space and a latent space. During the inference phase, the flow model is used to infer local texture patches by transforming sampled \(z\)'s with \(f^{-1}\). Note that the values of \(\tau\) are different during the training and the inference phases, which are discussed in Section 4. Implementation details.Since the objective of our flow model is to approximate the distributions of local texture patches rather than an entire image, it is implemented with a relatively straightforward model architecture. The flow model is composed of ten flow layers, each of which consists of a linear layer and an affine injector layer proposed in [1]. Each linear layer \(k\) is parameterized by a learnable pair of weight matrix \(\mathcal{W}_{k}\) and bias \(\beta_{k}\). The forward and inverse operations of the linear layer can be formulated as: \[h_{k}=\mathcal{W}_{k}h_{k-1}+\beta_{k}\ \,\ \ h_{k-1}=\mathcal{W}_{k}^{-1}(h_{k}- \beta_{k}), \tag{3}\] where \(\mathcal{W}_{k}^{-1}\) is the inverse matrix of \(\mathcal{W}_{k}\). The Jacobian determinant of a linear layer is simply the determinant of the weight matrix \(\mathcal{W}_{k}\). Since the dimension of a local texture patch is relatively small (i.e., \(n\times n\) pixels), calculating the inverse and determinant of the weight matrix \(\mathcal{W}_{k}\) is feasible. On the other hand, the affine injector layers are employed to enable two conditional parameters \(\alpha\) and \(\phi\) generated from the local implicit module to be fed into the flow model. The incorporation of these layers allows the distribution of a local texture patch \(m_{i,j}\) to be conditioned on \(I^{LR}\), \(x_{i,j}\), and \(s\). 
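As a concrete illustration of Eqs. (2) and (3), the sketch below implements a single linear flow layer with its inverse and log-determinant, and accumulates the change-of-variables terms over a stack of layers. Class and function names are placeholders, and a unit-variance Gaussian prior is assumed for brevity; the conditional affine injector layer of Eq. (4) is described next.

```python
import math
import torch
import torch.nn as nn

class InvertibleLinear(nn.Module):
    """Linear flow layer of Eq. (3): h_k = W h_{k-1} + b.  Its contribution to
    Eq. (2) is log|det W|.  Sketch: the flattened patch dimension D = 3*n*n is
    small, so det(W) and W^{-1} can be computed directly, as argued above."""

    def __init__(self, dim: int):
        super().__init__()
        # Near-identity initialization keeps the map invertible at the start.
        self.weight = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, h):
        out = h @ self.weight.T + self.bias
        # log|det W| is the same for every sample, since W does not depend on h.
        logdet = torch.linalg.slogdet(self.weight).logabsdet
        return out, logdet.expand(h.shape[0])

    def inverse(self, h):
        return (h - self.bias) @ torch.linalg.inv(self.weight).T

def log_likelihood(layers, m):
    """Accumulate Eq. (2) over a stack of flow layers mapping m -> z,
    with a standard-normal prior on z (the training-time setting here)."""
    z, total_logdet = m, torch.zeros(m.shape[0])
    for layer in layers:
        z, logdet = layer(z)
        total_logdet = total_logdet + logdet
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return log_pz + total_logdet
```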
The conditional parameters are utilized to perform element-wise shifting and scaling of latent \(h\), expressed as: \[h_{k}=\alpha_{k}\odot h_{k-1}+\phi_{k}\ \,\ \ h_{k-1}=(h_{k}-\phi_{k})/\alpha_{k}, \tag{4}\] where \(k\) denotes the index of a certain affine injector layer, and \(\odot\) represents element-wise multiplication. The log-determinant of an affine injector layer is computed as \(\sum log(\alpha_{k})\), which sums over all dimensions of indices [1]. ### Local Implicit Module The goal of the local implicit module is to generate conditional parameters \(\alpha\) and \(\phi\) from the local Fourier features extracted from \(I^{LR}\), \(x_{q}\), and \(s\). This can be formulated as: \[\alpha,\phi=g_{\Phi}(E_{\Psi}(v^{*},x_{q}-x^{*},c)), \tag{5}\] where \(g_{\Phi}\) represents the parameter generation function implemented as an MLP, \(x_{q}\) is the center coordinate of a queried local texture patch in \(I^{HR}\), \(v^{*}\) is the feature vector of the 2D LR coordinate \(x^{*}\) which is nearest to \(x_{q}\) in the continuous image domain [17], \(c=2/s\) denotes the cell size, and \(x_{q}-x^{*}\) is known as the relative coordinate. Following [18], the local implicit module employs a local texture estimator \(E_{\Psi}\) to extract the Fourier features given any arbitrary \(x_{q}\). This function can be expressed as follows: \[E_{\Psi}(v^{*},x_{q}-x^{*},c):A\odot\begin{bmatrix}cos(\pi F(x_{q}-x^{*})+P) \\ sin(\pi F(x_{q}-x^{*})+P)\end{bmatrix}, \tag{6}\] where \(\odot\) denotes element-wise multiplication, and \(A\), \(F\), \(P\) are the Fourier features extracted by three distinct functions: \[A=E_{a}(v^{*}),F=E_{f}(v^{*}),P=E_{p}(c), \tag{7}\] where \(E_{a}\), \(E_{f}\), and \(E_{p}\) are the functions for estimating amplitudes, frequencies, and phases, respectively. In this work, the former two are implemented with convolutional layers, while the latter is implemented as an MLP. Given the number of frequencies to be modeled as \(K\), the dimensions of these features are \(A\in\mathbb{R}^{2K}\), \(F\in\mathbb{R}^{K\times 2}\), and \(P\in\mathbb{R}^{K}\). Fourier feature ensemble.To avoid color discontinuity when two adjacent pixels select two different feature vectors, a local ensemble method was proposed in [17] to allow RGB values to be queried from the nearest four feature vectors around \(x_{q}\) and fuse them with bilinear interpolation. If this method is employed, the forward and inverse transformation of our flow model \(f_{\theta}\) would be expressed as follows: \[\begin{split} z=\sum_{j\in\Upsilon}w_{j}*f_{\theta}(patch;g_{ \Phi}(E_{\Psi}(v_{j},x_{q}-x_{j},c)))\\ patch=\sum_{j\in\Upsilon}w_{j}*f_{\theta}^{-1}(z;g_{\Phi}(E_{\Psi}(v_{j},x _{q}-x_{j},c))),\end{split} \tag{8}\] where \(\Upsilon\) is the set of four nearest feature vectors, and \(w_{j}\) is the derived weight for performing bilinear interpolation. Albeit effective, local ensemble requires four forward passes of the local texture estimator \(E_{\Psi}\), the parameter generator \(g_{\Phi}\), and the flow model \(f_{\theta}\). To deal with this drawback, our local implicit module employs a different approach named "_Fourier feature ensemble_" to streamline the computation. Instead of directly generating four RGB samples and then fuse them in the image domain, we propose to ensemble the four nearest feature vectors right after the local texture estimator \(E_{\Psi}\). 
More specifically, these feature vectors are concatenated to form an ensemble \(\kappa=concat(\{w_{j}*E_{\Psi}(v_{j},x_{q}-x_{j},c),\forall j\in\Upsilon\})\), in which each feature vector is weighted by \(w_{j}\) to allow the model to focus more on closer feature vectors. The proposed technique requires \(g_{\Phi}\) and \(f_{\theta}\) to perform only one forward pass to capture the same amount of information as the local ensemble method and deliver same performance. It is expressed as: \[z=f_{\theta}(patch;g_{\Phi}(\kappa));patch=f_{\theta}^{-1}(z;g_{\Phi}(\kappa)). \tag{9}\] ### Training Scheme LINF employs a two-stage training scheme. In the first stage, it is trained only with the negative log-likelihood loss \(L_{nll}\). In the second stage, it is fine-tuned with an additional L1 loss on predicted pixels \(L_{pixel}\), and the VGG perceptual loss [30] on the patches predicted by the flow model \(L_{vgg}\). The total loss function \(L\) can be formulated as follows: \[\begin{split} L=&\lambda_{1}L_{nll}(patch_{gt})+ \lambda_{2}L_{pixel}(patch_{gt},patch_{\tau=0})\\ &+\lambda_{3}L_{vgg}(patch_{gt},patch_{\tau=0.8}),\end{split} \tag{10}\] where \(\lambda_{1}\)\(\lambda_{2}\), and \(\lambda_{3}\) are the scaling parameters, \(patch_{gt}\) denotes the ground-truth local texture patch, and (\(patch_{\tau=0}\), \(patch_{\tau=0.8}\)) represent the local texture patches predicted by LINF with temperature \(\tau=0\) and \(\tau=0.8\), respectively. ## 4 Experimental Results In this section, we report the experimental results, present the ablation analyses, and discuss the implications. ### Experimental Setups In this section, we describe the experimental setups. We compare LINF with previous arbitrary-scale SR methods and generative SR models to show that LINF is able to generate photo-realistic HR images for arbitrary scaling factors. Arbitrary-scale SR.We use the DIV2K [42] dataset for training and evaluate the performance on several widely used SR benchmark datasets, including Set5 [43], Set14 [44], B100 [45], and Urban100 [46]. To compare our LINF with the prior pixel-wise SR methods [17, 18], we set the patch size \(n\) to \(1\times 1\), which models the distribution of a single pixel. We use three different encoders, EDSR-baseline [21], RDN [22], and SwinIR [23], to extract features of LR images. In the first training stage, we train the models for \(1,000\) epochs, with a learning rate of \(1\times 10^{-4}\), which is halved at epochs \([200,400,600,800]\) for EDSR-baseline and RDN, and at epochs \([500,800,900,950]\) for SwinIR. In the second stage, we fine-tune EDSR-baseline and RDN for \(1,000\) epochs, and SwinIR for \(1,500\) epochs, with a fine-tune learning rate of \(5\times 10^{-5}\), which is halved at epochs [200, 400, 600, 800] for EDSR-baseline and RDN, and at epochs [800, 1100, 1300, 1400] for SwinIR. The parameters in Eq. (10) are set by \(\lambda_{1}=5\times 10^{-4}\), \(\lambda_{2}=1\), and \(\lambda_{3}=0\). The Adam optimizer is used for training. The batch size is \(16\) for EDSR-baseline and RDN, and \(32\) for SwinIR. Generative SR.For generative SR, our models are trained on both the DIV2K [42] and Flickr2K [47] datasets, with performance evaluation conducted using the DIV2K validation set. To effectively capture the underlying texture distribution, we set the patch size \(n\) to \(3\times 3\). The RRDB architecture [4] is employed as the encoder. 
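For reference, the fine-tuning objective of Eq. (10), with the weighting values quoted above, can be written out as the following sketch. The `flow.log_prob` and `flow.sample` methods and the `vgg_loss` callable are placeholders for whatever interfaces the actual implementation exposes; with \(\lambda_{3}=0\) the sketch reduces to the pixel-wise setting used for the arbitrary-scale experiments.

```python
import torch
import torch.nn.functional as F

def second_stage_loss(flow, cond, patch_gt, vgg_loss=None,
                      lambda_nll=5e-4, lambda_pixel=1.0, lambda_vgg=2.5e-2):
    """Sketch of Eq. (10) for one batch of conditional signals and GT patches.

    flow.log_prob(x, cond) -> per-sample log p(x | cond), as in Eq. (2)
    flow.sample(cond, tau) -> a texture patch drawn at temperature tau
    Both method names are assumptions, not the authors' API.
    """
    # Negative log-likelihood of the ground-truth texture patches.
    nll = -flow.log_prob(patch_gt, cond).mean()

    # L1 loss on the deterministic (tau = 0) prediction, i.e. the distribution mean.
    pixel = F.l1_loss(flow.sample(cond, tau=0.0), patch_gt)

    # VGG perceptual loss [30] on a random sample drawn at tau = 0.8.
    perceptual = torch.tensor(0.0)
    if vgg_loss is not None and lambda_vgg > 0:
        perceptual = vgg_loss(flow.sample(cond, tau=0.8), patch_gt)

    return lambda_nll * nll + lambda_pixel * pixel + lambda_vgg * perceptual
```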
The training parameters, such as epoch, learning rate, batch size, and optimizer settings, are maintained in alignment with RDN. Moreover, we set the loss weighting parameters to be \(\lambda_{1}=5\times 10^{-4}\), \(\lambda_{2}=1\), and \(\lambda_{3}=2.5\times 10^{-2}\), respectively. Training strategy.In the proposed LINF methodology, the model is trained utilizing scaling factors within a continuous range from \(\times 1\) to \(\times 4\). In practice, for each data sample within a mini-batch, a scale denoted as \(s\) is obtained by sampling from a uniform distribution \(U(1,4)\). The LR image dimensions are set to \(48\times 48\) pixels. As a result, this configuration necessitates the cropping of HR images of \(48s\times 48s\) pixels from the original training images. Subsequently, these HR images are down-sampled to their corresponding \(48\times 48\) pixel LR counterparts using bicubic interpolation. The dimensions of each HR image can be interpreted as a set of coordinate-patch pairs, with a total count of \((48s)^{2}\). From this set, a fixed number of \(48^{2}\) pairs are selected as the training data to ensure consistency in the quantity of training data samples across different patches. Evaluation metrics.In our experiments, fidelity-oriented metrics, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), are reported to facilitate a fair comparison with existing methods. However, PSNR and SSIM are known to be insufficient in reflecting perceptual quality for SR tasks. Therefore, an alternative metric, referred to as LPIPS [19], is employed to evaluate perceptual quality. Moreover, a Diversity metric, defined as the pixel value standard deviation of five samples, is utilized when comparing LINF with generative SR models to highlight the diversity of the SR images generated by LINF. Inference temperature.While the flow model maps the target distribution to a standard normal distribution \(\mathcal{N}(0,1)\) during the training phase, temperature can be adjusted in the testing phase. In the deterministic setting (\(\tau=0\)), the flow model operates similarly to PSNR-oriented SR models by generating the mean of the learned distribution. In contrast, when employing random samples with \(\tau>0\), the flow model generates diverse and photo-realistic results. We report both deterministic and random sample outcomes to demonstrate the distinct characteristics of our flow model. ### Arbitrary-Scale SR Table 1 presents a quantitative comparison between our LINF and the previous arbitrary-scale SR models [17, 18, 31]. Unlike previous arbitrary-scale SR methods, which only report PSNR, we take LPIPS into consideration to reflect the perceptual quality. We report results under deterministic and random sampling settings to validate the effectiveness of our model. In the random sample setting, we set \(\tau_{0}\) to 0.5 for \(\times 2\)-\(\times 4\) SR. As the SR scale increases, we decrease the sampling temperature to obtain more stable outputs by setting \(\tau_{0}=0.4\) for \(\times 6\) SR and \(\tau_{0}=0.2\) for \(\times 8\) SR. Our observations reveal that LINF significantly outperforms the prior methods in terms of the LPIPS metric when utilizing random sampling, indicating its ability to generate images with enhanced perceptual quality. The qualitative results depicted in Fig. 3 support the above findings, indicating that LINF can generate rich texture under arbitrary scales, while the previous PSNR-oriented method generates blurrier outcomes. 
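As an aside, the per-sample data preparation described in the training strategy above can be sketched as follows. The function name, the rounding of the crop size, and the clamping of the bicubic output are our assumptions rather than details taken from the paper; the sketch uses the pixel-based setting, while the \(3\times 3\) patch-based variant would gather an \(n\times n\) neighborhood around each sampled location instead.

```python
import random
import torch
import torch.nn.functional as F

def make_training_sample(img: torch.Tensor, lr_size: int = 48):
    """Sketch: scale s ~ U(1, 4), an HR crop of roughly 48s x 48s pixels,
    a bicubic 48 x 48 LR counterpart, and 48^2 coordinate/target pairs.

    img: (3, H, W) training image in [0, 1] with H, W >= 4 * lr_size.
    """
    s = random.uniform(1.0, 4.0)
    hr_size = round(lr_size * s)          # integer crop size (an assumption)

    # Random HR crop.
    _, h, w = img.shape
    top, left = random.randint(0, h - hr_size), random.randint(0, w - hr_size)
    hr = img[:, top:top + hr_size, left:left + hr_size]

    # Bicubic downsampling to the fixed 48 x 48 LR input.
    lr = F.interpolate(hr.unsqueeze(0), size=(lr_size, lr_size),
                       mode="bicubic", align_corners=False).squeeze(0).clamp(0, 1)

    # The flow models the texture residual (Sec. 3): HR minus bilinearly upsampled LR.
    up = F.interpolate(lr.unsqueeze(0), size=(hr_size, hr_size),
                       mode="bilinear", align_corners=False).squeeze(0)
    texture = hr - up

    # Keep a fixed number (48^2) of coordinate/target pairs per image.
    num_pairs = lr_size ** 2
    idx = torch.randperm(hr_size * hr_size)[:num_pairs]
    ys, xs = idx // hr_size, idx % hr_size
    coords = torch.stack([2 * (ys + 0.5) / hr_size - 1,
                          2 * (xs + 0.5) / hr_size - 1], dim=-1)   # in [-1, 1]
    targets = texture[:, ys, xs].T                                 # (num_pairs, 3)
    cell = 2.0 / s                                                 # cell size c = 2/s (Sec. 3.2)
    return lr, coords, targets, cell
```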
Moreover, LINF maintains competitive performance in terms of PSNR under the deterministic setting, validating that the learned distribution is centered around the average of all plausible HR images.

Table 1: Quantitative comparison between LINF and previous arbitrary-scale SR methods in PSNR\(\uparrow\) and LPIPS\(\downarrow\) across scale factors on the SR benchmark datasets.

### Generative SR

**Quantitative and qualitative results.** We compare LINF with GAN-based [4, 5], diffusion-based [6], AR-based [8], and flow-based [1, 7] SR models in Table 2 and Fig. 4. HCFlow+ and HCFlow++ are two versions of HCFlow [7]. The former employs fine-tuning with an L1 loss to enhance its PSNR performance, while the latter incorporates a VGG loss [30] and an adversarial loss to improve visual quality and LPIPS scores. In the random sampling setting, LINF outperforms all the baselines in terms of both PSNR and LPIPS, except for SRDiff and HCFlow++. Although LINF exhibits a marginally lower PSNR than SRDiff, it significantly surpasses SRDiff in LPIPS. Moreover, LINF outperforms HCFlow++ in PSNR with a comparable LPIPS score. These results suggest that LINF is a balanced model excelling in both PSNR and LPIPS, and they are further corroborated by Fig. 4. In the first row, SRFlow yields blurry results, while HCFlow and GAN-based models generate over-sharpened artifacts. On the other hand, LINF generates rich textures and achieves high fidelity when compared to the ground truth image. This evidence validates the effectiveness of LINF as a versatile and balanced model for achieving optimal performance in both PSNR and LPIPS metrics.

**Fidelity-perception trade-off.** Since SR presents an ill-posed problem, achieving optimal fidelity (i.e., the discrepancy between reconstructed and ground truth images) and perceptual quality simultaneously presents a considerable challenge [48]. As a result, the trade-off between fidelity and perceptual quality necessitates an in-depth ex
Computation time.To demonstrate the advantages of the proposed Fourier feature ensemble and local texture patch \begin{table} \begin{tabular}{l|c|c|c|c} \hline Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & Diversity\(\uparrow\) \\ \hline \hline ESRGAN [4] & 26.22 & 0.75 & 0.124 & 0 \\ RankSGRAN [5] & 26.55 & 0.75 & 0.128 & 0 \\ SRDiff [6] & 27.41 & 0.79 & 0.136 & 6.1 \\ LAR-SR [8] & 27.03 & 0.77 & 0.114 & - \\ SRFlow \(\tau=0.9\)[1] & 27.08 & 0.76 & 0.121 & 5.6 \\ HCFlow+ \(\tau=0.9\)[7] & 27.11 & 0.76 & 0.127 & 4.7 \\ HCFlow++ \(\tau=0.9\)[7] & 26.61 & 0.74 & 0.111 & 5.4 \\ **Ours \(\tau=0.8\)** & 27.33 & 0.76 & 0.112 & 5.1 \\ \hline SRFlow \(\tau=0\)[1] & 29.05 & 0.83 & 0.251 & 0 \\ HCFlow+ \(\tau=0\)[7] & 29.25 & 0.83 & 0.262 & 0 \\ HCFlow++ \(\tau=0\)[7] & 29.04 & 0.82 & 0.258 & 0 \\ **Ours \(\tau=0\)** & 29.14 & 0.83 & 0.248 & 0 \\ \hline \end{tabular} \end{table} Table 2: The \(\times 4\) SR results on the DIV2K [42] validation set. Note that PSNR and SSIM are evaluated on the RGB space. The best and second best results are marked in red and blue, respectively. Figure 4: The \(\times 4\) SR qualitative results of generative SR methods on the DIV2K [42] validation set. Figure 5: An illustration of the trade-off between PSNR and LPIPS with varying sampling temperatures \(\tau\). The sampling temperature increases from the top left corner (\(t=0.0\)) to the bottom right corner (\(t=1.0\)). The x-axis is reversed for improved visualization. Figure 6: An example for depicting the trade-off between fidelity- and perceptual-oriented results using different temperature \(\tau\). -based generative approach in enhancing the inference speed of LINF, we compare the average inference time for a single DIV2K image with that of the contemporary generative SR models [1, 7, 8]. As shown in Table 3, the inference time of LINF is approximately \(27.2\) times faster than the autoregressive (AR)-based SR models [8] and \(2.6\) times faster than the flow-based SR models [1, 7], while concurrently achieving competitive performance in terms of the LPIPS metric. ### Ablation Study Fourier feature ensemble.As discussed in Section 3.2, LINF employs a Fourier feature ensemble mechanism to replace the local ensemble mechanism. To validate its effectiveness, we compare the two mechanisms in Table 4. The results show that the former reduces the inference time by approximately \(33\%\) compared to the latter, while maintaining a competitive performance on the SR metrics. Moreover, neglecting to scale the amplitude of the Fourier features with ensemble weights results in a slightly worse performance. This validates that scaling the amplitude of the Fourier features with ensemble weights is effective, and enables LINF to focus on the more important information. Analysis of the impact of local region size.As described in Section 3, our proposed framework aims to learn the texture distribution of an \(n\times n\) local region, where \(n\) governs the region size. As a result, our model can be categorized as either pixel-based and patch-based by setting \(n=1\) and \(n>1\), respectively. Table 4 also presents a quantitative comparison between pixel-based and patch-based models. The results reveal that a pixel-based model can generate high-fidelity images with a superior PSNR compared to a patch-based one when the temperature is set to zero. However, in the random sample setting, a patch-based model can generate higher perceptual quality images with a lower LPIPS. 
This phenomenon is attributed to the local-incoherent issue when sampling with pixel-based method. Specifically, pixel-wise random sampling can occasionally result in incoherent color, as illustrated in Fig 7. In contrast, a patch-based model preserves local coherency by considering the distribution of a patch, thereby achieving enhanced visual quality. In addition, while a pixel-based model requires \(H\times W\) forward passes to generate an image of shape \(H\times W\), a patch-based model necessitates only \((\lceil H/n\rceil)\times(\lceil W/n\rceil)\) forward passes, yielding greater efficiency in inference, particularly for larger values of \(n\). ## 5 Conclusion In this paper, we introduced a novel framework called LINF for arbitrary-scale SR. To the best of our knowledge, LINF is the first approach to employ normalizing flow for arbitrary-scale SR. Specifically, we formulated SR as a problem of learning the distributions of local texture patches. We utilized coordinate conditional normalizing flow to learn the distribution and a local implicit module to generate conditional signals. Through our quantitative and qualitative experiments, we demonstrated that LINF can produce photo-realistic high-resolution images at arbitrary upscaling scales while achieving the optimal balance between fidelity and perceptual quality among all methods. ## Acknowledgements The authors gratefully acknowledge the support from the National Science and Technology Council (NSTC) in Taiwan under grant numbers MOST 111-2223-E-007-004-MY3 and MOST 111-2628-E-007-010, as well as the financial support from MediaTek Inc., Taiwan. The authors would also like to express their appreciation for the donation of the GPUs from NVIDIA Corporation and NVIDIA AI Technology Center (NVAITC) used in this work. Furthermore, the authors extend their gratitude to the National Center for High-Performance Computing (NCHC) for providing the necessary computational and storage resources. \begin{table} \begin{tabular}{l|c|c|c|c} \hline Method & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & Time (s)\(\downarrow\) \\ \hline \hline Local ensemble & **29.04** & 0.82 & 0.270 & 2.16 \\ Fourier ensemble & **29.04** & 0.82 & 0.270 & 1.44 \\ Fourier ensemble (-W) & 29.03 & 0.82 & 0.271 & 1.39 \\ \hline Fourier ensemble (+P) \(\tau=0\) & 28.85 & 0.82 & 0.273 & **0.33** \\ Fourier ensemble (+P) \(\tau=0.6\) & 27.43 & 0.77 & **0.158** & **0.33** \\ \hline \end{tabular} \end{table} Table 4: The \(\times 4\) SR results on the DIV2K [42] validation set. EDSR-baseline [21] is used as the encoder, -W refers to removing the amplitude scaling, and +P indicates the usage of 3\(\times\)3 patch-based model. The computation time is evaluated on an NVIDIA TITAN X. The best results are denoted in bold and underlined. \begin{table} \begin{tabular}{l|c|c|c} \hline Method & LPIPS \(\downarrow\) & Time (s)\(\downarrow\) & \#Param \\ \hline \hline LAR-SR [8] & 0.114 & 14.70 & 62.1M \\ SRFlow \(\tau=0.9\)[1] & 0.121 & 1.43 & 39.5M \\ HCFlow++ \(\tau=0.9\)[7] & **0.111** & 1.46 & 23.2M \\ Ours \(\tau=0.8\) & 0.112 & **0.54** & **17.5M** \\ \hline \end{tabular} \end{table} Table 3: The average \(\times 4\) SR inference time of a single DIV2K [42] image. The computation time is evaluated on an NVIDIA Tesla V100. The best results are denoted in bold and underlined. Figure 7: The local incoherence issue of the pixel-based method. Note that both images are sampled with a temperature of \(\tau=0.6\).
2302.12703
Reflexive polytopes and discrete polymatroids
A classification of discrete polymatroids whose independence polytopes are reflexive will be presented.
Jürgen Herzog, Takayuki Hibi
2023-02-24T16:01:34Z
http://arxiv.org/abs/2302.12703v1
# Reflexive polytopes and discrete polymatroids ###### Abstract. A classification of discrete polymatroids whose independence polytopes are reflexive will be presented. 2010 Mathematics Subject Classification: Primary 52B20; Secondary 05E40 The second author was supported by JSPS KAKENHI 19H00637 ## Introduction The discrete polymatroid is introduced in [1]. In the present paper, as a supplement to [1], a classification of discrete polymatroids whose independence polytopes are reflexive will be presented. We refer the reader to [1] and [2] for fundamental materials on discrete polymatroids. ## 1. Reflexive polytopes A convex polytope \(\mathcal{P}\subset\mathbb{R}^{d}\) of dimension \(d\) is called a _lattice polytope_ if each of its vertices belongs to \(\mathbb{Z}^{d}\). A _reflexive polytope_ is a lattice polytope \(\mathcal{P}\subset\mathbb{R}^{d}\) of dimension \(d\) for which the origin of \(\mathbb{R}^{d}\) belongs to the interior of \(\mathcal{P}\) and the dual polytope \(\mathcal{P}^{\vee}=\{\mathbf{x}\in\mathbb{R}^{d}:\langle\mathbf{x},\mathbf{y} \rangle\leq 1,\forall\mathbf{y}\in\mathcal{P}\}\) of \(\mathcal{P}\) is a lattice polytope, where \(\langle\mathbf{x},\mathbf{y}\rangle\) stands for the canonical inner product of \(\mathbb{R}^{d}\). A lattice polytope which can be a reflexive polytope by parallel shift is also called reflexive. Let \(\mathbf{e}_{1},\ldots,\mathbf{e}_{d}\) denote the canonical basis vectors of \(\mathbb{R}^{d}\). Let \(P\subset\mathbb{Z}^{d}_{+}\) be a _discrete polymatroid_[1, Definition 2.1] on the ground set \([d]\). In what follows one assumes that each \(\mathbf{e}_{i}\) belongs to \(P\). Let \(\mathcal{P}=\mathcal{P}_{P}\subset\mathbb{R}^{d}\) denote the lattice polytope which is the convex hull of \(P\) in \(\mathbb{R}^{d}\). We call \(\mathcal{P}\) the _independence polytope_ of \(P\). One has \(\dim\mathcal{P}=d\). Let \(\rho=\rho_{P}\) denote the _ground set rank function_[1, pp. 243] of \(\mathcal{P}\). It follows from [1, Theorem 7.3] that **Lemma 1.1**.: _The independence polytope \(\mathcal{P}\) is reflexive if and only if, for each subset \(X\subset[d]\) which is \(\rho\)-closed and \(\rho\)-inseparable_[1, pp. 257-258]_, one has \(\rho(X)=|X|+1\)._ A _sublattice_ of \(2^{[d]}\) is a collection \(\mathcal{L}\) of subsets of \([d]\) with \(\emptyset\in\mathcal{L}\) and \([d]\in\mathcal{L}\) such that, for all \(A\) and \(B\) belonging to \(\mathcal{L}\), one has \(A\cap B\in\mathcal{L}\) and \(A\cup B\in\mathcal{L}\). **Theorem 1.2**.: _(a) Let \(P\) be a discrete polymatroid on the ground set \([d]\) and \(\rho=\rho_{P}\) the ground set rank function of \(\mathcal{P}\). Let \(\mathcal{A}\) be the set of \(\rho\)-closed and \(\rho\)-inseparable subsets of \(\mathcal{P}\). If \(\mathcal{P}\) is reflexive, then \(\mathcal{A}\cup\{\emptyset\}\) is a sublattice of \(2^{[d]}\)._ _(b) Conversely, given a sublattice \(\mathcal{L}\) of \(2^{[d]}\), there exists a unique discrete polymatroid \(P\) on the ground set \([d]\) for which \(\mathcal{L}\) is the set of \(\rho\)-closed and \(\rho\)-inseparable subsets of \(\mathcal{P}\) and \(\mathcal{P}\) is reflexive._ Proof.: (a) If the independence polytope \(\mathcal{P}\) of \(P\) is reflexive, then Lemma 1.1 says that \(\rho(A)=|A|+1\) for each \(A\in\mathcal{A}\). It follows from [1, Proposition 7.2] that \(\mathcal{P}\) consists of those \((x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) for which \[x_{i}\geq 0,\ \ i=1,2,\ldots,d,\] and \[\sum_{i\in A}x_{i}\leq|A|+1,\ \ \ A\in\mathcal{A}. 
\tag{1}\] Since each \(\mathbf{e}_{i}\in\mathcal{P}\) and \(\mathcal{P}\) is compact, it follows that \[\bigcup_{A\in\mathcal{A}}A=[d].\] Furthermore, if \(X\not\in\mathcal{A}\), then \(\rho(X)>|X|+1\). In fact, if \(|X|=1\) and \(X=\{i\}\), then \(3\mathbf{e}_{i}\in\mathcal{P}\) and \(\rho(X)>2\). In general, if \(|X|=q\geq 2\) and \(X=\{i_{1},\ldots,i_{q}\}\) with \(i_{1}<\cdots<i_{q}\), then one has \[\mathbf{v}=\frac{q}{q-1}\sum_{j=1}^{q}\mathbf{e}_{i_{j}}\in\mathcal{P} \tag{2}\] and \[\rho(X)\geq|\mathbf{v}|=\frac{q^{2}}{q-1}>q+1.\] To see why (2) holds, one shows that \(\mathbf{v}\) satisfies each of the inequalities (1). Let \(A\in\mathcal{A}\) with \(X\subsetneq A\), then \[\frac{q^{2}}{q-1}\leq q+2=|X|+2\leq|A|+1.\] Let \(A\in\mathcal{A}\) with \(|X\cap A|=k<q\), then \[k\frac{q}{q-1}\leq k+1\leq|A|+1.\] One claims that \(\mathcal{A}\cup\{\emptyset\}\) is a sublattice of \(2^{[d]}\). Let \(A,B\in\mathcal{A}\) and suppose that either \(A\cup B\not\in\mathcal{A}\cup\{\emptyset\}\) or \(A\cap B\not\in\mathcal{A}\cup\{\emptyset\}\). Then \[\rho(A)+\rho(B)=|A|+|B|+2=|A\cup B|+|A\cap B|+2<\rho(A\cup B)+\rho(A\cap B),\] which contradicts the fact that \(\rho\) is submodular. Furthermore, since \(\bigcup_{A\in\mathcal{A}}A=[d]\), one has \([d]\in\mathcal{A}\), as desired.

(b) By virtue of [1, Theorem 9.1] one introduces the nondecreasing submodular function \(\rho:2^{[d]}\to\mathbb{Z}_{+}\) by setting \[\rho(X)=\min\{|A|+1:X\subseteq A,A\in\mathcal{L}\},\ \ \emptyset\neq X\subset[d]\] together with \(\rho(\emptyset)=0\). Let \(P\) be the discrete polymatroid on the ground set \([d]\) and \(\rho\) the ground set rank function of \(\mathcal{P}\). Then \(\mathcal{L}\) is the set of \(\rho\)-closed and \(\rho\)-inseparable subsets of \([d]\). Furthermore, Lemma 1.1 guarantees that the independence polytope \(\mathcal{P}\) of \(P\) is reflexive. On the other hand, suppose that \(P^{\prime}\) is a discrete polymatroid on the ground set \([d]\) and \(\rho^{\prime}\) the ground set rank function of the independence polytope \(\mathcal{P}^{\prime}\) of \(P^{\prime}\) for which \(\mathcal{L}\) is the set of \(\rho^{\prime}\)-closed and \(\rho^{\prime}\)-inseparable subsets of \([d]\) and for which \(\mathcal{P}^{\prime}\) is reflexive. Then by using Lemma 1.1 again one has \(\rho^{\prime}(A)=|A|+1\) for each \(A\in\mathcal{L}\). Hence \(\mathcal{P}=\mathcal{P}^{\prime}\) ([1, Proposition 7.2]). Thus \(P=P^{\prime}\) ([1, Theorem 3.4]), as desired.

## 2. Examples

**Example 2.1**.: Let \(\mathcal{P}\subset\mathbb{R}^{3}\) be the convex polytope whose facets are each \(x_{i}=0\) together with \[x_{1}+x_{2}=3,\ \ x_{2}+x_{3}=3,\ \ x_{1}+x_{2}+x_{3}=4.\] It can be checked that \(\mathcal{P}\) is reflexive. However, \(\mathcal{P}\) cannot be the independence polytope of a discrete polymatroid on the ground set [3]. In fact, if \(\mathcal{P}\) is the independence polytope of a discrete polymatroid \(P\) on the ground set [3], then both \(u=(0,3,0)\) and \(v=(1,2,1)\) belong to the set of bases [1, p. 245] of \(P\). One has \(|u|<|v|\), which contradicts [1, Theorem 2.3].

**Example 2.2**.: Let \(\mathcal{L}\) be a chain of length \(d\) of \(2^{[d]}\), say, \[\mathcal{L}=\{\emptyset,\{d\},\{d-1,d\},\ldots,\{1,\ldots,d\}\}.\] Let \(P\) denote the discrete polymatroid constructed in Theorem 1.2 (b). Let \[\mathcal{B}=\{[d],[d],[d-1],\ldots,[2],[1]\}.\] Let \(P^{\prime}\) denote the transversal polymatroid [1, p. 267] presented by \(\mathcal{B}\).
If \(X\subset[d]\) and \(i=\min(X)\), then it follows from the proof of Theorem 1.2 (b) that \[\rho_{P}(X)=|\{i,i+1,\ldots,d\}|+1=(d-(i-1))+1=d+2-i.\] On the other hand, by the definition of the ground set rank function of a transversal polymatroid, one has \[\rho_{P^{\prime}}(X)=(d+1)-(i-1)=d+2-i.\] Hence \(\rho_{P}=\rho_{P^{\prime}}\), and thus \(P=P^{\prime}\). It would be of interest to determine for which sublattices \(\mathcal{L}\) of \(2^{[d]}\) the discrete polymatroid constructed in Theorem 1.2 (b) is a transversal polymatroid.
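As a small illustration of Theorem 1.2 (b) (our example, not taken from [1]), consider \(d=2\) and the sublattice \(\mathcal{L}=\{\emptyset,\{2\},\{1,2\}\}\) of \(2^{[2]}\). The rank function constructed in the proof is \[\rho(\{2\})=2,\qquad\rho(\{1\})=\rho(\{1,2\})=3,\] so the independence polytope is \[\mathcal{P}=\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}\geq 0,\ x_{2}\geq 0,\ x_{2}\leq 2,\ x_{1}+x_{2}\leq 3\}.\] The lattice point \((1,1)\) lies in the interior of \(\mathcal{P}\), and after translating it to the origin every facet inequality takes the form \(\langle\mathbf{a},\mathbf{x}\rangle\leq 1\) with \(\mathbf{a}\in\{(-1,0),(0,-1),(0,1),(1,1)\}\subset\mathbb{Z}^{2}\). Hence the dual polytope is a lattice polytope and \(\mathcal{P}\) is reflexive, in agreement with Lemma 1.1, since \(\rho(A)=|A|+1\) for each \(A\in\mathcal{L}\setminus\{\emptyset\}\).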
2306.11925
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching
Obtaining large pre-trained models that can be fine-tuned to new tasks with limited annotated samples has remained an open challenge for medical imaging data. While pre-trained deep networks on ImageNet and vision-language foundation models trained on web-scale data are prevailing approaches, their effectiveness on medical tasks is limited due to the significant domain shift between natural and medical images. To bridge this gap, we introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets. We have collected approximately 1.3 million medical images from 55 publicly available datasets, covering a large number of organs and modalities such as CT, MRI, X-ray, and Ultrasound. We benchmark several state-of-the-art self-supervised algorithms on this dataset and propose a novel self-supervised contrastive learning algorithm using a graph-matching formulation. The proposed approach makes three contributions: (i) it integrates prior pair-wise image similarity metrics based on local and global information; (ii) it captures the structural constraints of feature embeddings through a loss function constructed via a combinatorial graph-matching objective; and (iii) it can be trained efficiently end-to-end using modern gradient-estimation techniques for black-box solvers. We thoroughly evaluate the proposed LVM-Med on 15 downstream medical tasks ranging from segmentation and classification to object detection, and both for the in and out-of-distribution settings. LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models. For challenging tasks such as Brain Tumor Classification or Diabetic Retinopathy Grading, LVM-Med improves previous vision-language models trained on 1 billion masks by 6-7% while using only a ResNet-50.
Duy M. H. Nguyen, Hoang Nguyen, Nghiem T. Diep, Tan N. Pham, Tri Cao, Binh T. Nguyen, Paul Swoboda, Nhat Ho, Shadi Albarqouni, Pengtao Xie, Daniel Sonntag, Mathias Niepert
2023-06-20T22:21:34Z
http://arxiv.org/abs/2306.11925v3
LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching ###### Abstract Obtaining large pre-trained models that can be fine-tuned to new tasks with limited annotated samples has remained an open challenge for medical imaging data. While pre-trained deep networks on ImageNet and vision-language foundation models trained on web-scale data are prevailing approaches, their effectiveness on medical tasks is limited due to the significant domain shift between natural and medical images. To bridge this gap, we introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets. We have collected approximately \(1.3\) million in medical images from 55 publicly available datasets, covering a large number of organs and modalities such as CT, MRI, X-ray, and Ultrasound. We benchmark several state-of-the-art self-supervised algorithms on this dataset and propose a _novel self-supervised contrastive learning algorithm using a graph matching formulation_. The proposed approach makes three contributions: (i) it integrates prior pair-wise image similarity metrics based on local and global information; (ii) it captures the structural constraints of feature embeddings through a loss function constructed via a combinatorial graph-matching objective; and (iii) it can be trained efficiently end-to-end using modern gradient-estimation techniques for black-box solvers. We thoroughly evaluate the proposed LVM-Med on \(15\) downstream medical tasks ranging from segmentation and classification to object detection, and both for the in and out-of-distribution settings. LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models. For challenging tasks such as Brain Tumor Classification or Diabetic Retinopathy Grading, LVM-Med improves previous vision-language models trained on 1 billion masks by 6-\(7\%\) while using only a ResNet-50. We release pre-trained models at this link [https://github.com/duyhominhnguyen/LVM-Med](https://github.com/duyhominhnguyen/LVM-Med). ## 1 Introduction Constructing large-scale annotated medical image datasets for training deep networks is challenging due to data acquisition complexities, high annotation costs, and privacy concerns [1; 2]. Vision-language pretraining has emerged as a promising approach for developing foundational models that support various AI tasks. Methods such as CLIP [3], Align [4], and Flava [5] propose a unified model trained on large-scale image-text data, showing exceptional capabilities and performance across various tasks. However, their effectiveness in the medical domain still remains unclear. A recent work SAM [6] trains large vision models on over one billion annotated masks from 11M natural images, enabling interactive segmentation. Nevertheless, SAM's zero-shot learning performance is moderate on other datasets [7; 8], highlighting the need for fine-tuning to achieve satisfactory results [9]. To facilitate the development of foundation models in the medical domain, we make two major contributions. First, we have curated a vast collection of 55 publicly available datasets, resulting in approximately 1.3 million medical images covering various body organs and modalities such as CT, MRI, X-ray, ultrasound, and dermoscopy, to name a few. Second, we propose LVM-Med, a novel class of contrastive learning methods, utilizes pre-trained ResNet-50 and a ViT network SAM[10]. 
We evaluate various instances of LVM-Med relative to popular supervised architectures and vision-language models across \(15\) medical tasks. To our best knowledge, this is the first time such a large-scale medical dataset has been constructed and used to investigate the capabilities of SSL algorithms. LVM-Med incorporates a second-order graph-matching formulation, which subsumes and extends a large class of contrastive SSL methods. Given a batch of images, two random transformations are applied to each image, and the resulting transformed images are then fed to an image encoder. The embedding vectors obtained from images in a batch are used to construct two graphs where vertices represent pairs of transformed images generated from the same original one. Through solving a graph-matching problem [11; 12], we learn feature representation such that their encoding serve as suitable priors for a global solution of the graph-matching objective. This approach is distinct from prior contrastive learning methods that focus on merely optimizing pair-wise distances between transformed images or learning contrastive distances with positive and negative samples. It is worthwhile noting that previous contrastive learning methods are special instances of our general framework (Figure (1), right). LVM-Med has several advantages over existing approaches. First, it integrates advanced pair-wise image similarity taken from prior SSL methods into vertex affinities, resulting in both global and local information that can be efficiently fused. Second, it uncovers underlying structures of feature embeddings by utilizing edge constraints, enhancing robustness in the presence of similar entities in medical datasets. Third, though combinatorial problems are typically non-differentiable, LVM-Med can efficiently calculate gradients through the discrete combinatorial loss function using modern implicit maximum likelihood estimation techniques. Consequently, LVM-Med can scale successfully on large-scale data. In a wide range of \(15\) medical experiments, LVM-Med sets a new state-of-the-art in fully fine-tuning or prompt-based segmentation, linear and fully fine-tuning image classification, and domain generalization, outperforming several vision-language models trained on a hundred million image-text instances. We summarize major contributions in this work, including: 1. We present a collection of large-scale medical datasets, serving as a resource for exploring and evaluating self-supervised algorithms. Figure 1: (left) Overview of the body organs and modalities in our collected dataset; (right) LVM-Med unifies and extends contrastive and instance-based self-supervised learning approaches by specifying graph’s properties. 2. We propose LVM-Med, a novel SSL approach based on second-order graph matching. The proposed method is flexible in terms of integrating advanced pair-wise image distance and being able to capture structural feature embedding through the effective utilization of second-order constraints within a global optimization framework. 3. On both ResNet-50 and ViT architectures, LVM-Med consistently outperforms multiple existing self-supervised learning techniques and foundation models across a wide range of downstream tasks. ## 2 Related Work ### Self-supervised learning in medical image analysis The latest approaches of _global feature_ SSL rely on shared embedding architecture representations that remain invariant to different viewpoints. 
The variation lies in how these methods prevent collapsing solutions. _Clustering methods_[13; 14; 15] constrain a balanced partition of the samples within a set of cluster assignments. _Contrastive methods_[16; 17; 18; 19] uses negative samples to push far away dissimilar samples from each other through contrastive loss, which can be constructed through memory bank [20], momentum encoder [21], or graph neural network [22]. Unlike contrastive learning, _instant-based learning_ depends on maintaining the informational context of the feature representations by either explicit regularization [23; 24] or architectural design [25; 26]. Our work relates to contrastive and instance-based learning, where a simplified graph-matching version of 1-N or 1-1 reverts to these approaches. In contrast to global methods, _local methods_ specifically concentrate on acquiring a collection of local features that depict small portions of an image. A contrastive loss function can be used on those feature patches at different criteria such as image region levels [27], or feature maps [28; 29]. These strategies are also widely applied in the medical context, thereby pre-text tasks based on 3D volume's properties, such as reconstructing the spatial context [30], random permutation prediction [31] and self-restoration [32; 33], are proposed. Our LVM-Med model on this aspect can flexible unifying both global and local information by adding them to the affinities matrixes representing the proximity of two graphs, enhancing expressive feature representations. ### Vision-language foundation models In order to comprehend the multi-modal world using machines, it is necessary to create foundational models that can operate across diverse modalities and domains [34]. CLIP [3] and ALIGN [4] are recognized as groundbreaking explorations in foundation model development. These models demonstrate exceptional proficiency in tasks such as cross-modal alignment and zero-shot classification by learning contrastive pretraining on extensive image-text pairs from the web, despite the presence of noise. To further support multi-modal generation tasks such as visual question answering or video captioning, recent works such as FLAVA [5] and OmniVL [35] are designed to learn cross-modal alignment as well as image-video language models. Conversely, the SAM model [6] utilized a supervised learning strategy with over \(1\) billion masks on \(11\) million user-prompt interactions and achieved impressive zero-shot segmentation performance on unseen images. While many efforts have been proposed for natural image domains, limited research has been conducted on large-scale vision models for medical imaging. This motivated us to develop the LVM-Med model. ### Graph matching in visual computing Graph matching is a fundamental problem in computer vision, which aims to find correspondences between elements of two discrete sets, such as key points in images or vertices of 3D meshes, and used in numerous vision tasks, including 3D reconstruction [36], tracking [37], and shape model learning [38]. In this framework, the vertices of the matched graphs correspond to the elements of the discrete sets to be matched. Graph edges define the cost structure of the problem, namely, second order, where pairs of matched vertices are penalized in addition to the vertex-to-vertex matchings. This allows us to integrate the underlying geometrical relationship between vertices into account but also makes the optimization problem NP-hard. 
Therefore, many approximate approaches have been proposed to seek acceptable suboptimal solutions by relaxing discrete constraints [39; 40]. In other directions, gradient estimation techniques for black-box solvers are employed to make the hybrid discrete-continuous matching framework be differentially end-to-end [41; 42; 43]. Our LVM-Med follows the latter direction and, for the first time, presents the formulation of contrastive learning as a graph-matching problem. ## 3 Methodology ### Dataset construction We provide detailed information about the collected datasets in Appendix. The data was collected from publicly available resources, which include a diverse set of modalities and body organs as illustrated in Figure 1 (left). The data format is a combination of 2D images and 3D volumes as well as X-ray, MRI, CT, Ultrasonounds, etc. To avoid potential test data leaking for downstream tasks, we use the default training partition in each dataset; otherwise, we randomly sample with \(20\%\) total images. In total, we obtain approximately \(1.3\) million images. More statistics on the dataset are presented in the Appendix. ### Contrastive learning as graph matching Figure 2 provides an illustration of our LVM-Med method, which learns the feature representation \(f_{\theta}\) by matching two distorted views derived from the same input image through a graph-matching formulation. Below we describe in detail each component. #### 3.2.1 Graph construction on feature embedding Given a batch of \(N\) images \(\mathbf{B}=\{\mathbf{x}_{1},\,\mathbf{x}_{2},..,\mathbf{x}_{N}\}\) sampled from a dataset, we generate for each image \(\mathbf{x}_{i}\in\mathbf{B}\) two transformed images \(\mathbf{x}_{i}^{s}\) and \(\mathbf{x}_{i}^{t}\) by using two transformations \(s,t\sim T\) sampled from \(T\), a set of pre-defined image transformations. After the transformations, each image is of shape \((C\times H\times W)\), where \(C\) is the number of channels and \((H,W)\) the original spatial dimensions. These distorted images are fed into an encoder \(f_{\theta}:\mathbb{R}^{C\times H\times W}\rightarrow\mathbb{R}^{D\times R \times S}\) to produce two representations \(\mathbf{y}_{i}^{s}=f_{\theta}(\mathbf{x}_{i}^{s})\) and \(\mathbf{y}_{i}^{t}=f_{\theta}(\mathbf{x}_{i}^{t})\) where \(D\) is the number of feature channels and \((R,S)\) are the spatial dimensions of the feature map. On each such representation, we perform an average pooling operation \(\mathrm{Avg}:\mathbb{R}^{D\times R\times S}\rightarrow\mathbb{R}^{D}\) followed by another projection \(h_{\phi}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{F}\) to form two feature embeddings \(\mathbf{z}_{i}^{s}=h_{\phi}(\mathrm{Avg}(\mathbf{y}_{i}^{s}))\), and \(\mathbf{z}_{i}^{t}=h_{\phi}(\mathrm{Avg}(\mathbf{y}_{i}^{t}))\in\mathbb{R}^{F}\) with \(F<D\). Figure 2: LVM-Med Overview. Avg is the average pooling layer, MPN denotes for message passing network, \(\mathfrak{M}\) indicates the combinatorial solver, and \((c^{v},c^{e})\) represents vertex and edge affinity matrices. For each image \(\mathbf{x}_{i}\) in batch size, we generated two distorted versions and fed them into the feature representation \(f_{\theta}\) and another projector \(h_{\theta}\). The obtained embeddings \(\mathbf{z}_{i}^{t},\ \ell\in(s,t)\) are used to build two graphs \(G^{s},G^{t}\). We further design a message passing network \(g_{e}\) that aggregate feature per node by their neighbor information. 
Then we compute vertex and edge affinities \(c^{v},\mathbf{c}^{e}\) and use them to solve the graph matching. The output afterward is compared with pairs of ground truth \(\left(\mathbf{x}_{i}^{s},\mathbf{x}_{i}^{t}\right),i\in(1,..,N)\) representing distorted images generated from the same sample. In the backward pass, we use modern gradient-estimation techniques to approximate \(\frac{\partial L}{\partial\mathbf{c}^{v}}\) and \(\frac{\partial L}{\partial\mathbf{c}^{e}}\). Given a set of embeddings for a batch \(\mathbf{B}\), we construct two graphs \(G^{s}\) and \(G^{t}\) where, for each pair \((\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{i})\) of corresponding distorted images, we add a node representing \(\mathbf{x}^{s}_{i}\) to \(G^{s}\) and a node representing \(\mathbf{x}^{t}_{i}\) to \(G^{t}\). Hence, for each \(\ell\in\{s,t\}\), we construct a graph \(G^{\ell}=(V^{\ell},E^{\ell})\) with \(V^{\ell}=\{\mathbf{x}^{\ell}_{1},...,\mathbf{x}^{\ell}_{N}\}\) the set of vertices and \(E^{\ell}\) the set of edges \(e^{\ell}_{ij}=(\mathbf{x}^{\ell}_{i},\mathbf{x}^{\ell}_{j})\). The node-level feature matrix is given by \(\mathbf{X}^{\ell}=\left[\mathbf{x}^{\ell}_{1};...;\mathbf{x}^{\ell}_{N}\right]\in\mathbb{R }^{N\times F}\) which associates each vertex \(\mathbf{x}^{\ell}_{i}\) with its feature embedding \(\mathbf{z}^{\ell}_{i}\). We create edges for each graph \(G^{\ell}\) through a \(k\)-nearest neighbors algorithm using the feature matrix \(\mathbf{X}^{\ell}\). The adjacency matrix \(\mathbf{A}^{\ell}\in\mathbb{R}^{N\times N}\) is defined as \(A^{\ell}_{ij}=1\) if \(e^{\ell}_{ij}\in E^{\ell}\) and \(A_{ij}=0\) otherwise. With the two graph structures given, we obtain a node-attributed graph \(G^{\ell}=(V^{\ell},\mathbf{A}^{\ell},\mathbf{X}^{\ell})\) on which a graph neural network \(g_{\varepsilon}\) is used to aggregate the nodes' features. In particular, \(g_{\varepsilon}\) computes an embedding \(\hat{\mathbf{Z}}^{\ell}=g_{\varepsilon}(\mathbf{X}^{\ell},\mathbf{A}^{\ell})\) by performing message passing operations. We set \(g_{\varepsilon}\) to be a graph convolutional network [44; 45] consisting of \(l+1\) layers \(g_{\varepsilon}=\{g_{l},g_{l-1},..,g_{0}\}\) where the output of layer \(l\) is computed as \[H^{\ell}_{l}=\sigma\left(\tilde{D}^{-\frac{1}{2}}(\mathbf{A}^{\ell}+\mathbf{I}_{N}) \tilde{D}^{-\frac{1}{2}}H^{\ell}_{l-1}g_{l-1}\right), \tag{1}\] where \(\mathbf{I}_{N}\) is the identity matrix modeling self-connections; \(\tilde{D}\) is a diagonal matrix with \(\tilde{D}_{ii}=\sum_{j}\mathbf{A}^{\ell}_{ij};g^{l-1}\) are the trainable parameters for each layer; \(\sigma(\cdot)\) is an activation function; and \(H^{\ell}_{0}=\mathbf{X}^{\ell}\). We use the outputs of the last layer as embeddings for the nodes, that is, \(\hat{\mathbf{Z}}^{\ell}=H^{\ell}_{l}\in\mathbb{R}^{N\times F}\) given the shared graph network \(g_{\varepsilon}\). We now have two graphs \(G^{s},G^{t}\) with node attribute matrices \(\hat{\mathbf{Z}}^{s},\;\hat{\mathbf{Z}}^{t}\), the outputs of the graph neural networks. Next, a graph-matching problem is constructed and solved where the gold matching is given by the pairs \((\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{i})\;\;\forall i\in\{1,..,N\}\). 
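To make the construction of Sec. 3.2.1 concrete, the following minimal PyTorch sketch builds the \(k\)-NN graph on the projected embeddings and runs the propagation step of Eq. (1). It is an illustrative reimplementation written for this description, not the released LVM-Med code: the toy encoder, the helper `knn_adjacency`, and the two-layer message-passing network are placeholders.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_adjacency(z: torch.Tensor, k: int) -> torch.Tensor:
    """Symmetric 0/1 adjacency matrix from the k nearest neighbours in embedding space."""
    dist = torch.cdist(z, z)                            # (N, N) pairwise L2 distances
    dist.fill_diagonal_(float("inf"))                   # self-loops are added later as I_N
    idx = dist.topk(k, largest=False).indices           # (N, k) neighbour indices
    A = torch.zeros(z.size(0), z.size(0), device=z.device)
    A.scatter_(1, idx, 1.0)
    return ((A + A.t()) > 0).float()

class GCNLayer(nn.Module):
    """One step of Eq. (1): H_l = sigma(D^{-1/2} (A + I_N) D^{-1/2} H_{l-1} W_{l-1})."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.lin = nn.Linear(dim_in, dim_out, bias=False)
    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        A_hat = A + torch.eye(A.size(0), device=A.device)
        d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
        return F.relu(self.lin(A_norm @ H))

# Toy stand-ins for f_theta (encoder) and h_phi (projector); shapes follow the text.
N, D, F_dim, k = 16, 256, 128, 5
f_theta = nn.Conv2d(3, D, kernel_size=3, stride=2, padding=1)   # placeholder encoder
h_phi = nn.Linear(D, F_dim)
g_eps = nn.ModuleList([GCNLayer(F_dim, F_dim), GCNLayer(F_dim, F_dim)])

x_s = torch.randn(N, 3, 224, 224)            # one batch of distorted views (the s branch)
y_s = f_theta(x_s)                            # (N, D, R, S) feature maps
z_s = h_phi(y_s.mean(dim=(2, 3)))             # average pooling + projection -> (N, F)
A_s = knn_adjacency(z_s, k)                   # graph G^s from k nearest neighbours
H = z_s
for layer in g_eps:                           # message passing, Eq. (1)
    H = layer(H, A_s)
z_hat_s = H                                   # node embeddings used for the affinities
```

The same pipeline is applied to the second branch \(t\), after which the two sets of node embeddings feed the affinity construction below.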
#### 3.2.2 Learning affinities with global and local context

To represent potential connections for a pair of nodes \((\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})\) where \(\mathbf{x}^{s}_{i}\in G^{s},\;\mathbf{x}^{t}_{a}\in G^{t}\), we design a vertex affinity matrix \(\mathbf{c}^{v}\in\mathbb{R}^{|V^{s}||V^{t}|}\), where \(c^{v}_{ia}\) is the prior (feature-based) similarity between \(\mathbf{x}^{s}_{i}\) and \(\mathbf{x}^{t}_{a}\). An advantage of our formulation is that advanced pair-wise distances can be smoothly integrated into \(c^{v}_{ia}\), resulting in a more expressive proximity representation. In particular, we leverage both global and local consistency derived from feature embeddings of distorted images. The _global distance_ used in several prior works can be computed as \(c^{\textit{glo}}_{ia}(\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})=\cos(\hat{\mathbf{z}}^{s}_{i},\hat{\mathbf{z}}^{t}_{a})\), where \(\cos(\cdot)\) denotes cosine similarity and \(\hat{\mathbf{z}}^{\ell}_{m}\) is the embedding of \(\mathbf{x}^{\ell}_{m}\) (\(\ell\in\{s,t\},\;m\in\{i,a\}\)) obtained after message passing in Eq. (1).

Compared to global methods that implicitly learn features for the entire image, local methods concentrate on explicitly learning a specific group of features that characterize small regions of the image. As a result, they are more effective for dense prediction tasks such as segmentation [28; 29; 46]. While recent works applied these tactics as a part of pair-wise minimization conditions [47; 27], we instead integrate them as a part of the vertex costs \(c^{v}_{ia}\) and use them to solve the graph-matching problem. Indeed, we adopt both location- and feature-based local affinities computed as:

\[c^{\textit{lo}}_{ia}(\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})=\mathbb{E}_{p\in\mathbf{P}}\cos(\mathbf{q}^{s}_{p},\mathbf{q}^{t}_{\text{m}(p)})+\mathbb{E}_{p\in\mathbf{P}}\cos(\mathbf{q}^{s}_{p},\mathbf{q}^{t}_{\text{m}^{\prime}(p)}) \tag{2}\]

where \(\mathbf{P}=\{(r,s)\,|\;(r,s)\in[1,\ldots,R]\times[1,\ldots,S]\}\) is the set of coordinates in the feature map \(\mathbf{y}^{s}_{i}\in\mathbb{R}^{D\times R\times S}\) of \(\mathbf{x}^{s}_{i}\); \(\mathbf{q}^{\ell}_{p}\) (\(\ell\in\{s,t\}\)) is the feature vector at position \(p\); \(\text{m}(p)\) denotes the spatially closest coordinate to \(p\) among the coordinates of the feature map \(\mathbf{y}^{t}_{a}\), estimated through the transformations applied to the original image \(\mathbf{x}_{i}\); finally, \(\text{m}^{\prime}(p)\) denotes the coordinate of the feature vector in \(\mathbf{y}^{t}_{a}\) closest to \(\mathbf{q}^{s}_{p}\) under the \(l^{2}\) distance. Intuitively, the local cost in Eq. (2) enforces invariance both in spatial location and between embedding spaces at a local scale. Our final affinity cost is computed as:

\[c^{v}_{ia}(\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})=\alpha\left(c^{\textit{glo}}_{ia}(\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})\right)+(1-\alpha)\left(c^{\textit{lo}}_{ia}(\mathbf{x}^{s}_{i},\mathbf{x}^{t}_{a})+c^{\textit{lo}}_{ia}(\mathbf{x}^{t}_{a},\mathbf{x}^{s}_{i})\right) \tag{3}\]

#### 3.2.3 Self-supervision through second-order graph matching

While the standard graph-matching problem for vertex-to-vertex correspondences (LAP) can be used in our setting, it fails to capture the similarity between edges. If there are duplicated entities represented by distinct nodes in the same graph, the LAP will consider them identical and skip their neighboring relations.
For instance, during the image sampling, two consecutive image slides were sampled from a 3D volume, resulting in their appearances have s a small difference. In such cases, it is complicated to correctly identify those augmented images generated from the same one without using information from the relations among connected nodes in the constructed graph. To address this problem, we introduce additional edge costs \(\mathbf{c}^{e}\in\mathbb{R}|^{E^{s}||E^{t}|}\) where \(c^{e}_{ia,jb}\) represents the similarity between an edge \(v^{s}_{ij}=\left(\mathbf{x}^{s}_{i},\mathbf{x}^{s}_{j}\right)\in E^{s}\) and \(v^{t}_{ab}=(\mathbf{x}^{t}_{a},\mathbf{x}^{t}_{b})\in E^{t}\). These edge costs (second-order) are computed as \(c^{e}_{ia,jb}=\cos((\hat{\mathbf{z}}^{s}_{i}-\hat{\mathbf{z}}^{s}_{j}),(\hat{\mathbf{z}}^{t }_{a}-\hat{\mathbf{z}}^{t}_{b}))\). We now establish the second-order graph-matching problem. Denoting \(\mathbf{v}=\{0,1\}^{|V^{s}||V^{t}|}\) be indicator vector of matched vertices, i.e., \(v_{ia}=1\) if the vertex \(\mathbf{x}^{s}_{i}\in V^{s}\) is matched with \(\mathbf{x}^{t}_{a}\in V^{t}\) and \(v_{ia}=0\) otherwise. The node correspondence between two graphs \(G^{s}\) and \(G^{t}\) that minimizes the global condition stated as: \[\begin{split}&\mathtt{GM}(\mathbf{c}^{v},\mathbf{c}^{e})=\operatorname*{ arg\,min}_{\mathbf{v}\in U(\mathbf{1},\mathbf{1})}-\sum_{i,a}c^{v}_{ia}v_{ia}-\sum_{i,j,a,b}c^{e} _{ia,jb}v_{ia}v_{jb}\\ &\text{where}\quad U(\mathbf{1},\mathbf{1})=\{\mathbf{v}\in\{0,1\}^{N\times N }|\mathbf{v}\mathbf{1}_{N}=\mathbf{1},\mathbf{v}^{T}\mathbf{1}_{N}=\mathbf{1}\}\end{split} \tag{4}\] and \(\mathbf{1}_{N}\) be a \(n\)-dimension one-value vector. The constraint \(U(\mathbf{1},\mathbf{1})\) restricts \(\mathbf{v}\) satisfying the one-to-one matching. Essentially, the Eq. (4) solves the vertex-to-vertex correspondence problem using both node and edges affinities, which can be seen as a form of structural matching (Figure (1),right) and generally can be integrated with higher-order graph constraints as triangle connections or circles. In the experiment, we found out that Eq. (4) significantly improved downstream task performance compared to the pure linear matching approach (Table (6)). Since the Eq. 4 in general is an NP-Hard problem [48] due to its combinatorial nature, we thus use efficient heuristic solvers based on Lagrange decomposition techniques [49]. #### 3.2.4 Backpropagating through a graph matching formulation With \(\hat{\mathbf{v}}=\mathtt{GM}(\mathbf{c}^{v},\mathbf{c}^{e})\) a solution obtained from the solver, we use the Hamming distance and an optimal solution \(\mathbf{v}^{*}\) to define the following loss function \[L(\hat{\mathbf{v}},\mathbf{v}^{*})=\hat{\mathbf{v}}.(1-\mathbf{v}^{*})+\mathbf{v}^{*}.(1-\hat{\bm {v}}). \tag{5}\] The proposed approach aims to learn the feature representation function \(f_{\theta}\) such that its output minimizes Eq. (5). However, this is a difficult problem because the partial derivatives of the loss function w.r.t vector costs \(\mathbf{c}^{v},\mathbf{c}^{e}\), i.e., \(\partial L/\partial\mathbf{c}^{v}\) and \(\partial L/\mathbf{c}^{e}\), are zero almost everywhere [41, 50] due to the objective function in Eq. (4) being piece-wise constant, preventing direct gradient-based optimization. To approximate the gradients required for backpropagation, we adopt IMLE [43, 51]. Let \(\mathbf{\theta}=(\mathbf{c}^{v},\mathbf{c}^{e})\) be the input to the combinatorial graph matching problem in Eq. (4). 
The core idea of IMLE is to define a probability distribution \(\rho(\mathbf{v};\mathbf{\theta})\) over solutions of the combinatorial optimization problem, where the probability of a solution is proportional to its negative cost, and to estimate \(\partial L/\partial\mathbf{\theta}\) through the gradients of the expectation \(\nabla_{\mathbf{\theta}}\mathbb{E}_{\hat{\mathbf{v}}\sim\rho(\mathbf{v};\mathbf{\theta})}\left[ L(\hat{\mathbf{v}},\mathbf{v}^{*})\right]\). Since exact sampling from \(\rho(\mathbf{v};\mathbf{\theta})\) is typically intractable, IMLE instead chooses a noise distribution \(\rho(\mathbf{\epsilon})\) and approximates the gradient of the expectation over \(\rho(\mathbf{v};\mathbf{\theta})\) with the gradient of the expectation over \(\rho(\mathbf{\epsilon})\) \[\nabla_{\mathbf{\theta}}\mathbb{E}_{\hat{\mathbf{v}}\sim\rho(\mathbf{v};\mathbf{\theta})} \left[L(\hat{\mathbf{v}},\mathbf{v}^{*})\right]\approx\nabla_{\mathbf{\theta}}\mathbb{E}_ {\mathbf{\epsilon}\sim\rho(\mathbf{\epsilon})}[L(\mathtt{GM}(\mathbf{\theta}+\mathbf{\epsilon} ),\mathbf{v}^{*})].\] The above approximation invokes the reparameterization trick for a complex discrete distribution. A typical choice for \(\rho(\mathbf{\epsilon})\) is the Gumbel distribution, that is, \(\rho(\mathbf{\epsilon})\sim\mathrm{Gumbel}(0,1)\)[52]. Now, by using a finite-difference approximation of the derivative in the direction of the gradient of the loss \(\nabla_{\hat{\mathbf{v}}}L(\tilde{\mathbf{v}},\mathbf{v}^{*})\), we obtain the following estimation rule: \[\nabla_{\mathbf{\theta}}\mathbb{E}_{\hat{\mathbf{v}}\sim p(\mathbf{v};\mathbf{\theta})}\left[L( \hat{\mathbf{v}},\mathbf{v}^{*})\right]\approx\mathbb{E}_{\mathbf{\epsilon}\sim\rho(\mathbf{ \epsilon})}\bigg{[}\frac{1}{\lambda}\bigg{\{}\tilde{\mathbf{v}}-\mathtt{GM}(\mathbf{ \theta}+\mathbf{\epsilon}-\lambda\nabla_{\tilde{\mathbf{v}}}L(\tilde{\mathbf{v}},\mathbf{v}^{*} ))\bigg{\}}\bigg{]}, \tag{6}\] ``` functionForwardPass(\(\mathbf{c^{v}},\mathbf{c^{c}}\)) //Gumbel noise distribution sampling \(\mathbf{\epsilon},\mathbf{\epsilon^{\prime}}\sim\text{Gumbel}(0,1)\) //Graph-matching with perturbed \((\mathbf{c}^{v},\mathbf{c}^{c})\) \(\mathbf{\tilde{v}}=\text{GM}\left(\mathbf{c^{v}}+\mathbf{\epsilon},\mathbf{c^{c}}+\mathbf{\epsilon^ {\prime}}\right)\) //Save values for the backward pass save (\(\mathbf{c^{v}},\mathbf{c^{c}}\)), \((\mathbf{\epsilon},\mathbf{\epsilon^{\prime}})\) and \(\mathbf{\tilde{v}}\) return\(\mathbf{\tilde{v}}\) ``` **Algorithm 1** Forward and Backward Pass for \(\mathbf{c^{v}}\), \(\mathbf{c^{c}}\) where \(\mathbf{\tilde{v}}=\text{GM}(\mathbf{\theta}+\mathbf{\epsilon})\), \(\lambda\) is a step size of finite difference approximation. Using a Monte Carlo approximation of the above expectation, the gradient for \(\mathbf{\theta}\) is computed as a difference of two or more pairs of perturbed graph-matching outputs. We summarize in Algorithm 1 the forward and backward steps for \(\mathbf{c^{v}},\mathbf{c^{c}}\). ## 4 Experiments ### Implementation details Pre-trainingWe utilize Resnet50 [53] and Vision Transformer (ViT-B/16) [54] to train our LVM-Med. For Resnet50, we load pre-trained from ImageNet-1K [55], and SAM Encoder backbone weight [10] for ViT. The raw image is augmented to two different views by using Multi-crop techniques as [29] with small modifications in ratio crops. We trained the LVM-Med with 100 epochs on the collected dataset. The batch size of \(3200\) is used for ResNet50 and we reduced it to \(2800\) for ViT due to memory limitation. 
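Before turning to the optimizer settings, the perturb-and-difference estimator of Sec. 3.2.4 (Algorithm 1 and Eq. (6)) can be sketched in a few lines. In the snippet below the black-box matcher is replaced by an off-the-shelf linear-assignment (Hungarian) solver purely for readability — the method itself uses a second-order solver for Eq. (4) — and the class name and the toy usage are placeholders introduced here; \(\lambda=80\) is the step size reported in the appendix. This is an illustration of the gradient estimator, not the released training code.

```
import torch
from scipy.optimize import linear_sum_assignment

def solve_matching(cost):
    """Black-box matcher: 0/1 assignment maximizing total affinity (stand-in for Eq. (4))."""
    rows, cols = linear_sum_assignment(-cost.detach().cpu().numpy())
    v = torch.zeros_like(cost)
    v[torch.as_tensor(rows), torch.as_tensor(cols)] = 1.0
    return v

class IMLEMatching(torch.autograd.Function):
    """Forward: Gumbel-perturb the costs and call the solver (Algorithm 1).
    Backward: finite-difference estimate of dL/dcost as in Eq. (6)."""
    lam = 80.0

    @staticmethod
    def forward(ctx, cost):
        eps = torch.distributions.Gumbel(0.0, 1.0).sample(cost.shape).to(cost)
        v_hat = solve_matching(cost + eps)
        ctx.save_for_backward(cost, eps, v_hat)
        return v_hat

    @staticmethod
    def backward(ctx, grad_v):                              # grad_v = dL/dv_hat
        cost, eps, v_hat = ctx.saved_tensors
        v_prime = solve_matching(cost + eps - IMLEMatching.lam * grad_v)
        return (v_hat - v_prime) / IMLEMatching.lam         # estimate of dL/dcost

# Usage: 4 image pairs, ground-truth matching v* is the identity permutation.
cost = torch.randn(4, 4, requires_grad=True)
v_star = torch.eye(4)
v_hat = IMLEMatching.apply(cost)
loss = (v_hat * (1 - v_star) + v_star * (1 - v_hat)).sum()  # Hamming loss, Eq. (5)
loss.backward()                                             # cost.grad now holds the estimate
```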
The model is optimized with Adam [56] with an initial learning rate \(2\times 10^{-3}\) and reduced halved four times. We use \(16\) A100-GPUs per with \(80\)GB and complete the training process for LVM-Med with ResNet-50 in five days and LVM-Med with ViT encoder in seven days. Other competitor SSL methods as VicRegl, Twin-Barlon, Dino, etc, are initialized from ResNet-50 pre-trained ImageNet-1K and trained with \(100\) epochs with default settings as LVM-Med. Downstream TasksTable 1 lists the datasets and downstream tasks used in our experiments. We cover segmentation, object detection, and image classification problems. We compare 2D-SSL methods trained in our dataset with foundation models like Clip [3], Align [4], Flava [5], and SAM [6] with pre-trained ViT (Bert for Align) taken from each method, respectively. During the downstream task, trained SSL weights are then extracted and attached in U-Net for ResNet50 backbone, TransNet [67] for ViT, and then fine-tuned with training splits of each dataset. Depending on the downstream task's properties, we apply different image resolutions and other parameters like the number of epochs and learning rate for different data domains. Details for these configurations are presented in Appendix. ### 2D- and 3D-based segmentation We evaluate LVM-Med on _eight_ medical segmentation tasks, including five 2D-based and three 3D-based segmentation. In 2D settings, we also compare with 2D supervised architectures, such as U-Net, U-Net++, Attention U-Net, etc. These networks are initialized with ResNet-50 pre-trained ImageNet. Additionally, we investigate the prompt-based segmentation settings inspired by the current success of SAM's zero-shot learning. We utilized the ground truths and added random noise to simulate box-based user prompts as [9]. We next compare three variations of SAM: (i) freezing \begin{table} \begin{tabular}{c|c|c|c} \hline \hline **Evaluation** & **Downstream Task Data** & **Modality** & **Numst** & **Task** \\ \hline Fine-Tuning & RotTS2018 [57] & 31D MRI & 285 & Tumor Segmentation \\ Fine-Tuning & MNRAS-CT [58] & 3D CT & 20 & Heart Features Segmentation \\ Fine-Tuning & MNRAS-MNAS [58] & 3D MRI & 30 & Heart Features Segmentation \\ Fine-Tuning & ISIC-2018 [59] & 2D D D-D microscopy & 2596 & Skin Lelong Segmentation \\ Fine-Tuning & ISRT [60] & 2D X-ray & 247 & Multi-Graph Segmentation \\ Fine-Tuning & KuSfz [61] & 2D Endoscopy & 1000 & Detection \\ Fine-Tuning & Drive [62] & Fundus & 40 & Vessel Segmentation \\ Fine-Tuning & BUID [63] & 2D Ultrasound & 647 & Breast Cancer Segmentation \\ Linear Evaluation \& FGAKB [64] & Fundus & 1841 & DR Grading \\ Fine-Tuning & Linear Evaluation & 2D MRI & 2364 & Brain Tumor Classification \\ Fine-Tuning & Multi-site Prostate & 3D MRI & 116 & Prostate Segmentation \\ Fine-Tuning & ViRD [66] & 2D X-ray & 18000 & Lung Diseases Detection \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of datasets and downstream tasks Figure 3: FGADR performance with top architectures. image and prompt encoders, only fine-tuning mask decoder; (ii) without any training and inference using box prompts; (iii) similar to (i) but replacing the original image encoder by LVM-Med's ViT architecture taken from SAM trained in our dataset. In 3D settings, we segment 2D slices and merge results for a 3D volume. We also benchmarked with 3D self-supervised methods from [77]. Tables (2) and (3) show that our two versions with ResNet-50 and Sam's ViT hold the best records in each category. 
For instance, we outperform 2D SSL methods trained on the same dataset, surpassing foundation models such as SAM, Flava, and Clip. In the prompt-based settings, LVM-Med also delivers better performance compared with SAM. Second, LVM-Med achieves the best overall results on _seven of eight segmentation tasks_, mostly held by LVM-Med with ResNet-50. The improvement gaps vary on each dataset, for e.g., from \(3-5\%\) on Kvasir and BUID compared with 2D supervised methods. ### Linear and finetuning image classification We analyze LVM-Med on image classification tasks using linear probing (frozen encoders) and fully fine-tuning settings, two popular evaluations used in self-supervised learning. The experiments are conducted on the FGADR Grading and Brain tumor classification tasks. Table (5) presents the average accuracy metric on three training times. LVM-Med (ResNet-50) consistently outperforms other approaches on two datasets. For example, it is better than Clip by \(10.46\%\) and \(8.46\%\) on FGADR and Brain Tumor datasets with linear evaluation. In the foundation model setting, LVM-Med (ViT) also improves SAM's results by \(7.32\%\) and \(4.69\%\) on FGADR with linear and fully-finetuning. Another point we observe is that the overall 2D-SSL methods based on ResNet-50 and trained on the collected medical dataset achieve higher accuracy than foundation models using ViT. We also compare LVM-Med with the top methods on the FGADR dataset, including AFN-Net [78], JCS [79], CoLL [80], and DRG-Net [81]. We choose the DRG-Net as the backbone and replace the employed encoder with our weights (R50). Figure (3) shows that LVM-Med hold the first rank overall. \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline \hline \multicolumn{1}{l|}{**Method**} & \multicolumn{1}{c|}{**Basics-Breatching**} & \multicolumn{1}{c|}{**Basics**} & \multicolumn{1}{c}{**Basics**} & \multicolumn{1}{c}{**Basics**} & \multicolumn{1}{c}{**Basics**} & \multicolumn{1}{c}{**Avurage**} \\ \hline 3D-Transformer [71] & 66.54 \(\pm\) 0.04 & 67.30 \(\pm\) 2.29 & 67.64 \(\pm\) 2.21 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ ID (72) & 67.38 \(\pm\) 0.75 & 76.83 \(\pm\) 2.32 & 67.17 \(\pm\) 1.27 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ MsN\({}_{3}\)[73] & 60.68 \(\pm\) 1.25 & 76.41 \(\pm\) 2.40 & 64.00 \(\pm\) 1.66 & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \\ Med ### Object detection & In-out-distribution evaluation Figure 4 indicates our performance on the object detection task using VinDr and Kvasir datasets. We use Faster R-CNN and load ResNet-50 from 2D SSL pretrained weights. Results are presented by Average Precision with IoU=0.5 over three training times. Compared to pre-trained Imagenet, LVM-Med still outperforms by \(1\)-\(2\%\) though overall, our improvements are smaller than image classification and segmentation tasks. We also validate LVM-Med performance on the in-out-distribution setting in Table (4) using the segmentation task on the Multi-Prostate dataset. We train LVM-Med and other competitors in BMC data and use the trained models to predict the remaining datasets. Both two versions of LVM-Med with ResNet-50 and ViT, on average, surpass all baselines, which validates the potential abilities of LVM-Med for the in-out-distribution problem. 
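For the detection experiments above, the SSL-pretrained ResNet-50 simply replaces the Faster R-CNN backbone before fine-tuning. A minimal torchvision-style sketch is given below; the checkpoint filename, the `strict=False` key matching, and the class count with a background index are illustrative assumptions rather than the exact released pipeline, and the constructor signature shown corresponds to recent torchvision versions.

```
import torch
import torchvision

# Faster R-CNN with a ResNet-50 FPN backbone; no COCO weights, 14 VinDr findings + background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=15)

# Load self-supervised ResNet-50 weights into the backbone trunk (placeholder checkpoint name).
state = torch.load("lvm_med_resnet50.pt", map_location="cpu")
missing, unexpected = model.backbone.body.load_state_dict(state, strict=False)
print(f"backbone loaded: {len(missing)} missing / {len(unexpected)} unexpected keys")

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate reported for VinDr
```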
### Ablation study We do the following settings to evaluate the performance of components used in LVM-Med: (i) LVM-Med without using second-order graph matching conditions, i.e., only solving vertex-to-vertex correspondence problem; (ii) LVM-Med without using message passing network \(g_{e}\) in Eq. (1) to aggregate information from local connections; (iii) LVM-Med w/o using approximate gradients from Gumbel noise in Eq. (6). For this, we add a constant value to \(\mathbf{c}^{v},\mathbf{c}^{e}\) as prior works [50; 42], and finally (iv) LMV-Med without using local similarity \(c^{lo}_{ia}\) in Eq. (2). Other ablation studies are presented in Appendix. Table (6) indicates that all factors contribute to the final performance, wherein the second-order and Gumbel noise are the two most two important parts. ## 5 Conclusion We have demonstrated that a self-supervised learning technique based on second-order graph-matching, trained on a large-scale medical imaging dataset, significantly enhances performance in various downstream medical imaging tasks compared to other supervised learning methods and foundation models trained on hundreds of millions of image-text instances. Our findings are supported by the benefits shown in two different architectures: ResNet-50 and ViT backbones, which can be used for either end-to-end or prompt-based segmentation. **Limitations and Future Work.** We propose to investigate the following points to improve LVM-Med performance. Firstly, extending LVM-Med to a hybrid 2D-3D architecture to allow direct application for 3D medical tasks instead of 2D slices. Secondly, although LVM-Med with ViT backbone utilizes more total parameters, in some cases, it is less effective than LVM-Med ResNet-50. This raises the question of whether a novel approach could improve the performance of ViT architectures. Finally, integrating multi-modal information such as knowledge graphs, bio-text, or electronic health records for LVM-Med is also important to make the model more useful in real-world applications. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Cls.(Acc) & Seg. (Dec) \\ \hline LVM-Med (Full) & **67.47** & **83.05** \\ \hline LVM-Med w/o second-order & 62.17 & **80.21** \\ \hline LVM-Med w/o message passing & 65.08 & 81.19 \\ LVM-Med w/o Gumbel noise & 64.32 & 81.37 \\ LVM-Med w/o local similarity & 65.67 & 81.54 \\ \hline \hline \end{tabular} \end{table} Table 6: LVM-Med ablation study. Results are reported on an average of five 2D segmentation and two linear classification tasks. The two most important factors are highlighted. Figure 4: LVM-Med on object detection. 
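The linear-evaluation and fine-tuning protocols summarized in Table 5 below attach a single fully connected head to the pretrained encoder and freeze the encoder in the linear case (Sec. 4.3 and Appendix B). A minimal sketch, assuming a ResNet-50 backbone, a placeholder class count, and the recent torchvision constructor signature:

```
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_classifier(ssl_state: dict, num_classes: int, linear_probe: bool) -> nn.Module:
    """Single FC head on an SSL-pretrained ResNet-50; encoder frozen for linear evaluation."""
    encoder = resnet50(weights=None)
    encoder.load_state_dict(ssl_state, strict=False)   # SSL checkpoint; fc.* keys may be absent
    encoder.fc = nn.Identity()                         # expose the 2048-d embedding
    if linear_probe:
        for p in encoder.parameters():
            p.requires_grad = False
    return nn.Sequential(encoder, nn.Linear(2048, num_classes))

model = build_classifier(ssl_state={}, num_classes=3, linear_probe=True)   # toy empty state dict
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=5e-4)
criterion = nn.CrossEntropyLoss()
logits = model(torch.randn(2, 3, 224, 224))            # sanity-check forward pass
loss = criterion(logits, torch.tensor([0, 1]))
```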
\begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Method** & **Linear Evaluation (Frozen)** & \multicolumn{2}{c}{**Fine-tuning**} \\ \hline \hline \multirow{2}{*}{\begin{tabular}{l} Twin-Batch [23] \\ Time [20] \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} 66.86\(\pm\) 0.41 \\ 66.89 \(\pm\) 1.91 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} 62.27 \(\pm\) 0.32 \\ 62.27 \(\pm\) 0.32 \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{l} 67.35\(\pm\) 1.36 \\ 67.35 \(\pm\) 1.36 \\ \end{tabular} } & \multirow{2}{*}{ \begin{tabular}{l} 79.19 \(\pm\) 1.38 \\ 71.91 \(\pm\) 1.36 \\ \end{tabular} } \\ [23] & 65.39 \(\pm\) 1.70 & 62.27 \(\pm\) 1.87 & 67.53 \(\pm\) 1.28 & 77.35 \(\pm\) 1.36 \\ [21] & 65.39 \(\pm\) 1.04 & 62.35 \(\pm\) 1.92 & 67.55 \(\pm\) 1.79 & 74.53 \(\pm\) 0.43 \\ [2] & 65.34 \(\pm\) 1.59 & 64.47 \(\pm\) 0.35 & 67.94 \(\pm\) 1.78 & 71.30 \(\pm\) 0.55 \\ [2] & 64.71 \(\pm\) 1.60 & 69.94 \(\pm\) 1.36 & 65.09 \(\pm\) 1.46 & 71.88 \(\pm\) 2.03 \\ **LVM-Med (850)** & **68.38 \(\pm\) 0.48** & **65.33 \(\pm\) 0.31** & **68.38 \(\pm\) 0.48** & **62.38 \(\pm\) 2.28** \\ \hline CID [13] & 57.87 \(\pm\) 0.50 & 57.87 \(\pm\) 0.71 & 57.48 \(\pm\) 0.36 & 58.46 \(\pm\) 2.27 \\ Flaw [3] & 31.87 \(\pm\) 0.69 & 35.19 \(\pm\) 0.43 & 57.18 \(\pm\) 0.96 & 40.10 \(\pm\) 5.97 \\ Alpha [4] & 36.95 \(\pm\) 1.04 & 30.71 \(\pm\) 3.75 & 57.88 \(\pm\) 0.96 & 63.96 \(\pm\) 0.04 \\ **SAM (s)** & 59.13 \(\pm\) 0.41 & 31.81 \(\pm\) 4.26 & 89.75 \(\pm\) 1.32 & 60.66 \(\pm\) 1.36 \\ **LVM-Med (SAMs)** & **62.46 \(\pm\) 0.86** & **59.31 \(\pm\) 0.48** & **63.44 \(\pm\) 0.73** & **67.34 \(\pm\) 2.08** \\ \hline \hline \end{tabular} \end{table} Table 5: Performance comparison on linear evaluation and fine-tuning classification. The results are reported with average Accuracy on three training times. ## Supplementary Material We present below LVM-Med pseudo-code (Section A), implementations used in downstream tasks (Section B), additional ablation studies of LVM-Med (Section C), further prompt-based segmentation results on 3D datasets, image classification benchmark (Section D), predicted masks using the user-based prompt (Section E), and finally the dataset overview (Section F). ## Appendix A LVM-Med Pseudo-code First, we provide a pseudo-code for training LVM-Med in Pytorch style: ``` #f\({}_{\theta}\):encodernetwork,h\({}_{\phi}\):projectornetwork,g\({}_{\epsilon}\):messagepassingnetwork, #k_nodes:numberofnearestneighbors,Avg:averagepooling, #pos:positionofimageaftertransform,cos:cosinesimilarity, #\(\alpha\):coefficienttradesoffbetweenglobalandlocalcosts,L\({}_{2}\):L2-distance, #\(\gamma\):maximumpairsarekept,select_top:selecttokeepthe\(\gamma\)bestmatches. 
``` forXinloader:#loadbatchX=[x\({}_{1},x_{2},...,x_{\texttt{N}}\)]withNsamples #applytwotransformationsandtX\({}^{\texttt{s}}\),Pos\({}^{\texttt{s}}=\texttt{s}(\texttt{X})\)#\({}^{\texttt{k}}=[x_{1}^{\texttt{k}},x_{2}^{\texttt{k}},...,x_{\texttt{N}}^{ \texttt{k}}]\),Pos\({}^{\texttt{k}}=[\texttt{pos}_{1}^{\texttt{k}},\texttt{pos}_{2}^{\texttt{k}},..., \texttt{pos}_{\texttt{N}}^{\texttt{k}}]\),k\(\in\{\texttt{s},\texttt{t}\}\) \(\texttt{X}^{\texttt{t}}\),Pos\({}^{\texttt{t}}=\texttt{t}(\texttt{X})\) \(\texttt{\#computefeaturerepresentations}\) \(\texttt{Y}^{\texttt{s}}=\texttt{f}_{\theta}(\texttt{X}^{\texttt{s}})\);\(\texttt{Y}^{\texttt{t}}=\texttt{f}_{\theta}(\texttt{X}^{\texttt{t}})\)#featuredimensions:NxDxRxS \(\texttt{\#applyingprojection}\) \(\texttt{Z}^{\texttt{s}}=\texttt{h}_{\phi}(\texttt{Avg}(\texttt{Y}^{\texttt{s}}))\);\(\texttt{Z}^{\texttt{t}}=\texttt{h}_{\phi}(\texttt{Avg}(\texttt{Y}^{\texttt{t}}))\)#dimensions:NxF \(\texttt{\#buildgraphstructuresandmessagepassing}\) \(\texttt{G}^{\texttt{s}}=\texttt{k-nearest-neighbor}(\texttt{Z}^{\texttt{s}}\),k_connects) \(\texttt{G}^{\texttt{t}}=\texttt{k-nearest-neighbor}(\texttt{Z}^{\texttt{t}}\),k_connects) \(\texttt{Z}^{\texttt{s}}=\texttt{g}_{\epsilon}(\texttt{G}^{\texttt{s}}, \texttt{Z}^{\texttt{s}})\);\(\texttt{Z}^{\texttt{t}}=\texttt{g}_{\epsilon}(\texttt{G}^{\texttt{t}}, \texttt{Z}^{\texttt{t}})\) \(\texttt{\#computevertexandedgeaffinitymatrices}\) \(\texttt{c}_{\texttt{ia}}^{\texttt{v}}=\alpha*cos(\texttt{z}_{1}^{\texttt{s}}, \texttt{z}_{1}^{\texttt{s}})+(1-\alpha)*\text{local\_cost}(\texttt{y}_{1}^{ \texttt{s}},\texttt{y}_{\texttt{a}}^{\texttt{t}},\texttt{pos}_{1}^{\texttt{s }},\texttt{pos}_{\texttt{a}}^{\texttt{t}})\)#affinityx\({}_{1}^{\texttt{s}}\)&x\({}_{\texttt{a}}^{\texttt{t}}\)\(\texttt{c}_{\texttt{ia,jb}}^{\texttt{a}}=\cos((\texttt{z}_{1}^{\texttt{s}}-\texttt{z}_{3}^{ \texttt{s}}),(\texttt{z}_{1}^{\texttt{s}}-\texttt{z}_{3}^{\texttt{t}}))\)#affinitybetweenedgesv\({}_{\texttt{ij}}^{\texttt{s}},\texttt{y}_{\texttt{ab}}^{\texttt{t}}\) \(\texttt{c}^{\texttt{v}}=\{c_{\texttt{ij}}^{\texttt{v}}\}\in\texttt{R}^{\texttt{N} \texttt{x}\texttt{N}}\);\(\texttt{c}^{\texttt{e}}=\{c_{\texttt{ia,jb}}^{\texttt{e}}\}\in \texttt{R}^{\texttt{|E^{\prime}||E^{\prime}|}}\)#E^{\texttt{k}}\(\texttt{beasetofedgesinG}^{\texttt{k}}\),k\(\in\{\texttt{s},\texttt{t}\}\) \(\texttt{\#perturbedcostswithGumbelnoise}\) \(\epsilon,\epsilon^{\prime}\sim\texttt{Gumbel}(0,1)\) \(\texttt{c}^{\texttt{v}}=\texttt{c}^{\texttt{v}}+\epsilon\);\(\texttt{c}^{\texttt{e}}=\texttt{c}^{\texttt{e}}+\epsilon^{\prime}\) \(\texttt{\#solvinggraphmatchingandcomputeloss}\) \(\texttt{\vartheta}=\texttt{GM}(\texttt{c}^{\texttt{v}},\texttt{c}^{\texttt{e}})\) \(\texttt{L}(\texttt{\vartheta},\texttt{v}^{\texttt{s}})=\texttt{\vartheta}.(1- \texttt{v}^{\texttt{v}})+\texttt{v}^{\texttt{s}}.(1-\texttt{\vartheta})\)#computehammingloss \(\texttt{\#updatenetwork}\) \(\texttt{L.backward()\#approximate(\partial\texttt{L}/\partial\texttt{c}^{\texttt{v}}, \partial\texttt{L}/\partial\texttt{c}^{\texttt{e}})}\)byAlgorithm1. 
Update(g.params),Update(h\({}_{\phi}\).params),Update(f\({}_{\theta}\).params) \(\texttt{\#definelocal\_cost}\) \(\texttt{deflocal\_cost}(\texttt{y}_{1}^{\texttt{s}},\texttt{y}_{\texttt{a}}^{ \texttt{t}},\texttt{pos}_{\texttt{i}}^{\texttt{s}},\texttt{pos}_{\texttt{a}}^{ \texttt{t}})\): \(\texttt{\#location-basedlocalcost}\) \(\texttt{y}_{1,\texttt{nm}}^{\texttt{s}}=\texttt{torch\_zeros\_like}(\texttt{y}_{1}^{ \texttt{s}})\) \(\texttt{forr}\),sinR,S: \(\texttt{r}^{\texttt{s}}\),s'=argmin((L_{2}(\texttt{pos}_{\texttt{i}}^{\texttt{s}}[ \texttt{r},\texttt{s}],\texttt{pos}_{\texttt{a}}^{\texttt{t}}[\texttt{r}^{ \prime},\texttt{s}^{\texttt{r}}]))\) \(\texttt{y}_{1,\texttt{nm}}^{\texttt{s}}[\texttt{r},\texttt{s}]=\texttt{y}_{ \texttt{a}}^{\texttt{t}}[\texttt{r}^{\prime},\texttt{s}^{\texttt{r}}]\) \(\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_\mathtt{i}1},\mathtt{y}^{\mathtt{s}}_{ \mathtt{i}\_\mathtt{m}\_\mathtt{fil}}=\mathtt{select\_top}\left(\mathtt{y}^{ \mathtt{s}}_{\mathtt{i}},\mathtt{y}^{\mathtt{s}}_{\mathtt{i},\mathtt{m}}, \gamma\right)\) \(\mathtt{location\_cost}=\mathtt{cos}(\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_ \mathtt{i}\_\mathtt{i}1},\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_\mathtt{m}\_ \mathtt{fil}})\) \(\mathtt{\#\ featured-based\ local\ cost}\) \(\mathtt{y}^{\mathtt{s}}_{\mathtt{i},\mathtt{m}}=\mathtt{torch\_zeros\_like}( \mathtt{y}^{\mathtt{s}}_{\mathtt{i}})\) for r, s in R, S: \(\mathtt{r}^{\mathtt{\prime}}\), s' = \(\mathtt{argmin}((\mathtt{L}_{2}(\mathtt{y}^{\mathtt{s}}_{\mathtt{i}}[\mathtt{r },\mathtt{s}],\ \mathtt{y}^{\mathtt{t}}_{\mathtt{s}}[\mathtt{r}^{\mathtt{\prime}}, \mathtt{s}^{\prime}]))\) \(\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_\mathtt{i}1},\mathtt{y}^{\mathtt{s}}_{ \mathtt{i}\_\mathtt{m}\_\mathtt{fil}}=\mathtt{select\_top}\left(\mathtt{y}^{ \mathtt{s}}_{\mathtt{i}},\mathtt{y}^{\mathtt{s}}_{\mathtt{i},\mathtt{m}}, \gamma\right)\) \(\mathtt{feature\_cost}=\mathtt{cos}(\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_ \mathtt{i}1},\mathtt{y}^{\mathtt{s}}_{\mathtt{i}\_\mathtt{m}\_ \mathtt{fil}})\) \(\mathtt{return\ 0.5*}(\mathtt{location\_cost}+\mathtt{feature\_cost})\) We trained LVM-Med with graph size of \(16\) nodes, each node connected to the top 5 nearest neighbors after using kNN, \(\lambda\) value in Algorithm 1 is \(80\), and \(\alpha=0.8\) for associating global- and local-based similarities when computing \(c^{v}_{ij}\). The size of projector \(h_{\phi}\) is \(2048\times 128\) for ResNet-50, and \(768\times 128\) for ViT. We configure the message passing network \(g_{\theta}\) with two convolutional layers of size \(128\). For the user-based prompt version, because the SAM model [9] requires an input of shape \(256\times 14\times 14\) for the mask decoder part, we add two additional convolutional layers with a kernel size of \(1\) and \(3\) at the end of ViT backbone to convert from shape \(768\times 14\times 14\) to the target shape. ## Appendix B Downstream task setups ### Downstream tasks Segmentation tasksOn 2D-based segmentation tasks, we employ U-Net architecture [82] and load ResNet-50 [53] trained by self-supervised learning algorithms as network backbones. With foundation models, we use TransUet [67] and take pre-trained ViT models as the backbones. For the prompt-based segmentation, we follow the architecture of SAM [6] consisting of encoder, prompt, and mask decoder layers. We also fine-tune SAM where encoder and prompt networks are frozen, only learning decoder layers [9]. 
Our LVM-Med for prompt-based setting is similar to [9] except that we substitute SAM's encoders with our weights. We utilize Adam optimizer for all experiments and train architectures with Dice and Cross-Entropy loss [83]. We also normalize the norm-2 of gradient values to stabilize the training step to maximize 1. Table 7 summarizes each dataset's learning rate, number of epochs, and image resolution. On 3D-based segmentations, we reformulate these tasks as 2D segmentation problems and make predictions on 2D slices taken from 3D volumes. Furthermore, we apply balance sampling to select equally 2D slices covering target regions and other 2D slices, not including the ground truth. Table 8 presents configurations used for 3D datasets; other settings are identical to 2D cases. \begin{table} \begin{tabular}{l|l|l|l|l|l} \hline \hline & **SIC-2018 (Skin Lees)** & **JSRT (Lang X-ray)** & **Kussir (Pobry)** & **Drive (Vessel)** & **BUID (Breast Cancer)** \\ \hline \multirow{3}{*}{**ResNet-50.**} & \(\mathtt{lr}=10^{-4}\) epochs 35 & \(\mathtt{lr}=10^{-3}\) epochs 50 & \(\mathtt{lr}=10^{-3}\) epochs 35 & \(\mathtt{lr}=10^{-3}\) epochs 50 & \(\mathtt{lr}=10^{-4}\) epochs 50 \\ & shape \(512\times 512\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(226\times 226\) \\ & batch size 16 & batch size 32 & batch size 64 & batch size 16 & batch size 16 \\ \hline \multirow{3}{*}{**Foundation Model**} & \(\mathtt{lr}=10^{-4}\) epochs 100 & \(\mathtt{lr}=10^{-3}\) epochs 200 & \(\mathtt{lr}=10^{-3}\) epochs 200 & \(\mathtt{lr}=10^{-3}\) epochs 200 & \(\mathtt{lr}=10^{-4}\) epochs 200 \\ & shape \(512\times 512\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(226\times 226\) \\ & batch size 16 & batch size 32 & batch size 16 & batch size 16 & batch size 16 \\ \hline \multirow{3}{*}{**Prompt-based Seg.**} & \(\mathtt{lr}=10^{-4}\) epochs 50 & \(\mathtt{lr}=3\times 10^{-4}\) epochs 50 & \(\mathtt{lr}=10^{-3}\) epochs 20 & \(\mathtt{lr}=3\times 10^{-4}\) epochs 100 & \(\mathtt{lr}=10^{-4}\) epochs 20 \\ & shape \(1024\times 1024\) & shape \(1024\times 1024\) & shape \(1024\times 1024\) & shape \(1024\times 1024\) & shape \(1024\times 1024\) \\ \cline{1-1} & batch size 16 & batch size 16 & batch size 16 & batch size 16 & batch size 16 \\ \hline \hline \end{tabular} \end{table} Table 7: Configurations for training 2D segmentation tasks Image classification tasksWe take the feature embedding outputs of each architecture and build one fully connected layer to produce desired classes for image classification tasks. We freeze the encoder layers for the linear evaluation and only train the fully connected layer. For the fully-finetuning, the whole network is trained. The Adam optimizer [56] with cross-entropy loss function and learning rates \(\{5\times 10^{-4},10^{-3}\}\) are used for Brain Tumor and FGADR, respectively. To benchmark LVM-Med with other state-of-the-art methods on FGADR (Figure 3 in paper), we follow the settings of DRG-Net [81] and change their encoder layers by our networks. Object detectionWe use Faster-RCNN [84] for object detection tasks. The ResNet-50 of Faster-RCNN is replaced by pre-trained weights. In the Vin-Dr dataset, there is a total of \(14\) objects for, e.g., Aortic enlargement, Atelectasis, Calcification, etc. We use image resolutions of \(512\times 512\), Adam solver, and learning rate \(10^{-4}\) in \(40\) epochs. 
In the Kvasir dataset for polyp detection, we also resize images to a fixed size of \(512\times 512\), employ the Adam optimizer with learning rate \(2.5\times 10^{-4}\) and batch size \(8\). ## Appendix C LVM-Med ablation studies ### Graph sizes and \(\lambda\) in backpropagation We provide in Figure 5 and Figure 6 LVM-Med performance when changing the number of nodes in graph construction steps \(G^{s},G^{t}\) and \(\lambda=80\) used in Algorithm 1 in the backpropagation step. The results are reported on the average Dice score of five 2D segmentation tasks and the average accuracy of two linear classifications on FGADR and Brain Tumor Classification. Figure 5 indicates that \(16\) is the best value for both classification and segmentation. Increasing the graph's nodes tends to decrease classification performance. Figure 6 compared different values for \(\lambda\in\{70,80,90,100\}\). We observe that \(\lambda=\{80,90\}\) achieve good results for linear classification tasks though \(\lambda=\{90,100\}\) decreases segmentation performance. ### Performance on large- and small-scale We investigate LVM-Med performance when reducing the number of datasets in the pre-training step. Especially, we trained LVM-Med on a _small-scale_ with four datasets: LUNA2016 [85], LiTS2017 [86], BraTS2018 [57], and MSD (Heart) [87]. We compare this version with our default settings trained on \(55\) datasets (Section F). Two models are evaluated on dice scores of five 2D segmentation tasks, the accuracy metric of two linear image classifications, and mAP50 of two object detection tasks on VinDr and Kvasir detection. Table 9 shows that LMV-Med full leads to better performance overall, especially with the classification settings; the improvement gap is around \(3.6\%\). In summary, we conclude that LVM-Med is beneficial when training in large-scale medical settings. \begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline & **BraTS** & **MMWHS-CT** & **MMWHS-MRI** & **BMC** \\ \hline \multirow{3}{*}{**ResNet50**} & lr = \(15\times 10^{-4}\), epochs \(20\) & lr = \(10^{-3}\), epochs \(20\) & lr = \(15\times 10^{-4}\), epochs \(30\) & lr = \(10^{-3}\), epochs \(30\) \\ & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) \\ & batch size \(128\) & batch size \(64\) & batch size \(64\) & batch size \(64\) \\ \hline \multirow{3}{*}{**Foundation Model**} & lr = \(10^{-4}\), epochs \(100\) & lr = \(10^{-4}\), epochs \(100\) & lr = \(10^{-4}\), epochs \(100\) \\ & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) & shape \(224\times 224\) \\ & batch size \(16\) & batch size \(16\) & batch size \(16\) & batch size \(16\) \\ \hline \hline \end{tabular} \end{table} Table 8: Configurations for 3D-based-segmentation tasks ### Performance on weighting global and local similarities We test with different \(\alpha=\{0.7,0.8,0.9\}\) which used to fuse global- and local-based similarities \(c^{v}_{ij}\). Table 9 demonstrates that \(\alpha=0.8\) is generally the best value in average across segmentation, classification, and object detection tasks. ### Computational complexity We present a parameter comparison of LVM-Med with other foundation models in Table 10. Our LVM-Med model, based on ResNet-50, has significantly fewer parameters, approximately 3-4 times smaller than models such as Flava or SAM, while still maintaining competitive performance. 
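Returning to the downstream configurations of Appendix B (Tables 7 and 8), the 2D segmentation fine-tuning recipe — U-Net with the pretrained ResNet-50 encoder, Dice plus cross-entropy loss, Adam, and gradient norms clipped to 1 — can be sketched as follows. The `segmentation_models_pytorch` package and the checkpoint filename are conveniences chosen here for illustration; the text does not state which U-Net implementation was used.

```
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet-50 encoder; SSL weights are loaded into the encoder afterwards.
model = smp.Unet(encoder_name="resnet50", encoder_weights=None, in_channels=3, classes=1)
ssl_state = torch.load("lvm_med_resnet50.pt", map_location="cpu")      # placeholder checkpoint
model.encoder.load_state_dict(ssl_state, strict=False)

dice = smp.losses.DiceLoss(mode="binary")
bce = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)              # one of the values in Table 7

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """One update: Dice + cross-entropy on logits, gradient norm clipped to 1 (Appendix B)."""
    optimizer.zero_grad()
    logits = model(images)
    loss = dice(logits, masks) + bce(logits, masks)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

loss = train_step(torch.randn(2, 3, 224, 224),
                  torch.randint(0, 2, (2, 1, 224, 224)).float())
```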
When utilizing the ViT encoder pre-trained by the SAM method, LVM-Med's parameters are comparable to the Flava model and slightly higher than Clip and Align by \(1.03\) and \(1.43\) times, respectively. However, it is important to note that both LVM-Med and SAM outperform these models by a significant margin in various settings. ## Appendix D Prompt-based segmentation on 3D datasets and classification tasks We provide additional results for LVM-Med on 3D-based prompt segmentation and image classification tasks with several fully connected layers. ### Promt-based Segmentation on 3D datasets We perform experiments on three 3D datasets in Table 11, including BraTS, MMWHS-MRI, and MMWHS-CT. The setup for box prompts follows 2D segmentation cases. We discover that the LMV-Med in 3D cases consistently improves the performance of fine-tuned SAM [9] as in 2D settings and attains a large margin compared with SAM without training [6]. This evidence thus confirms that LVM-Med is also effective under prompt-based scenarios. ### Image classification We aim to inspect whether foundation models improve their performance given more fully connected layers for image classification tasks with both frozen encoders or fully fine-tuning. For each method in this category and our LVM-Med (ResNet-50 and ViT), we configure two fully connected layers with sizes \(512-256\) and \(512-128\) for the Brain and FGADR respectively that map from the output dimension of each network to a number of desired classes. Table 12 presents obtained results where new settings are highlighted in color. We notice the following points. (i) Firstly, using more fully connected layers tends to improve the performance of foundation models, especially on linear evaluation. For e.g., the Clip increases from \(4.79\%-9.98\%\) on FGADR and Brain Tumor classification tasks, respectively. Similarly, our LVM-Med with SAM's ViT also achieves better results by approximately \(1.37\%\) and \(4.82\%\) on those tasks. (ii) Secondly, LVM-Med overall attains the best results in four settings using linear or several fully connected layers with ResNet-50. LVM-Med with ViT architecture also delivers the best records on three of four test cases compared with foundation models. ## Appendix E Visualizing results We provide qualitative results for prompt-based segmentation in Figure 7. We compare three approaches, including (i) the standard SAM without fine-tuning [6] (second column), (ii) SAM \begin{table} \begin{tabular}{l|l|c c c} \hline \hline & **Method** & **BraTS** & **MMWHS-MRI** & **MMWHS-CT** \\ \hline \multirow{3}{*}{**Prompt-based Seg.**} & SAM (fixed encoder) [9] & 85.37 \(\pm\) 0.07 & 77.64 \(\pm\) 1.14 & 76.61 \(\pm\) 1.91 \\ & SAM with Prompt (no-train) [6] & 38.97 \(\pm\) 0.21 & 59.74 \(\pm\) 0.76 & 50.25 \(\pm\) 0.33 \\ & **LVM-Med (SAM’s ViT)** & **85.76 \(\pm\)0.07** & **78.91 \(\pm\) 0.80** & **78.03 \(\pm\) 0.93** \\ \hline \hline \end{tabular} \end{table} Table 11: Prompt-based segmentation on 3D datasets. 
\begin{table} \begin{tabular}{l|l|c c c c} \hline \hline & **Method** & \multicolumn{2}{c}{**Linear Evaluation (Frozen)**} & \multicolumn{2}{c}{**Fine-tuning**} \\ \hline & & **FGADR (DR Grading)** & **Brain Tumor Class.** & **FGADR (DR Grading)** & **Brain Tumor Class.** \\ \hline \multirow{8}{*}{**2D-SSL on medical**} & Twin-Barion [23] & 66.86 \(\pm\) 0.41 & 63.03 \(\pm\) 0.32 & 66.37 \(\pm\) 0.77 & 74.20 \(\pm\) 1.38 \\ & Dino [70] & 65.98 \(\pm\) 1.91 & 62.27 \(\pm\) 0.32 & 67.35 \(\pm\) 1.36 & 71.91 \(\pm\) 1.55 \\ & SimCLR [16] & 65.30 \(\pm\) 1.70 & 62.52 \(\pm\) 1.67 & 67.55 \(\pm\) 0.28 & 73.52 \(\pm\) 3.56 \\ & Macro-\(\varnothing\)[18] & 65.98 \(\pm\) 1.04 & 62.35 \(\pm\) 1.92 & 67.55 \(\pm\) 1.79 & 74.53 \(\pm\) 0.43 \\ & Deepcluster [13] & 65.34 \(\pm\) 1.93 & 64.47 \(\pm\) 0.55 & 67.94 \(\pm\) 1.78 & 73.10 \(\pm\) 0.55 \\ & VicRegl [29] & 64.71 \(\pm\) 0.60 & 59.64 \(\pm\) 1.36 & 65.69 \(\pm\) 1.46 & 73.18 \(\pm\) 2.03 \\ & **LVM-Med (RS0)** & **68.33 \(\pm\) 0.48** & 66.33 \(\pm\) 0.31 & 68.32 \(\pm\) 0.48 & 76.82 \(\pm\) 2.23 \\ \cline{2-6} & **LVM-Med (RS0)** & **66.67 \(\pm\) 0.84** & **74.20 \(\pm\) 0.84** & **70.58 \(\pm\) 0.36** & **78.77 \(\pm\) 0.78** \\ \hline \multirow{8}{*}{**Foundation Model**} & Clip [3] & 57.87 \(\pm\) 0.50 & 57.87 \(\pm\) 0.71 & 57.48 \(\pm\) 0.86 & 34.86 \(\pm\) 2.27 \\ & 62.66 \(\pm\) 0.36 & **67.85 \(\pm\) 0.23** & 56.21 \(\pm\) 1.36 & 21.74 \(\pm\) 1.14 \\ \cline{1-1} \cline{2-6} & Flava [5] & 31.87 \(\pm\) 0.69 & 35.19 \(\pm\) 0.43 & 57.18 \(\pm\) 0.96 & 34.01 \(\pm\) 5.97 \\ \cline{1-1} \cline{2-6} & & 32.84 \(\pm\) 0.12 & 24.45 \(\pm\) 4.30 & 56.01 \(\pm\) 0.86 & 33.67 \(\pm\) 8.11 \\ \cline{1-1} \cline{2-6} & Align [4] & 36.95 \(\pm\) 1.04 & 30.71 \(\pm\) 2.35 & 57.28 \(\pm\) 0.97 & 63.96 \(\pm\) 0.04 \\ \cline{1-1} \cline{2-6} & & 38.12 \(\pm\) 1.45 & 30.41 \(\pm\) 1.35 & 57.87 \(\pm\) 1.90 & 61.42 \(\pm\) 0.25 \\ \cline{1-1} \cline{2-6} & SAM [6] & 55.13 \(\pm\) 0.41 & 31.81 \(\pm\) 4.26 & 58.75 \(\pm\) 1.32 & 60.66 \(\pm\) 1.36 \\ \cline{1-1} \cline{2-6} & **57.48 \(\pm\) 0.24** & 36.98 \(\pm\) 1.61 & 58.75 \(\pm\) 0.99 & 60.07 \(\pm\) 0.31 \\ \cline{1-1} \cline{2-6} & **LVM-Med (SAM’s ViT)** & **62.46 \(\pm\) 0.86** & 59.31 \(\pm\) 0.48 & 63.44 \(\pm\) 0.73 & 67.34 \(\pm\) 2.08 \\ \cline{1-1} \cline{2-6} & **63.83 \(\pm\) 1.36** & 64.13 \(\pm\) 1.14 & **59.04 \(\pm\) 0.14** & **64.97 \(\pm\) 2.21** \\ \hline \hline \end{tabular} \end{table} Table 12: Comparing SSL approaches and Foundation models on classification tasks with two evaluation protocols, Linear evaluation and full Fine-tuning. Settings used with several fully connected layers are in cyan. The best results in 2D-SSL and foundation models (two fully connected layers) are in bold; the best results overall are in bold and underlined. with encoders and prompt networks are frozen, and only decoder layers are trained as [7] (third column), and (iii) a similar setting as (ii) but encoders taken from LVM-Med version with SAM's ViT architecture (fourth column). For all methods, we simulate box-based prompts using the ground-truth masks and define boxes covering those target regions perturbed by offset values. Figure 7 demonstrates that the original SAM is prone to generate useless predictions (top and bottom rows) or less precise boundaries. In contrast, updated SAM and LVM-Med produce more accurate results, confirming the importance of fine-tuning to achieve adequate results. 
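The box prompts used for the comparisons in Figure 7 are derived from the ground-truth masks and then perturbed, as described above. A minimal sketch of such a simulation is given below; the maximum offset of 10 pixels and the helper name are illustrative choices, since the exact perturbation scheme follows [9] and is not spelled out here.

```
import numpy as np

def simulate_box_prompt(mask: np.ndarray, max_offset: int = 10, rng=None) -> np.ndarray:
    """Tight box around the foreground of a binary mask, jittered per side by a random
    offset to mimic an imprecise user prompt. Returns [x_min, y_min, x_max, y_max]."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    x_min = max(0, xs.min() - rng.integers(0, max_offset + 1))
    y_min = max(0, ys.min() - rng.integers(0, max_offset + 1))
    x_max = min(w - 1, xs.max() + rng.integers(0, max_offset + 1))
    y_max = min(h - 1, ys.max() + rng.integers(0, max_offset + 1))
    return np.array([x_min, y_min, x_max, y_max])

# Toy usage: a 256x256 mask with a rectangular lesion.
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:150, 80:160] = 1
box = simulate_box_prompt(mask)     # jittered box enclosing rows 100-149, cols 80-159
```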
Figures in the third and fourth columns also illustrate that SAM tends to over-segment or miss structures at an object's edges in several cases, while LVM-Med is more stable in those situations (red arrows). ## Appendix F Dataset overviews Table 13 gives an overview of the datasets used in our study. For each dataset, we provide its modality, data dimension, and the total number of samples. If a default training/testing split is available (column **Default Train/Test Rate**), we utilize all training data; otherwise, we sample \(20\%\) of the total samples to avoid potentially leaking test data of downstream tasks into the pre-training step. For datasets whose data dimensions are 3D volumes, we sample 2D slices from those formats. Some datasets, such as MSD or ADNI, comprise different sub-datasets inside; we consider these sub-sets as independent ones to avoid confusion during the training steps. In summary, a total of \(55\) datasets are used, approximately \(40\%\) of them 3D datasets and \(60\%\) 2D image datasets, as presented in Figure 8. Moreover, we also outline the ratios between distinct data modalities such as MRI, CT, X-ray, grayscale types such as Ultrasound and OCT, and finally color images, as depicted in Figure 10. Figure 7: Visualizing prompt-based predictions on three datasets: BUID, Kvasir, and ISIC. Red arrows show differences between SAM (fine-tuning) and LVM-Med using SAM’s ViT architecture. Best viewed in color with **zoom**. \begin{tabular}{l l l l l l l l} \hline \hline **No** & **Data Name** & **Topic** & **Disease** & **Modality** & **Format** & **Default Train/Test Rate** & **Total** \\ \hline 16 & Pelvic-Reference-Data [99, 96] & Pelvic & No Label & CT & 3D & No & 12 \\ \hline 17 & ProstateX & Prostate & The clinical significance of prostate lesions prediction & MRI & 3D & No & 40 \\ \hline 18 & TCGA-CESC & Cervical & No Label & Color images & 2D & No & 3977 \\ \hline 19 & TCGA-COAD [105, 96] & Colon & No Label & Color images & 2D & No & 1644 \\ \hline 20 & TCGA-ESCA [106, 96] & Esophagus & No Label & Color images & 2D & No & 4427 \\ \hline 21 & TCGA-KICH & Kidney & No Label & Color images & 2D & No & 2192 \\ \hline 22 & TCGA-KIRC [108, 96] & Kidney & No Label & Color images & 2D & No & 34108 \\ \hline 23 & TCGA- [105, 96] & Rectum & No Label & Color images & 2D & No & 248 \\ \hline 24 & TCGA-SARC [109, 96] & Sarcoma & No Label & Color images & 2D & No & 624 \\ \hline 25 & TCGA- [105, 96] & Thyroid & No Label & Color images & 2D & No & 665 \\ \hline 26 & VinDr [66] & Lung & Abnormal Disease Classification & X-ray & 2D & No & 18000 \\ \hline 27 & LUNA2016 & Lung & Nodule Detection and False Positive Reduction & CT & 3D & No & 49386 \\ \hline 28 & BCCD [110] & Cells & Blood cell detection & Color images & 2D & No & 364 \\ \hline 29 & C-NMC\_Leukemia [111, 112] & Cells & Leukemia detection & Color images & 2D & Yes & 12529 \\ \hline 30 & CBIS-DDSM [113, 114] & Breast & Breast Cancer Classification & X-ray & 2D & No & 6774 \\ \hline 31 & COVIDx [115] & Lung & Covid-19 Detection & X-ray & 2D & Yes & 194922 \\ \hline 32 & Heidelberg OCT [116] & Eye & OCT Imaging Classification & OCT & 2D & Yes & 84495 \\ \hline 33 & m2caiSeg & Laparoscopic & Semantic Segmentation Laparoscopic & Color images & 2D & Yes & 614 \\ \hline 34 & NuCLS [118] & Nucleus & Nucleus Segmentation Detection / Classification
& Color images & 2D & Yes & 1744 \\ \hline 35 & SARAS-MESAD [119][120][121] & Prostatectomy & Action classification in Prostatectomy Surgery & Color images & 2D & Yes & 29454 \\ \hline 36 & Shoulder X-ray images from Sun Yat-sen Memorial Hospital [122] & Shoulder & Shoulder X-ray Classification & X-ray & 2D & Yes & 1049 \\ \hline \hline \end{tabular}

Table 13: Overview of the datasets used in our study.
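The slice extraction and 20% subsampling described before Table 13 could be implemented roughly along the following lines; the file paths, the slicing axis, and the use of nibabel as the volume reader are assumptions made only for illustration.

```python
import random
import numpy as np
import nibabel as nib  # assumed reader for NIfTI volumes

def slices_from_volume(path, axis=2, keep_nonempty=True):
    """Load a 3D volume and return its 2D slices along the chosen axis."""
    volume = np.asarray(nib.load(path).dataobj)
    slices = [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]
    if keep_nonempty:
        slices = [s for s in slices if s.any()]  # drop all-zero slices
    return slices

def subsample(items, fraction=0.2, seed=0):
    """Keep a random fraction of the samples when no official split is given,
    to avoid leaking potential downstream test data into pre-training."""
    rng = random.Random(seed)
    k = max(1, int(len(items) * fraction))
    return rng.sample(items, k)

# Example usage (paths are placeholders):
# volume_paths = ["case_001.nii.gz", "case_002.nii.gz"]
# all_slices = [s for p in volume_paths for s in slices_from_volume(p)]
# pretraining_slices = subsample(all_slices, fraction=0.2)
```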
2304.11184
Non-local computation and the black hole interior
In a two sided black hole, systems falling in from opposite asymptotic regions can meet inside the black hole and interact. This is the case even while the two CFTs describing each asymptotic region are non-interacting. Here, we relate these behind the horizon interactions to non-local quantum computations. This gives a quantum circuit perspective on these interactions, which applies whenever the interaction occurs in the past of a certain extremal surface that sits inside the black hole and in arbitrary dimension. Whenever our perspective applies, we obtain a boundary signature for these interior collisions which is stated in terms of the mutual information. We further revisit the connection discussed earlier between bulk interactions in one sided AdS geometries and non-local computation, and recycle some of our techniques to offer a new perspective on making that connection precise.
Alex May, Michelle Xu
2023-04-21T18:00:05Z
http://arxiv.org/abs/2304.11184v3
# Non-local computation and the black hole interior

###### Abstract

In a two sided black hole, systems falling in from opposite asymptotic regions can meet inside the black hole and interact. This is the case even while the two CFTs describing each asymptotic region are non-interacting. Here, we explore these behind the horizon interactions in planar black holes, where we can relate them to non-local quantum computations. This gives a quantum circuit perspective on these interactions, which applies in arbitrary dimension. We further revisit the connection discussed earlier between bulk interactions in one sided AdS geometries and non-local computation, and recycle some of our techniques to offer a new perspective on making that connection precise.

+ Footnote †: institutetext: \({}^{a}\) Department of Physics, University of California, Berkeley, CA 94720, USA

## 1 Introduction

In the AdS/CFT correspondence, local bulk interactions are supported by local interactions in a lower dimensional boundary description. How this can be possible is puzzling, in a way that is most sharp in the context of two sided black hole solutions. There, excitations originating in each of the two asymptotic regions can meet and interact behind the black hole horizon. For instance, we can imagine an observer, Alice, who is created by acting on the left CFT, and a second observer Bob who is created by acting on the right CFT. While the CFTs don't interact, the bulk description of their time evolution involves Alice and Bob meeting and interacting behind the horizon.
After being initially emphasized by Marolf and Wall [1], how to understand these behind the horizon interactions as emerging from a boundary perspective has remained puzzling. In this article, we relate behind the horizon interactions to a circuit construction in quantum information theory known as a non-local quantum computation, illustrated in figure 1. Our discussion is closely related to earlier work which argued bulk interactions in spacetimes with a single asymptotically AdS boundary are supported by non-local computations in the CFT [2; 3; 4; 5]. Our main contribution is to extend this to the setting of two sided black hole geometries, where the tension between having local bulk interactions and two boundary theories which do not interact becomes particularly sharp. Our construction also works in arbitrary dimensions, extending earlier results which apply to \(2+1\) bulk dimensions. The basic setting we study is a planar AdS\({}_{d+1}\) black hole, dual to two CFTs on spatial \(\mathbb{R}^{d-1}\) and placed in the thermofield double state. In this setting, the entanglement wedge of a region defined by \(X_{1}>0\) (and including points from both CFTs) will include a portion of the black hole interior [6]. We leverage this geometrical observation and entanglement wedge reconstruction to argue that behind the horizon interactions can be reproduced in the form of a non-local computation. More quantitatively, we are interested in understanding how much entanglement is necessary to support interaction within a bulk region of a given size. While our planar black hole solutions have infinite entanglement between the two CFTs, we use end-of-the-world brane geometries, dual to pairs of entangled BCFTs, to truncate the geometry and render the entanglement finite. This also fixes the size of the bulk subregion where interaction can happen. For large enough BCFTs, extremal surfaces enclosing half spaces still reach into the black hole, but small BCFTs have extremal surfaces that sit outside the hole and instead attach to the ETW branes. In bulk \(2+1\) dimensions we can solve for this transition, which gives a quantitative perspective on how much entanglement is necessary to support bulk interactions within a scattering region of a given size. It is useful to compare our discussion to the earlier one [2; 3; 4] addressing the single sided setting. There, interactions in the bulk (figure 1(a)) were most easily related to boundary processes with the form shown in figure 1(c), which we refer to as an augmented non-local computation. Notice that compared to a standard non-local computation, extra quantum systems appear in the second round operations. While we could now choose to consider the (slightly more complicated) boundary description, these 'extra regions' actually make the boundary circuit much more difficult to understand and constrain. In particular, these circuits can support interactions while having the reduced density matrix describing the state held at the two input locations be unentangled [5]. This obscures the role entanglement is playing in supporting local bulk interaction. In the two sided setting, this complication is naturally removed, and bulk interactions are directly related to the standard non-local computation scenario.
To relate interactions in the one-sided setting to non-local computation, [2; 3; 4] argued on heuristic grounds that the (non-augmented) non-local computation was the correct boundary model, and [5] argued this can be made precise in a lattice regularization of the CFT.1 Applied in AdS\({}_{2+1}\), our construction reaches the same conclusion by assuming the AdS/BCFT correspondence rather than introducing a lattice. Both constructions can be used to relate bulk computation to non-local computation exploiting finite dimensional Figure 1: (a) Circuit diagram showing the local implementation of a unitary in terms of a unitary \(\mathbf{U}\). (b) Circuit diagram showing the non-local implementation of a unitary \(\mathbf{U}\). The boundary description implies that interactions in the interior of certain black hole solutions must be reproducible in this form, using an entangled state with mutual information set by the black hole area. c) For AdS geometries with a single asymptotic boundary, bulk computations are naively related to the augmented non-local computation shown here. Extra systems enter the circuit in the second round. entangled states: [5] inherits this from the lattice description, while our approach uses the relation between smooth max entropy and von Neumann entropy in holographic states [9] to justify a finite dimensional approximation. With the relationship between bulk interaction and non-local computation in hand, we can revisit a number of questions raised around this connection, which we can now pose in the black hole setting and in arbitrary dimensions. In particular, this connection implies that constraints on non-local computation imply constraints on bulk interaction.2 This has potential, but not fully explored, implications for constraints on computation in holographic spacetimes [10]. Conversely, whatever interactions can happen in the bulk within spacetime regions of a given size must, assuming the AdS/CFT correspondence, be implementable non-locally with an amount of entanglement set by the size of the region. This has potential, but not understood, implications for non-local computation. While we focus in this article on establishing the non-local computation and bulk interaction in new settings, we briefly comment on these directions in the discussion. Footnote 2: Unfortunately, constraints on non-local computation are so far poorly understood, but it remains an active area in quantum information theory. See [10] for a recent summary. Examining the connection between non-local computation and bulk interactions from the perspective of black holes and ETW branes also highlights an issue relating to sub-AdS locality. In both the two-sided and one-sided cases, our boundary description of the bulk interaction also includes a description of a super-AdS sized subregion of the bulk, even when the interaction is localized to a small region. In our context, this is enforced by a change in the minimal extremal surface homologous to certain boundary subregions. In this sense, our quantum circuit model of behind the horizon interactions can't (at this stage) isolate the boundary description of interactions happening within sub-AdS scale bulk regions. The construction in [5] also suffers from this limitation. This is analogous to tensor network models of spatial slices of the bulk geometry, which also struggle to capture sub-AdS locality [11; 12; 13; 14]. 
However, the heuristic arguments of [2; 4; 15] suggest that there should be a non-local computation description of sub-AdS scale bulk subregions.3 This motivates looking for constructions that can relate the sub-AdS sized regions to (non-augmented) non-local computation. We comment in the discussion on some extensions of our setting that may allow this. Footnote 3: In particular, the connected wedge theorem of [2; 4; 15] gives that the boundary entanglement is consistent with small bulk subregions having a non-local computation description. **Outline of our article** In section 2.1 we describe the AdS/BCFT correspondence, our prescription for relating entangled BCFT states to bulk AdS geometries. In section 2.2 we review an earlier framework for relating quantum information processing tasks in the bulk and boundary perspectives, into which we can fit our construction. In section 3 we discuss behind the horizon interactions in the planar BTZ black hole. Section 3.1 gives our general construction relating behind the horizon interactions and non-local computation in the context of planar black holes. Section 3.2 describes a more concrete setting, the planar BTZ black hole, where more quantitative details can be determined. In section 4 we point out that the entanglement between our BCFTs can be well approximated by a state in a finite dimensional Hilbert space. This allows a stronger connection between existing bounds on non-local computation given in the quantum information literature, typically proven for finite dimensional systems, and constraints on interactions happening in a holographic geometry. In section 5 we revisit the global AdS\({}_{2+1}\) setting, discussed in earlier works [2; 3; 5; 16]. Here, we point out that the same ETW brane techniques allow a new perspective on the relationship between bulk interactions and non-local computation. The principal claim is that interactions happening in the bulk can be reproduced as non-local computations, using entanglement related to the size of the region in which they occur. For large regions, the entanglement needed approaches the area of the region, up to an additive constant. In section 5.4 we briefly discuss how to extend the connection between non-local computation and bulk interaction to global AdS\({}_{d+1}\) for \(d>2\). In section 6 we conclude with some remarks and open questions.

Summary of notation: Spacetime notation:

* We use script capital letters \(\mathcal{A},\mathcal{B},\mathcal{C},\dots\) for boundary spacetime regions.
* The entanglement wedge of a boundary region \(\mathcal{A}\) is denoted by \(E_{\mathcal{A}}\).
* The quantum extremal surface associated to region \(\mathcal{A}\) is denoted \(\gamma_{\mathcal{A}}\).
* We use plain capital letters \(A,B,C,\dots\) to refer to bulk spacetime regions.
* We use \(J^{\pm}(A)\) to denote the causal future or past of region \(A\) taken in the bulk geometry, and \(\hat{J}^{\pm}(\mathcal{A})\) to denote the causal future or past of \(\mathcal{A}\) within the boundary spacetime.

Quantum notation:

* We use capital letters to denote quantum systems \(A,B,C,...\)
* We use boldface, script capital letters for quantum channels, \(\boldsymbol{\mathcal{N}}(\cdot)\), \(\boldsymbol{\mathcal{T}}(\cdot)\),...
* We use boldface capital letters to denote unitaries or isometries, \(\mathbf{U},\mathbf{V},...\) ## 2 Preliminaries ### AdS/BCFT primer Our constructions exploit the AdS/BCFT correspondence [17; 18], which relates holographic CFTs defined on manifolds with a boundary to AdS geometries ended by end-of-the-world branes. One way to understand why these solutions appear in our constructions is that we will consider the state on subregions of a CFT, then want some controlled way to consider a purification of that subregion with a well defined bulk geometry. Introducing CFT boundary conditions at the edge of the subregion we are considering is one way to do this. In this brief section we introduce the needed elements of AdS/BCFT. A BCFT is a conformal field theory living on a manifold with boundary, along with a conformally invariant boundary condition. For appropriate BCFTs, the AdS/BCFT correspondence conjectures a bulk dual description, which consists of an asymptotically AdS region along with an extension of the CFT boundary into the bulk as an ETW brane. To avoid confusion with the bulk-boundary language of the AdS/CFT correspondence, we will refer to the CFT boundary as the _edge_. The bulk spacetime and brane are described by an action \[I_{\text{bulk}}+I_{\text{brane}} =\frac{1}{16\pi G_{N}}\int d^{d+1}x\,\sqrt{g}(R-2\Lambda+L_{\text {matter}})\] \[\qquad+\frac{1}{8\pi G_{N}}\int_{\mathcal{B}}d^{d}y\,\sqrt{h}(K+L _{\text{matter}}^{\mathcal{B}})\;, \tag{1}\] where \(L_{\text{matter}}\) and \(L_{\text{matter}}^{\mathcal{B}}\) are matter Lagrangians for fields in the bulk and brane respectively. \(R\) is the Ricci curvature and \(\Lambda\) the bulk cosmological constant, while \(K\) is the trace of the extrinsic curvature of the brane, \[K_{ab}=\nabla_{a}n_{b}\;, \tag{2}\] for outward normal \(n_{j}\) to \(\mathcal{B}\), and \(a,b\) refer to brane coordinates \(y^{a}\). This action leads to Einstein's equations in the bulk, along with the boundary condition \[-\frac{1}{8\pi G_{N}}(K_{ab}-Kh_{ab})=T_{ab}^{\mathcal{B}}\;. \tag{3}\] This AdS/BCFT model should be understood as a bottom-up model of concrete holographic dualities. For example one can set conformally invariant boundary conditions on \(\mathcal{N}=4\) SYM and study a holographic dual [19; 20; 21; 22]. In these models the bulk geometry has compact dimensions that degenerate somewhere, which corresponds in the bottom-up model to the placement of the ETW brane. We will be most interested in on-shell action and entropy calculations, where the bottom-up model reproduces universal CFT results [17]. More refined probes of these AdS/BCFT models may be more problematic [23], though our arguments don't rely on the models being correct for those observables.4 Footnote 4: In more detail, [23] studies Lorentzian BCFT correlation functions that probe the (putative) brane geometry in the bottom-up model given here. They find the singularity structure of these correlators doesn’t match that expected from bulk causality unless an apparently unnatural condition is imposed on the BCFT spectrum. In contrast, our setting involves a bulk process that remains far away from the branes, and we only rely on the bottom up model to determine the placement of minimal surfaces and to understand when a particular geometry is the relevant bulk saddle. The Ryu-Takayanagi formula gives a method for computing boundary entropy in terms of bulk degrees of freedom. 
In one of its modern forms, this can be expressed as [24] \[S(A)=\min_{\gamma_{ext}}\text{ext}_{\gamma\in\text{Hom}(A)}\left(\frac{\text{ area}(\gamma)}{4G_{N}}+S_{bulk}(E_{\gamma})\right). \tag{4}\] The set \(\text{Hom}(A)\) is the set of codimension 2 spacelike surfaces such that there exists a codimension 1 spacelike surface \(E_{\gamma}\) satisfying \[\partial E_{\gamma}=\gamma\cup A. \tag{5}\] We say that \(\gamma\in\text{Hom}(A)\) are _homologous_ to \(A\). The quantity \(S_{bulk}(E_{\gamma})\) is the von Neumann entropy of the bulk subregion \(E_{\gamma}\). In AdS/BCFT, the Ryu-Takayanagi formula continues to calculate the entropy of boundary subregions, provided the homology condition is appropriately adapted [25]. The appropriate definition of homologous in the presence of ETW branes is that there needs to exist a spacelike codimension 1 surface \(E_{\gamma}\) such that \[\partial E_{\gamma}=\gamma\cup A\cup b \tag{6}\] where \(b\) is allowed to be any portion of the ETW brane. Given a subregion of the boundary \(A\), it is natural to ask if a subregion of the bulk is recorded into \(A\). To make this question more precise, we should introduce a choice of bulk subspace, which we refer to as the code-space and label \(\mathcal{H}_{code}\). The subspace \(\mathcal{H}_{code}\) might for instance be specified by a particular choice of bulk geometry, along with some qubits distributed spatially across the bulk. Then, assume we are told the bulk degrees of freedom are in a state within \(\mathcal{H}_{code}\), and we are given the degrees of freedom on subregion \(A\). What portion of the bulk degree's of freedom can we recover? Answering this question is related closely to the RT formula, in both AdS/CFT and AdS/BCFT In particular, the portion of the bulk we can recover if we know the bulk state in \(\mathcal{H}_{code}\) is given by [26; 27] \[E_{A}\equiv\bigcap_{\psi\in\mathcal{H}_{code}}E_{\gamma_{A}}. \tag{7}\] That is, for each state in the code space we find where the RT surface \(\gamma_{A}\) sits, and define the corresponding bulk subregion \(E_{\gamma_{A}}\). Then, we define the intersection of all such surfaces, considering all states in the code-subspace. Note that in this procedure we should include mixed states of the code-space. The resulting region is the portion of the bulk degrees of freedom we can recover, if we know nothing about which state in the code-space the full bulk is in. This region is sometimes referred to as the _reconstruction wedge_ of region \(A\), defined relative to the code-space \(\mathcal{H}_{code}\). Given that it is possible to recover information inside the reconstruction wedge, we can also ask what explicit operation recovers the code space from the CFT degrees of freedom. Given a global map from the bulk subspace \(\mathcal{H}_{code}\) to the boundary Hilbert space, it was understood in [28] how to construct such a recovery channel. Note that in this construction, a single choice of recovery channel works correctly for the entire code-space. To illustrate the AdS/BCFT correspondence, we show a simple AdS/BCFT solution in figure 2. There, the BCFT is defined on \(\mathbb{R}^{-}\times\mathbb{R}\), though we supress the timelike \(\mathbb{R}\) direction in the figure. We have taken the bulk matter Lagrangian to be zero, and the brane matter Lagrangian to be \[L^{\mathcal{B}}_{matter}=-T. \tag{8}\] he parameter \(T\) is referred to as the tension. 
The solution can be described by \[ds^{2}=\frac{\ell^{2}}{z^{2}}(dz^{2}-dt^{2}+dx^{2}) \tag{9}\] with an ETW brane along an \(AdS_{1+1}\) slice \(\frac{x}{z}=\tan\Theta\), where the angle \(\Theta\) is related to the tension parameter as \[\sin\Theta=T\ell. \tag{10}\] Notice that consistent solutions require \[-\frac{1}{\ell}\leq T\leq\frac{1}{\ell}. \tag{11}\] In the CFT, the tension parameter is related to the boundary entropy, a measure of the number of degrees of freedom localized to the edge. The boundary entropy is expressed as \(S_{b}=\log g\), and is related to the tension by \[\log g_{B}=\frac{\ell}{4G_{N}}\text{arctanh}(T\ell). \tag{12}\] Qualitatively, increasing the boundary entropy adds degrees of freedom to the CFT, and corresponding increases the angle \(\Theta\), adding a portion of bulk spacetime. ### Holographic quantum tasks with input and output regions In this section we review an operational perspective on AdS/CFT introduced in [2] and subsequently developed in [3; 15; 16]. The basic idea is to consider quantum information processes that can be interpreted in the bulk and boundary perspectives, and to reason about how the two descriptions of these processes inform each other. An up to date and more complete description of this perspective can be found in [15]. Here we briefly review a few ingredients which contextualize our discussion. We will consider information processing tasks that have inputs and outputs distributed in spacetime. To describe this task succinctly, it is helpful to introduce two agencies, Alice and Bob, who interact to carry out the task.5 We will label the input locations by \(\mathcal{C}_{i}\) where Figure 2: Geometry dual to the vacuum state of a BCFT on a half line. Bob gives Alice quantum systems \(A_{i}\). These may be entangled with some reference \(R\), which we take to be held by Bob. Alice carries out the task by manipulating the input systems and preparing a set of output systems \(B_{i}\), which she returns to Bob at another set of spacetime locations \(\mathcal{R}_{i}\). Bob then makes a measurement on the \(B=B_{1}B_{2}...B_{n}\) system with elements \(\{\Lambda_{B},\mathcal{I}_{B}-\Lambda_{B}\}\) and we declare Alice successful if she obtains outcome \(\Lambda_{B}\). Following Kent [29], who initially formalized this notion, we refer to these scenarios as _relativistic quantum tasks_. The input and output locations will be extended regions in spacetime, though we sometimes idealize these as spacetime points if we wish to take the regions to be small. We will consider relativistic quantum tasks in the bulk and boundary perspectives in AdS/CFT. By comparing the two perspectives, we can gain interesting insights into the AdS/CFT correspondence and into quantum information processing. The key ingredient in relating bulk and boundary perspectives on relativistic quantum tasks is the notion of the reconstruction wedge, which we reviewed briefly in the last section. In particular, given a task defined in the bulk geometry with input or output region \(X_{i}\), this will correspond to a task in the boundary which has an input or output region containing \(X_{i}\) in its reconstruction wedge. To make this more precise, consider a task \(\mathbf{T}\) defined in the bulk with input regions \(\{C_{i}\}_{i}\) and output regions \(\{R_{i}\}_{i}\). 
In the boundary, consider a task with input regions \(\{\mathcal{C}_{i}\}_{i}\), \(\{\mathcal{R}_{i}\}_{i}\) such that \[C_{i}\subseteq E_{\mathcal{C}_{i}},\] \[R_{i}\subseteq E_{\mathcal{R}_{i}}.\] Note that in general there can be many such choices of boundary regions; we can take any one of them. The boundary task with the input and output regions defined in this way is taken to have the same input and output systems \(A_{i}\), \(B_{i}\), and the same choice of measurement that defines the success or failure of the task. We call the resulting boundary task \(\hat{\mathbf{T}}\). The tasks \(\mathbf{T}\) and \(\hat{\mathbf{T}}\) are related in a simple way. In particular, we can notice that a strategy for completing the task in the bulk that succeeds with probability \(p_{suc}(\mathbf{T})\) implies the existence of a corresponding strategy in the boundary that succeeds with the same probability. This is because given access to boundary regions \(\mathcal{C}_{i}\), \(\mathcal{R}_{i}\), Alice can encode her inputs into bulk regions \(C_{i}\), allow boundary time evolution (which in the bulk picture implements the strategy for completing the task), then recover the outputs \(B_{i}\) from output regions \(\mathcal{R}_{i}\). We can summarize this as \[p_{suc}(\mathbf{T})\leq p_{suc}(\hat{\mathbf{T}}). \tag{13}\] This is the starting point for the various implications for AdS/CFT that can be extracted from studying relativistic quantum tasks. This can also be strengthened in an interesting way, as pointed out in [15], although we won't use the stronger version here.

## 3 Computation inside black holes

In this section, we show that computations happening inside of planar black holes can be reproduced as non-local quantum computations. More precisely, computations inside of a particular subregion of the black hole, defined below, can be reproduced in the non-local form of figure 1(b) using a resource system with mutual information given by the black hole area. ### General construction The metric for the AdS\({}_{d+1}\) planar black brane can be written as [6] \[ds^{2}=-(\ell g(\rho))^{2}dt^{2}+\ell^{2}d\rho^{2}+(\ell h(\rho))^{2}\sum_{i=1}^{d-1}dX_{i}^{2} \tag{10}\] where \[h(\rho) =\frac{2}{d}\left(\cosh\left(\frac{d\rho}{2}\right)\right)^{2/d}\] \[g(\rho) =h(\rho)\tanh\left(\frac{d\rho}{2}\right). \tag{11}\] These coordinates cover one exterior region for \(\rho\geq 0\). The temperature of the black hole has been fixed at \(1/2\pi\). A Penrose diagram showing the maximally extended geometry is shown in figure 3. The boundary dual is understood to be two copies of a holographic CFT, placed in the thermofield double state. Each CFT lives on a \(d\) dimensional Minkowski space. We will consider the entanglement wedges of half spaces of the combined CFTs, defined by \(X_{1}>0\), \(X_{1}<0\). As discussed in [6], the corresponding minimal surfaces thread through the black hole. The explicit form of this surface is discussed there. We will give an explicit formula for this surface in the low dimensional case treated below, but for now only need Figure 3: Penrose diagram of the planar black brane. \(d-1\) planar spatial directions are suppressed. An extremal surface (blue) anchored to the boundary at \(x_{0}=0\) threads through the black hole. to note that this extremal surface exists. We will call this the connected extremal surface. We will also consider truncated versions of this geometry, corresponding to the metric (10) but with \(X_{1}\) taken in a finite interval.
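As a consistency check that connects to the planar BTZ discussion of section 3.2 below (this specialization is ours and is not spelled out in the text), one can set \(d=2\) in the metric functions above,
\[h(\rho)\big|_{d=2}=\cosh\rho\,,\qquad g(\rho)\big|_{d=2}=\cosh\rho\,\tanh\rho=\sinh\rho\,,\]
so that the line element reduces to
\[ds^{2}=-\ell^{2}\sinh^{2}\!\rho\,dt^{2}+\ell^{2}d\rho^{2}+\ell^{2}\cosh^{2}\!\rho\,dX_{1}^{2}\,,\]
which is a planar BTZ black hole at temperature \(1/2\pi\), written in coordinates different from those used in section 3.2.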
To find the dual CFT description, Wick rotate \(t\to i\phi\) to obtain a Euclidean geometry. This is dual to a Euclidean BCFT on \(S^{1}\times[-X^{0}_{1},X^{0}_{1}]\times\mathbb{R}^{d-2}\). We assume boundary conditions at \(\pm X^{0}_{1}\) that have zero boundary entropy for simplicity. There are two possible bulk saddles dual to the path integral preparing this state, which we illustrate in figure 4, that feature zero tension branes in two possible configurations. For our construction to succeed, we will need to be in the regime where each BCFT edge is dual to its own bulk ETW brane, as in figure (b)b. In this solution, analytically continuing the \(\phi\) coordinate back to \(it\) we obtain two sided Lorentzian planar black brane geometries, whereas in the opposite saddle (figure (a)a) we obtain two disconnected geometries. We will assume that for large enough \(X^{0}_{1}\) the saddle with two branes will be of minimal action. The zero tension brane trajectory for the connected geometry is defined by \(X_{1}=\pm X^{0}_{1}\), as is easy to verify. In the next section we study this transition explicitly for AdS\({}_{2+1}\). Notice that the black hole area in the truncated geometry is now \(2X^{0}_{1}\), and in particular is finite. With these geometrical comments in place we can move on to establish the relationship between non-local computation and the black hole interior. The most explicit setting will be in the planar BTZ black hole, but the qualitative picture is the same in all dimensions. We first discuss the general setting, making some qualitative statements, then in the next section we give the more concrete BTZ case. To begin, consider quantum systems \(A_{L}\), \(A_{R}\) which are initially stored within the left and right asymptotic regions of the planar black hole, respectively. These systems fall into Figure 4: Euclidean gravity solutions corresponding to the thermofield double state of two BCFTs on intervals. (a) When the interval is short, the ETW brane connects the two BCFT edges. (b) When the interval is long, the ETW brane is in two pieces, and separately attached to each edge. We Wick rotate the angular \(\phi\) coordinate. The solution with two disconnected branes Wick rotates to a planar black hole. the black hole, interact, and produce output systems \(B_{+}\), \(B_{-}\). We then have these systems travel in the \(\dot{X}_{1}\) and \(-\dot{X}_{1}\) directions, respectively. The overall process is described in figure 6. We take all relevant systems to consist of fewer qubits than the black hole entropy. To describe this process in the boundary perspective, we first note that we can take as input regions \(\mathcal{C}_{L}\), \(\mathcal{C}_{R}\) the entire left and right CFTs. These then contain the inputs \(A_{L}\) and \(A_{R}\) in their reconstruction wedges, which will be the left and right exterior regions. For Figure 5: The planar BTZ black hole, obtained by Wick rotating the \(\phi\) coordinate in the Euclidean geometry of figure (b)b. The planar direction extends over \(X\in[-X_{1}^{0},X_{1}^{0}]\), where it is ended by ETW branes. Figure 6: Illustration of our thought experiment reproducing bulk interactions as non-local computations. \(\text{Alice}_{L}\) and \(\text{Alice}_{R}\) hold the left and right BCFTs of the solution shown in figure 5. (a) The Alice’s receive the inputs to the computation, and throw their respective systems into the black hole. The systems interact inside of the black hole interior. 
(b) After the two CFTs have evolved to time \(T\), \(\text{Alice}_{L}\) and \(\text{Alice}_{R}\) use a single, simultaneous round of communication to redistribute systems so that \(\text{Alice}_{L}\) holds the \((0,X_{1}^{0}]\) of both CFTs, and \(\text{Alice}_{R}\) holds both of the \([-X_{1}^{0},0)\) intervals. For sufficiently large \(X_{1}^{0}\), the portion of the bulk that can be recovered from the two \((0,X_{1}^{0}]\) intervals includes \(B_{L}\), and the systems recoverable from the two \([-X_{1}^{0},0)\) intervals includes \(B_{R}\). This construction shows computations happening in the black hole interior can be implemented as a non-local computation. output regions, we choose a time \(T\) in the CFT, and consider the CFT regions \[\mathcal{R}_{-}=\{p:t=T,X_{1}<0\},\] \[\mathcal{R}_{+}=\{p:t=T,X_{1}>0\}. \tag{11}\] In words \(\mathcal{R}_{-}\) is the \(X_{1}<0\) portion of both CFTs taken at time \(T\), \(\mathcal{R}_{+}\) is the \(X_{1}>0\) portions of both CFTs taken at the same time. The boundary of the reconstruction wedges of \(\mathcal{R}_{+}\) and \(\mathcal{R}_{-}\) is defined by the extremal surface anchored to the two boundary cuts \(\gamma_{+}\), \(\gamma_{-}\), defined at \(X_{1}=0,t=T\) in the two CFTs. For \(X_{1}^{0}\) large enough, we expect this extremal surface is the connected surface that threads through the black hole. For small \(X_{1}^{0}\) however, a new extremal surface can become the minimal one. Rather than thread the black hole, this alternative surface runs from the asymptotic region to one of the ETW branes. This extremal surface cannot cross the black hole horizon.6 Thus in the disconnected case, the reconstruction wedges of \(\mathcal{R}_{-}\) and \(\mathcal{R}_{+}\) do not see into the black hole interior. We will assume that we have chosen \(X_{1}^{0}\) large enough that the connected extremal surface is minimal, so that the wedge of \(\mathcal{R}_{-}\) and \(\mathcal{R}_{+}\) include a portion of the black hole interior. Footnote 6: One way to see this is to note that if it did, operators in the corresponding entanglement wedge could be used to signal the opposite asymptotic region (if it crosses the past horizon) or to receive signals from the opposite asymptotic region (if it crosses the future horizon). To relate the bulk interaction to a non-local computation, we exploit that the reconstruction wedges of our two half spaces reach into the interior. In particular, consider an interaction that takes place in the bulk subregion \[J_{T}=J^{-}(E_{\mathcal{R}_{+}})\cap J^{-}(E_{\mathcal{R}_{-}})\cap J^{+}(E_{ \mathcal{C}_{L}})\cap J^{+}(E_{\mathcal{C}_{R}}) \tag{12}\] which we can also express as \[J_{T}=J^{-}(E_{\mathcal{R}_{+}})\cap J^{-}(E_{\mathcal{R}_{-}})\cap I \tag{13}\] where \(I\) denotes the black hole interior. We refer to \(J_{T}\) as the _scattering region_. If the interaction occurs inside this region, then the output systems \(B_{-}\) and \(B_{+}\) will each enter one of the wedges \(E_{\mathcal{R}_{-}}\) and \(E_{\mathcal{R}_{+}}\). Notice that if the input systems are too large, or the interaction in the bulk is associated with a large backreaction, the minimal extremal surface may transition from a connected surface threading through the black hole to a disconnected, brane anchored surface. So long as the connected surface remains the minimal one, interactions happening within \(J_{T}\) can be reproduced as non-local computations. To see this, consider the non-local computation circuit shown in figure 0(b). 
Relating that picture to our setting, the left and right ends of the shared entangled state represent the left and right CFTs. The local operations on the left and right at the first time step can be used to insert the inputs \(A_{L}\) and \(A_{R}\) into the bulk, and allow the left and right CFTs to evolve under time evolution. Then, we consider recovery operations that act on \(\mathcal{R}_{\pm}\). Since systems \(B_{\pm}\) have been brought into the corresponding reconstruction wedges \(E_{\mathcal{R}_{\pm}}\), these systems can be extracted from the bulk by acting on \(\mathcal{R}_{\pm}\). The swap operation at the middle level then collects both the \(+\) halves and both the \(-\) halves of the CFTs, so that the final round operations can be taken to be the necessary recovery operations. Thus we can reproduce the bulk computation in the form of a non-local computation. Further, the needed entangled state was exactly the thermofield double, which has a black hole's entropy worth of mutual information. We can also express the relationship to non-local computation in more operational language. We consider two agents, \(\mathrm{Alice}_{L}\) and \(\mathrm{Alice}_{R}\), who initially hold the left and right CFTs respectively. \(\mathrm{Alice}_{L}\) acts on her CFT to insert \(A_{L}\) into the bulk, and \(\mathrm{Alice}_{R}\) acts on her CFT to insert \(A_{R}\) into the bulk. Both locally apply the time evolution operators for their CFTs. Next, \(\mathrm{Alice}_{L}\) and \(\mathrm{Alice}_{R}\) divide their CFT degrees of freedom and exchange a round of communication to bring the \(\mathcal{R}_{+}\) degrees of freedom to \(\mathrm{Alice}_{R}\) and \(\mathcal{R}_{-}\) degrees of freedom to \(\mathrm{Alice}_{L}\). They then each apply the appropriate recovery operations to produce the \(B_{-}\) and \(B_{+}\) systems. Beyond the circuit in figure 1(b) having the form of a non-local computation, our construction also shows it has an additional property. In particular, a non-local circuit capturing a bulk interaction can always be taken to have fixed recovery channels as its second round operations. This means bulk computation must be reproducible in a stricter form than the usual form of a non-local computation. This property also appears in the construction in [5], though it isn't noted explicitly. We comment on this property further in the discussion. Notice that in relating bulk interaction and non-local computation we used two features of our geometries: that the ETW brane configuration shown in figure 4(b) is minimal at large \(X_{1}^{0}\), and that the connected extremal surface that threads through the black hole is minimal at large \(X_{1}^{0}\). Both these statements must hold in the limit \(X_{1}^{0}\to\infty\), but to better understand this we find the explicit phase transitions for AdS\({}_{2+1}\) below. This in particular lets us study how much entanglement (as set by the black hole area) is required to support interactions within a given scattering region, and further how small a region can be studied in this way. ### Non-local computation in the planar BTZ solution Our construction can be made more explicit in the planar BTZ black hole. We take the planar direction to be a finite interval \(X\in[-X^{0},X^{0}]\). This is dual to a pair of BCFTs living on the spatial interval \([-X^{0},X^{0}]\). For simplicity, we take the bulk ETW branes to be zero tension. To construct our BTZ black hole solution, we begin with the Euclidean path integral on a cylinder \([0,2\pi)\times[-X^{0},X^{0}]\).
Note that by taking the angular direction to run from \(0\) to \(2\pi\) we are fixing the black hole temperature to be \(1/2\pi\). The bulk solution is global Euclidean AdS, \[ds^{2}=\ell^{2}\left(\frac{\rho^{2}}{\ell^{2}}+1\right)dX^{2}+\frac{d\rho^{2}}{\frac{\rho^{2}}{\ell^{2}}+1}+\rho^{2}d\phi^{2}. \tag{3.6}\] As studied in [30], at zero tension the ETW branes are in the disconnected configuration of figure 4(b) when \(X^{0}\geq\pi/2\), and otherwise the two branes connect. Explicitly, the branes sit at \(X=\pm X^{0}\). From here forward, we take \(X^{0}\geq\frac{\pi}{2}\). Considering the metric above, Wick rotate \(\phi\to it\). The result is the thermofield double state with temperature \(1/2\pi\), defined on two BCFTs on intervals \([-X^{0},X^{0}]\). The resulting metric \[ds^{2}=\ell^{2}\left(\frac{\rho^{2}}{\ell^{2}}+1\right)dX^{2}+\frac{d\rho^{2}}{\frac{\rho^{2}}{\ell^{2}}+1}-\rho^{2}dt^{2} \tag{3.7}\] covers the exterior region \(\rho>0\). We can extend this geometry to the full Lorentzian black hole geometry, obtaining \[ds^{2}=\frac{\ell^{2}}{\cos^{2}(w)}\left(-ds^{2}+dw^{2}+\cos^{2}(s)dX^{2}\right). \tag{3.8}\] The two end of the world branes are still located at \(X=\pm X^{0}\). The asymptotic regions sit at \(w=\pm\pi/2\), and the black hole horizons at \(s=\pm w\). The coordinate change relating this metric to 3.7 is given in appendix A. In relating bulk interaction to non-local computation, consider the extremal surface anchored to the points \(X=0,s=T,w=\pm\pi/2\). There are two candidate extremal surfaces: a connected surface that threads through the black hole, and a disconnected surface that connects each of the boundary anchor points to the ETW brane. The form of the connected surface \(\gamma_{T}\) is simple; it is just \[w =\arcsin(\tanh\lambda),\] \[s =T. \tag{3.9}\] In appendix C, we find the areas of this and the brane-anchored surfaces. We find the connected surface has area \[A_{c}[T]=2\ell\log\left(\frac{2}{\epsilon}\right), \tag{3.10}\] and the disconnected one has area \[A_{d}[T]=2\ell\log\left(\frac{2\sinh|X^{0}|\cos(T)}{\epsilon}\right). \tag{3.11}\] These give that the area difference is \[\Delta A=2\ell\log\left(\sinh|X^{0}|\cos T\right) \tag{3.12}\] and the connected surface is minimal when \[\sinh X^{0}\geq\sec T. \tag{3.13}\] Referring to the general discussion in the last section, we see that imposing condition 3.13 along with \(X^{0}\geq\pi/2\) suffices for bulk interactions to be reproduced as non-local computations. For example, at \(T=0\) condition 3.13 only requires \(\sinh X^{0}\geq 1\), i.e. \(X^{0}\gtrsim 0.88\), so the brane condition \(X^{0}\geq\pi/2\) is the stronger requirement; as \(T\to\pi/2\) the roles reverse.

## 4 Finite dimensional model

We argued above that, so long as we don't induce a large backreaction, computations which happen in the region \(J_{T}\) can be implemented as non-local computations, using the thermofield double state as the entangled resource system. While the entanglement as measured by the mutual information of the thermofield double is finite in our setting, the CFT Hilbert spaces are infinite dimensional. However, results on non-local computation are typically proven in a finite dimensional setting. If we would like to apply them to holography, then we need to understand if the protocol carried out by the Alices in the last section can be approximated by one involving only finite dimensional systems. To do this, a key ingredient is that in many holographic states the max entropy is close to the von Neumann entropy.
Since the von Neumann entropy of our two CFTs is finite, this will let us show that the initial resource system, consisting of the entangled state of two CFTs, can be approximated closely by a finite dimensional system. In more detail, we study the following general form of a non-local computation protocol, which we argue below captures the holographic setting. **Definition 1**.: A **non-local computation protocol** takes the following form. 1. The inputs are recorded into Hilbert spaces \(\mathcal{H}_{A_{L}}\), \(\mathcal{H}_{A_{R}}\), each consisting of \(n\) qubits. 2. The resource system \(\ket{\phi}_{C_{L}C_{R}}\) lives in a Hilbert space \(\mathcal{H}_{C_{L}}\otimes\mathcal{H}_{C_{R}}\), where each tensor factor consists of \(E\) qubits. 3. In the first round an isometry \(\mathbf{V}^{L}_{A_{L}C_{L}\to C_{L,L}C_{L,R}}\otimes\mathbf{V}^{R}_{A_{R}C_{R} \to C_{R,L}C_{R,R}}\) is applied. 4. In the second round isometries \(\mathbf{W}^{L}_{C_{L,L}C_{R,L}}\otimes\mathbf{W}^{R}_{C_{L,R}C_{R,R}}\) are applied. A pair of the output subsystems are identified as \(B_{L}\) and \(B_{R}\). Notice that between the first and second rounds systems \(C_{L,R}\) and \(C_{R,L}\) have been exchanged. This interchange of systems corresponds to the communication round of the non-local computation. We illustrate this protocol in figure 7. We would like to justify modelling the protocol described in our black hole setting in this form, and in particular understand how large of Hilbert spaces are needed to capture the power of the holographic protocol. To do this, first note that the entropy of the left (or right) BCFT is \[S(L)=\frac{A_{bh}}{4G_{N}}=2\ell X^{0}. \tag{4.1}\] In particular, there is no UV divergence. Next recall that the smooth max entropy [31] is defined as \[S^{\epsilon}_{max}(A)_{\rho}=\min_{\sigma_{A}:||\rho_{A}-\sigma_{A}||_{1}\leq \epsilon}\log(\text{rank}\,\sigma_{A}) \tag{4.2}\] and that the smooth max entropy and von Neumann entropy are close for holographic states [9], \[S^{\epsilon}_{max}(A)=S(A)+a\frac{\log(1/\epsilon)}{\sqrt{G_{N}}} \tag{4.3}\] here \(a\) is independent of \(G_{N}\), but can depend on the state \(\rho\). This holds for holographic states as a consequence of the Renyi entropy taking the form, \[S_{\alpha}=\frac{s_{\alpha}}{G_{N}} \tag{4.4}\] where \(s_{\alpha}\) is independent of \(G_{N}\), but is otherwise an arbitrary function of \(\alpha\) and the choice of state. While [9] does not explicitly consider holographic BCFT states, it is straightforward to see that 4.4 continues to hold for these states, and hence we recover 4.3 as well in our setting.8 Footnote 8: One can see this by noting that the cosmic brane prescription of [32] applies to our setting, which yields a value of the Renyi entropy of the form 4.4. A comment is that in our setting the cosmic brane will anchor to the two ETW branes, so the Renyi entropy’s are UV divergence free. We can also note that the form 4.4 has been found directly from a CFT perspective in [25], though only for intervals ending on one CFT boundary (rather than anchored on both ends to CFT boundaries, as in our setting). Let the state of the two BCFTs appearing in our thought experiment be \(|\Psi\rangle_{C_{L}C_{R}}\). Consider the reduced density matrix \(\Psi_{C_{L}}\). Then as a consequence of 4.3, we have that there is a state \(\Psi^{\prime}_{C_{L}}\) with rank \(S^{\epsilon}_{max}(C_{L})\) which is \(\epsilon\) close in trace distance to the state \(\Psi_{C_{L}}\). 
Then using the Fuchs-Van-de-Graff inequality, \[1-\sqrt{F(\rho,\sigma)}\leq\frac{1}{2}||\rho-\sigma||_{1}\leq\sqrt{1-F(\rho, \sigma)} \tag{4.5}\] we have that \(\sqrt{F}(\Psi_{C_{L}},\Psi^{\prime}_{C_{L}})\geq 1-\epsilon/2\). Using Uhlmann's theorem [33] we have \[1-\epsilon/2\leq\sqrt{F(\Psi_{C_{L}},\Psi^{\prime}_{C_{L}})}=\max_{\Phi_{C_{L} C_{R}}}|\,\langle\Phi|\Psi\rangle_{C_{L}C_{R}}\,|. \tag{4.6}\] Figure 7: General form of a non-local computation, with systems labelled. Note that \(V^{L}\) and \(V^{R}\) are isometries. Thus there exists a purification of \(\Psi^{\prime}_{C_{L}}\), call it \(\ket{\Psi^{\prime}}_{C_{L}C_{R}}\), which has \(|\bra{\Psi^{\prime}}\ket{\Psi}_{C_{L}C_{R}}|\geq 1-\epsilon/2\), or using the Fuchs van de Graff inequality again we have \[||\ket{\Psi^{\prime}}-\ket{\Psi}\ket{|}\leq\sqrt{\epsilon/2}. \tag{10}\] Because this purifies \(\Psi^{\prime}_{C_{L}}\) we know \(\ket{\Psi^{\prime}}_{C_{L}C_{R}}\) has Schmidt rank \(S^{\epsilon}_{max}(C_{L})\equiv E\), so \[\ket{\Psi^{\prime}}_{C_{L}C_{R}}=\sum_{i=1}^{2^{E}}\sqrt{\lambda_{i}}\ket{i}_{ C_{L}}\ket{i}_{C_{R}}. \tag{11}\] Next, we define finite dimensional Hilbert spaces \(\mathcal{H}_{C^{\prime}_{L}}\otimes\mathcal{H}_{C^{\prime}_{R}}\) whose basis vectors are identified with the Schmidt vectors of \(\ket{\Psi^{\prime}}_{C_{L}C_{R}}\). These each have dimension \(2^{E}\). In our original thought experiment, each of \(\mathrm{Alice}_{L}\) and \(\mathrm{Alice}_{R}\) act on their inputs plus their respective BCFTs to insert the inputs into the bulk. In our finite dimensional model, we should first locally prepare the BCFT states from the finite dimensional state \(\ket{\Psi^{\prime}}\). This is possible because the finite dimensional entangled state shares the same Schmidt spectrum. Each Alice then insert the inputs into the bulk. This is captured by the application of the first round operations \(\mathbf{V}^{L}_{A_{L}C_{L}\to C_{L,L}C_{L,R}}\otimes\mathbf{V}^{R}_{A_{R}C_{R} \to C_{R,L}C_{R,R}}\). Next, The Alice's divide the BCFTs into \(X>0\) and \(X<0\) portions, and communicate half of the degrees of freedom. Naively this step presents a challenge for our finite dimensional model to capture, since the entanglement across the \(X=0\) cut in the CFT will be infinite, so we can't prepare states sharing approximately the same spectrum in a finite dimensional Hilbert space. However, a key feature of our finite dimensional model is that we allow isometries in the first round, and in particular systems \(C_{L,L}\), \(C_{L,R}\) and \(C_{R,L}\), \(C_{R,R}\) can be arbitrarily large. Thus we can choose the \(X_{+}\) and \(X_{-}\) regions to be separated by a small cut-off \(\delta\), and our protocol captures the entanglement across this cut-off arbitrarily well. The dimension of the needed system will grow as \(\delta\to 0\), but because we allow isometries, this is captured by our protocol above. Note that there is no dependence on the cut-off in the amount of entanglement used, which is fixed by the (finite) black hole area. ## 5 Computation in global AdS\({}_{2+1}\) In this section we study global AdS\({}_{2+1}\), and show that a similar claim to the two sided case holds: computation happening in the bulk of AdS can be reproduced as a non-local quantum computation, using entanglement controlled by the size of the scattering region. 
Towards showing this the main obstruction is that interactions in the bulk of global AdS\({}_{2+1}\) are most naturally related to augmented non-local computation, rather than non-local computation directly. This is related to the appearance of the "side-regions" that we define below, from which the extra systems discussed in figure 0(c) originate. To get around this, we approximate the original CFT with a pair of entangled BCFTs. The BCFTs live on the same geometry as the original CFT but with the side regions removed. We show that the dual of the BCFTs includes enough of the bulk geometry that they can still be used to perform the computation, and that they share similar entanglement to the relevant subregions of the original CFT. Quantitatively, they share entanglement that becomes close to the area of the scattering region as the scattering region becomes large.9 Footnote 9: This justifies the assumption made in [10] that the entanglement between the inputs is responsible for supporting the computations happening inside the scattering region in the limit where the side regions are becoming small. It also gives a quantitative bound on how much more entanglement can be necessary in the setting of large side regions. ### Bulk and boundary geometry of \(2\to 2\) task In this section we give the geometrical set-up of input and output points at the boundary of AdS\({}_{2+1}\). To begin, consider the picture in figure 7(a), showing an AdS\({}_{2+1}\) spacetime. In figure 7(b) we show the dual boundary spacetime. The basic observation is that, choosing four points in the boundary, we can have it happen that one can meet in the bulk but not Figure 8: **(a)** Global AdS\({}_{2+1}\) with an example choice of input and output points. These points have a non-empty scattering region in the bulk, but an empty scattering region in the boundary. **(b)** Boundary view of AdS\({}_{2+1}\) with an illustration of the input regions \(\mathcal{V}_{1},\mathcal{V}_{2}\) and side regions \(\mathcal{X}_{1},\mathcal{X}_{2}\). **(c)** Causal structure present in the bulk of AdS, with the choice of input and output regions shown in figure (a). **(d)** Causal structure present in the boundary of AdS with the same choice of input and output regions. in the boundary. To discuss this more carefully, define the bulk _scattering region_, \[J[c_{1},c_{2}\to r_{1},r_{2}]\equiv J^{+}(c_{1})\cap J^{+}(c_{2})\cap J^{-}(r_{1} )\cap J^{-}(r_{2}). \tag{108}\] The observation is that we can have geometries and choices of points \(c_{1},c_{2},r_{1},r_{2}\) such that \(J[c_{1},c_{2}\to r_{1},r_{2}]\) is non-empty, while the object \(\hat{J}[c_{1},c_{2}\to r_{1},r_{2}]\) obtained by replacing each bulk light cone with its boundary restriction is empty. We introduce some definitions to capture relevant aspects of the boundary geometry. Define the _input regions_ \[\mathcal{V}_{1} =\hat{J}^{+}(c_{1})\cap\hat{J}^{-}(r_{1})\cap\hat{J}^{-}(r_{2}), \tag{109}\] \[\mathcal{V}_{2} =\hat{J}^{+}(c_{2})\cap\hat{J}^{-}(r_{1})\cap\hat{J}^{-}(r_{2}). \tag{110}\] Further, define the _side regions_, \[\mathcal{X}_{1} =\hat{J}^{-}(r_{1})\cap[\mathcal{V}_{1}\stackrel{{ \text{\scalebox{0.8}{$\frown$}}}}{{\cup}}\mathcal{V}_{2}], \tag{111}\] \[\mathcal{X}_{2} =\hat{J}^{-}(r_{2})\cap[\mathcal{V}_{1}\stackrel{{ \text{\scalebox{0.8}{$\frown$}}}}{{\cup}}\mathcal{V}_{2}], \tag{112}\] where \(\bar{\mathcal{A}}\) denotes the spacelike complement of spacetime region \(\mathcal{A}\). See figure 7(b) for an illustration of these regions. 
Figure 8(c) shows the causal features of the bulk schematically, in the form of a causal network. The central vertex represents the (non-empty) scattering region. In figure 8(d), we show the causal network describing the boundary. Notice that in the boundary, we don't have exactly the usual network describing a non-local quantum computation. This is because of the regions \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) that sit between the regions \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\). We call this the _augmented NLQC_ scenario. This complicates the discussion, as it is only correlation between \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) that we will have control over, while e.g. \(I(V_{1}X_{1}:V_{2}X_{2})\) is divergent. We recall some further geometric facts established in [3]. First, an interesting observation was made that the scattering region sits inside of the entanglement wedge of \(\mathcal{V}_{1}\cup\mathcal{V}_{2}\). In notation, \[J[c_{1},c_{2}\to r_{1},r_{2}]\subseteq E_{\mathcal{V}_{1}\mathcal{V}_{2}}. \tag{5.6}\]
Figure 9: A scattering region in AdS\({}_{2+1}\). The lower edge is the ridge, \(r\).
Second, we will measure the size of the scattering region in terms of the area of its lower edge, which we call the _ridge_, defined by \[r=\partial J^{+}(c_{1})\cap\partial J^{+}(c_{2})\cap J^{-}(r_{1})\cap J^{-}(r_{2}). \tag{5.7}\] We illustrate the scattering region and the placement of the ridge in figure 9. For the CFT vacuum state, [3] showed that the area of this surface has a simple boundary expression, \[I(V_{1}:V_{2})_{\Psi}=\frac{\text{area}(r)}{4G_{N}}. \tag{5.8}\] Away from the vacuum, this equality is replaced by the inequality \(I(V_{1}:V_{2})_{\Psi}\geq\text{area}(r)/4G_{N}\). Our goal below will be to relate computations happening inside the scattering region to (non-augmented) non-local computations. This connection was argued for based on the heuristic of ignoring the side regions in [2; 3], but here we address this more carefully. Following our discussion of the two sided black hole case, our strategy is to set boundary conditions at the edge of the \(\mathcal{V}_{1}\), \(\mathcal{V}_{2}\) regions, and remove the side region degrees of freedom from our system. This replaces our initial CFT with two entangled BCFTs that (in a sense we make precise) approximate the original CFT, and in particular share much of the original bulk geometry. In fact, we will see that they share enough of the original geometry so as to still support the same bulk computations.
### ETW brane solutions
The particular solutions we need can be prepared via a Euclidean path integral. Recall that pure global AdS\({}_{2+1}\) is prepared by the Euclidean path integral on the infinite cylinder. After a Wick rotation, and using global coordinates, the resulting solution is described by the metric \[ds^{2}=\frac{\ell^{2}}{\cos^{2}(r)}\left(-dt^{2}+dr^{2}+\sin^{2}(r)d\phi^{2}\right). \tag{5.9}\]
Figure 10: Lorentzian global AdS\({}_{2+1}\) showing a solution with two ETW branes. These solutions can be prepared via the Euclidean path integral, as studied in appendix D. The solution shown is the minimal action one when the intervals cut out by the brane have angular radius below \(24^{\circ}\).
We modify the Euclidean path integral by cutting out two (roughly) disk shaped regions, centered at \(t=0\) and located antipodally.
In appendix D we solve for the brane solution explicitly.10 For \(T=0\), we find the trajectory of each brane is described by
Footnote 10: These brane solutions were also obtained in [34], although there multi-brane solutions are obtained by sewing together single brane Lorentzian solutions, and no explicit path integral giving the multi-brane solution is given. We prepare the two brane solution to allow studying the transition between a connected and disconnected geometry.
\[\cos(t-t_{0})=\frac{1}{\cos(\Delta\phi)}\cos(\phi-\phi_{0})\sin r \tag{5.10}\] and the interior geometry is a portion of the pure AdS geometry described by equation 5.9. We construct these solutions in detail in appendix D. Similar solutions for general values of \(T\) exist. The parameter \(\Delta\phi\) is the angular radius of the brane at \(t=t_{0}\). We can intersect the interior regions of several branes, placed at different \(\phi_{0}\) and potentially with different widths, to construct multibrane solutions. Note that if we make the BCFT intervals too small, a new solution will dominate the path integral where the branes connect in the opposite way, so that the two BCFTs are not connected through the bulk geometry. We will assume we are in the configuration shown, with a connected geometry. In appendix D, we find that we are in the connected geometry whenever the branes occupy intervals with angular radius \(\Delta\phi\lesssim 24^{\circ}\). An important view on this brane is its intersection with the \(t=t_{0}\) slice. This is given by \[1=\frac{1}{\cos(\Delta\phi)}\cos(\phi-\phi_{0})\sin(r). \tag{5.11}\] We can observe that this is the same trajectory as the RT surface anchored to the same endpoints.11 Further, we can observe that the brane sits inside the causal future of this \(t=t_{0}\) surface -- this follows from a simple calculation showing that the constant \(\theta\) curves, which foliate the full brane, are everywhere timelike or null.
Footnote 11: Compare e.g. to 6.1.26 in [35], after the coordinate transformation \(\rho=\tan r\). This is true for zero tension branes and when comparing to extremal surfaces in pure AdS, but not in general.
The trajectory of the brane's endpoint on the boundary is found by setting \(r=\pi/2\) in equation 5.10, which gives \[\cos(t-t_{0})=\frac{1}{\cos(\Delta\phi)}\cos(\phi-\phi_{0}). \tag{5.12}\] We are interested specifically in a setting with two branes, one centered at \(t=\pi/4\), \(\phi=\pi/2\), the other at \(t=\pi/4\), \(\phi=-\pi/2\). We find that these intersect at \[b_{1}=(t=3\pi/4,\phi=0), \tag{5.13}\] \[b_{2}=(t=3\pi/4,\phi=\pi). \tag{5.14}\] We show the boundary brane trajectories in figure 11. These BCFT geometries were studied in [34; 36], where it was pointed out that they approximate the original CFT state in a precise sense. In particular, as the radius of the circles on which boundary conditions have been set goes to zero, the reduced density matrix on the remaining degrees of freedom approaches the density matrix of the corresponding subsystems of the CFT. Because these BCFT states approximate the original CFT state while having the side regions removed, they are natural candidates to use to connect computation in the bulk to (un-augmented) non-local computation. We take this up in the next section.
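As a quick consistency check on the trajectory above, one can verify numerically that the brane worldvolume it defines is nowhere spacelike, which is what places it to the causal future of the \(t=t_{0}\) slice. The sketch below samples points on the worldvolume for an illustrative value of \(\Delta\phi\) (the value \(0.35\) is an assumption made only for this check; the result is independent of \(t_{0}\)) and evaluates the determinant of the induced metric.

```python
import numpy as np

# The brane worldvolume cos(t - t0) = sec(dphi) cos(phi - phi0) sin(r),
# parametrized by (r, phi) on the branch t = t0 + arccos(u). Up to the
# positive factor l^2 / cos^2(r), the induced metric is
#   g_rr = 1 - t_r^2,  g_pp = sin^2(r) - t_phi^2,  g_rp = -t_r * t_phi,
# and a timelike worldvolume has det g <= 0.

rng = np.random.default_rng(0)
dphi, phi0 = 0.35, 0.0          # illustrative brane parameters (assumption)

worst = -np.inf
for _ in range(20000):
    r = rng.uniform(0.05, np.pi / 2 - 0.05)
    phi = rng.uniform(-np.pi, np.pi)
    u = np.cos(phi - phi0) * np.sin(r) / np.cos(dphi)
    if not (1e-3 < u < 1 - 1e-3):      # keep to points on the brane, away from its edge
        continue
    s = np.sqrt(1 - u**2)
    t_r = -np.cos(phi - phi0) * np.cos(r) / (np.cos(dphi) * s)
    t_p = np.sin(phi - phi0) * np.sin(r) / (np.cos(dphi) * s)
    det = (1 - t_r**2) * (np.sin(r)**2 - t_p**2) - (t_r * t_p)**2
    worst = max(worst, det)

print("largest det(induced metric) sampled:", worst)   # negative: nowhere spacelike
```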
### Non-local computation and the scattering region In this section we give a protocol that allows whatever computations that can happen in the scattering region to be performed non-locally, using a resource system with mutual information controlled by the size of the scattering region. This holds so long as the computation does not induce too large of a backreaction, and the angular radius of the extra regions is smaller than the threshold needed to obtain a connected brane geometry, \(\Delta\phi\lesssim 24^{\circ}\). To begin, suppose some quantum task \(T\) can be completed in the bulk, with some particular placement of input and output regions. This placement of points defines spacetime regions \(\mathcal{V}_{1}\), \(\mathcal{V}_{2}\) in the boundary. We take the background state to be the CFT vacuum, corresponding to pure AdS in the bulk. The bulk picture has input systems \(A_{1}\), \(A_{2}\) meet in the scattering region, interact, then exit the scattering region and travel towards the output points. This can be arranged by acting with unitaries \(\mathbf{V}_{1}\), \(\mathbf{V}_{2}\) on regions \(\mathcal{V}_{1},\mathcal{V}_{2}\) to insert the inputs and computing device into the bulk, then allowing the CFT to time evolve. From this process, we build a non-local computation protocol that completes the same task, in the non-augmented scenario. Our strategy is to use \(\left|\tilde{\Psi}\right>_{V_{1}V_{2}}\) as the resource system, which we define as the dual state to the two brane solution discussed in the last section. Figure 11: Boundary picture, where the \(\mathcal{X}_{i}\) regions have been removed by replacing them with boundary conditions on the \(\mathcal{V}_{i}\) regions. The brane trajectories are shown in blue. Note that the region behind the branes is empty — no spacetime or degrees of freedom are associated with it. These two boundary spacetime regions are connected through the bulk geometry. We choose the size and placement of the branes such that the side regions \(\mathcal{X}_{i}\) are cut out, as shown in figure 11. We denote the spacetime dual to \(|\Psi\rangle_{V_{1}V_{2}X_{1}X_{2}}\) as \(\mathcal{M}\), and the spacetime dual to \(\left|\tilde{\Psi}\right\rangle_{V_{1}V_{2}}\) as \(\tilde{\mathcal{M}}\). It will be convenient to view \(\tilde{\mathcal{M}}\) as a subset of \(\mathcal{M}\), whose boundary is defined by the brane trajectory. To describe the protocol, we will adopt the operational language briefly used in section 3.1, and consider \(\text{Alice}_{L}\) and \(\text{Alice}_{R}\), who will each initially hold one of the BCFTs. To construct the protocol, first observe that \(\tilde{\mathcal{M}}\) contains the scattering region. To see this, notice that the entangling surface \(\gamma_{\mathcal{V}_{1}\mathcal{V}_{2}}\) in the original geometry sits exactly on the two branes, along the moment of time symmetry. Because the branes follow timelike trajectories, they sit outside the domain of dependence of \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\). Finally, recall from section 5.1 that the scattering region sits inside of the entanglement wedge \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\), so that it also sits inside of \(\tilde{\mathcal{M}}\). The first round of the protocol then is to take the state \(\left|\tilde{\Psi}\right\rangle_{V_{1}V_{2}}\) and act on it with \(\mathbf{V}_{1}\otimes\mathbf{V}_{2}\) -- the same unitaries as in the original state \(\Psi\). 
In the bulk picture, the same computation happens inside of the scattering region as in the original geometry, and the outputs begin moving towards the output locations. In the communication round, \(\text{Alice}_{1}\) and \(\text{Alice}_{2}\) redistribute subsystems of the CFT according to the picture in figure 12. Label the systems held by \(\text{Alice}_{i}\) after the communication round by \(W_{i}\), and the corresponding subregions by \(\mathcal{W}_{i}\). It remains to understand if, in the second round, the output systems \(B_{i}\) can be recovered by \(\text{Alice}_{i}\). To understand this, we should ask where the entanglement wedges of the \(\mathcal{W}_{i}\) subregions sit. See figure 13 for an illustration of the surfaces appearing in this paragraph. Considering \(\gamma_{\mathcal{W}_{1}}\), there are two candidate geodesics for this surface: the line \(\gamma^{\prime}_{\mathcal{W}_{1}}\) at constant time connecting these two points straight through the bulk, or a surface that attaches to the Figure 12: In the communication round of the protocol, degrees of freedom on the left future boundary of \(\mathcal{V}_{1}\) and right future boundary of \(\mathcal{V}_{2}\) are passed to \(\text{Alice}_{1}\), these are shown in red. Degrees of freedom on the right future boundary of \(\mathcal{V}_{1}\) and left future boundary of \(\mathcal{V}_{2}\) are passed to \(\text{Alice}_{2}\), these are shown in orange. brane, call it \(\gamma^{\prime\prime}_{\mathcal{W}_{1}}\). Taking the brane radius \(R\) sufficiently small, we expect \(\gamma^{\prime}_{\mathcal{W}_{1}}\) is minimal. Explicitly, in appendix D we find that this happens whenever the angular radius of the extra regions \(\Delta\phi\) is less than \(39^{\circ}\). Since this is larger than the condition we already took to get a connected geometry, this adds no new constraint. Given that the radial line \(\gamma^{\prime}_{\mathcal{W}_{1}}\) defines the wedge of \(\mathcal{W}_{1}\) (and of its complement \(\mathcal{W}_{2}\)), there is a simple way to describe its entanglement wedge. The wedge \(E_{\mathcal{W}_{1}}\) is just the wedge of the interval \((0,\pi)\) at time \(t=\pi/2\) taken in the spacetime \(\mathcal{M}\), then restricted to \(\tilde{\mathcal{M}}\). This placement of entanglement wedges always ensures \(A_{1}\) enters the wedge of \(\mathcal{W}_{1}\), and \(A_{2}\) enters the wedge of \(\mathcal{W}_{2}\). To see this, observe that the two future boundaries of \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\) are the past boundaries of \(E_{\mathcal{W}_{1}}\) and \(E_{\mathcal{W}_{2}}\). Thus, \(A_{1}\), since it travels to \(r_{1}\), enters \(E_{\mathcal{W}_{1}}\), and similarly \(A_{2}\) enters \(E_{\mathcal{W}_{2}}\). We only need to see that this happens before these systems leave \(\tilde{\mathcal{M}}\) by colliding with the branes, which again is true because the brane sits (strictly) outside of \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\). We illustrate this in figure 14. Since \(A_{1}\) sits in \(E_{\mathcal{W}_{1}}\) and \(A_{2}\) in \(E_{\mathcal{W}_{2}}\), \(\mathrm{Alice}_{1}\) and \(\mathrm{Alice}_{2}\) can use entanglement wedge reconstruction to recover the output systems. Further, because we consider input systems living in a subspace small enough not to move the entangling surface, this can be done via a recovery channel which is universal for all states of the input [26]. 
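For readers less familiar with the operational language used here, the sketch below is a toy Python illustration of the round structure of a non-local computation — local first-round operations, one simultaneous communication round, and local second-round recovery — in the simplest possible case: routing one unknown qubit across the cut using a single shared EPR pair (ordinary teleportation). It is only meant to make the round structure concrete; it is not a model of the holographic protocol or of entanglement wedge reconstruction.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Unknown input qubit A, held by Alice_1.
psi = np.array([0.6, 0.8j])
# Resource: one EPR pair on (R1, R2); R1 with Alice_1, R2 with Alice_2.
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, epr)                        # qubit ordering: A, R1, R2

# Round 1 (Alice_1, local): Bell measurement on (A, R1); follow one outcome branch.
bell = np.array([[1, 0, 0, 1],
                 [1, 0, 0, -1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)
outcome = 2
proj = kron(np.outer(bell[outcome], bell[outcome].conj()), I2)
post = proj @ state
post = post / np.linalg.norm(post)

# Communication round: the two classical outcome bits are sent to Alice_2.
b1, b2 = divmod(outcome, 2)

# Round 2 (Alice_2, local): Pauli correction on R2 recovers the input.
correction = (Z if b2 else I2) @ (X if b1 else I2)
post = kron(I2, I2, correction) @ post

# The final global state should be |bell[outcome]>_{A R1} (x) |psi>_{R2}.
target = np.kron(bell[outcome], psi)
print(abs(np.vdot(target, post)))                # 1.0: the qubit now sits with Alice_2
```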
**Entanglement in the BCFT state**
So far, we have established that the state \(\ket{\tilde{\Psi}}_{V_{1}V_{2}}\) suffices to perform a computation in the standard non-local scenario whenever it can be completed in the bulk scattering region. Next, we study how much entanglement is available in \(\ket{\tilde{\Psi}}_{V_{1}V_{2}}\), and how this compares to the entanglement in the original CFT state \(\ket{\Psi}_{V_{1}V_{2}}\).
Figure 13: Connected (a) and disconnected (b) configurations of the minimal surface enclosing the red system shown in figure 12. The orange subsystem has a similar transition and entanglement wedge.
To relate these quantities it is helpful to recall the definition of the _entanglement wedge cross section_, and its relationship to the mutual information. Informally, given two boundary subregions \(\mathcal{A}\), \(\mathcal{B}\), the entanglement wedge cross section is defined by finding the entanglement wedge of \(\mathcal{A}\cup\mathcal{B}\), call it \(E_{\mathcal{A}\mathcal{B}}\), and the minimal area extremal surface \(\gamma\) that divides \(E_{\mathcal{A}\mathcal{B}}\) into a portion homologous to \(\mathcal{A}\) and a portion homologous to \(\mathcal{B}\). The entanglement wedge cross section is then \[E_{W}(A:B)=\frac{\text{area}(\gamma)}{4G_{N}}. \tag{5.15}\] In [37], twice the entanglement wedge cross section was shown to be equal to an entanglement quantity known as the reflected entropy, and denoted \(S_{R}(A:B)\), so that \[S_{R}(A:B)=2E_{W}(A:B). \tag{5.16}\] In [38], it was proven that for holographic states the reflected entropy and the mutual information are related by the inequality, \[S_{R}(A:B)-I(A:B)\geq\frac{\log(2)\ell}{2G_{N}}k \tag{5.17}\] where \(k\) is the number of boundary points of \(\gamma\). In the setting we apply this we have \(k=2\). This becomes an equality in the limit where \(\mathcal{A}\), \(\mathcal{B}\) occupy the entire boundary. Consider the entanglement wedge of \(\mathcal{V}_{1}\) in the BCFT geometry \(\tilde{\mathcal{M}}\). The minimal surface enclosing this will be the one shown in figure 15, which joins the two ETW branes. Because we've considered zero tension branes, the brane trajectory corresponds to the minimal surfaces that define \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\) in the original geometry \(\mathcal{M}\). The area of the minimal surface in \(\tilde{\mathcal{M}}\) is then equal to the entanglement wedge cross section measured in the original geometry \(\mathcal{M}\). This leads to \[I(V_{1}:V_{2})_{\tilde{\Psi}}=2E_{W}(V_{1}:V_{2})_{\Psi}. \tag{5.18}\]
Figure 14: Cross section through the bulk of \(\tilde{\mathcal{M}}\). Branes \(b_{1}\), \(b_{2}\) travel in a timelike direction, meeting somewhere to the future of \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\). The shaded region is a cross section of the scattering region. Outputs pass through its two future boundaries, travelling towards \(r_{1}\) and \(r_{2}\) respectively. The entangling surface \(\gamma_{\mathcal{W}_{1}}=\gamma_{\mathcal{W}_{2}}\) sits at the future edge of \(E_{\mathcal{V}_{1}\mathcal{V}_{2}}\), so these outputs pass into \(E_{\mathcal{W}_{1}}\), \(E_{\mathcal{W}_{2}}\), and do so before reaching the ETW branes.
This means that the mutual information in the BCFT state is equal to the reflected entropy in the original CFT state, which from equation 5.16 is twice the entanglement wedge cross section, giving \[I(V_{1}:V_{2})_{\tilde{\Psi}}=S_{R}(V_{1}:V_{2})_{\Psi}.
\tag{5.19}\] Finally, from [38] we have that for large regions \(\mathcal{V}_{1}\), \(\mathcal{V}_{2}\), the reflected entropy and the mutual information become close, in the sense that the bound 5.17 approaches an equality, so as \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) become large we have \[I(V_{1}:V_{2})_{\tilde{\Psi}}=S_{R}(V_{1}:V_{2})_{\Psi}\to I(V_{1}:V_{2})_{\Psi}+\frac{\log(2)\ell}{G_{N}}. \tag{5.20}\] We find that the mutual information in the original CFT state and the introduced BCFT state are becoming close, up to an additive constant. Recapping this discussion, we saw in the last section that the entanglement in the BCFT state \(\tilde{\Psi}\) suffices to reproduce in the non-local form any computations happening inside the scattering region. The mutual information in the BCFT state is expressed in terms of the original CFT state as the reflected entropy \(S_{R}(V_{1}:V_{2})_{\Psi}\). In the limit of large scattering regions, \(S_{R}(V_{1}:V_{2})_{\Psi}\) is equal to the mutual information in the original CFT state, up to an additive constant, so in that limit the mutual information in the original CFT state controls the needed entanglement to support the computation happening inside the scattering region.
### Extension to higher dimensional global AdS
In higher dimensional global AdS we can also relate bulk interactions to non-local computation, though (as happens with the two sided case in higher dimensions) we lose some of our quantitative statements. For concreteness, let's consider AdS\({}_{3+1}\)/CFT\({}_{2+1}\).
Figure 15: A constant time slice of AdS\({}_{2+1}\) with two ETW branes (shown in gray). The minimal surface (blue) homologous to \(\mathcal{V}_{1}\), \(\gamma_{\mathcal{V}_{1}}\), connects the two branes.
We show a single time slice of AdS\({}_{3+1}\) in figure 16(a). A basic obstruction to applying the ideas used above in this setting is that whenever we have in the bulk \[J_{12\to 12}=J^{+}(c_{1})\cap J^{+}(c_{2})\cap J^{-}(r_{1})\cap J^{-}(r_{2})\neq\emptyset \tag{5.21}\] we will also have \(\hat{J}_{12\to 12}\neq\emptyset\). Thus in this setting scattering in the bulk comes along with scattering in the boundary, and we can't immediately argue entanglement in the boundary is needed. However, our understanding of the AdS/CFT dictionary suggests the correct picture for how the boundary reproduces the bulk interaction is not that it is implemented directly as a local interaction in the boundary scattering region. Instead, since the bulk scattering region is recorded only into large boundary regions, we still expect some entanglement based boundary process is supporting this bulk physics. We can make this intuition precise using a brane construction. To construct our brane solution, we replace the CFT with two BCFTs, defined on caps restricted to \(\phi<\pi/2-\epsilon\), \(\phi>\pi/2+\epsilon\), and look for brane solutions that end on \(\phi=\pi/2\pm\epsilon\). This corresponds to a CFT with a strip of angular width \(2\epsilon\) around the equator of the sphere removed. As the strip becomes thin enough we expect a solution in which the bulk geometry connects the two caps, and that we see this behaviour in any number of dimensions.12 We will assume the strip is chosen small enough such that this is the case, so that the brane solution takes on the topology shown in figure 16(b).
Footnote 12: This was also discussed in detail in [36].
We can use this brane geometry to argue interactions happening in the bulk of global AdS\({}_{3+1}\) can be reproduced as non-local quantum computations as follows.
We take the two caps on which our BCFTs are defined, call them \(\mathcal{C}_{L}\) and \(\mathcal{C}_{R}\), to be our input regions, and then define \[\mathcal{R}_{+}=\{q:0<\theta<\pi,t=T\}\] \[\mathcal{R}_{-}=\{q:-\pi<\theta<0,t=T\} \tag{5.22}\] to be the output regions.
Figure 16: (a) A constant time slice of global AdS\({}_{2+1}\). (b) The CFT is replaced with two BCFTs defined on caps. The brane is expected to take on a connected configuration as shown when the caps are made large enough.
We define the scattering region as before, \[J_{LR\to+-}=J^{+}(\mathcal{C}_{R})\cap J^{+}(\mathcal{C}_{L})\cap J^{-}(\mathcal{R}_{+})\cap J^{-}(\mathcal{R}_{-}) \tag{5.23}\] which captures the portion of the bulk where interactions can be reproduced in the non-local form. Explicitly, to do this one inserts input systems into the bulk by acting on the input regions, and has them interact and send outputs towards the \(+\) and \(-\) regions. Using the communication round of the non-local computation to exchange degrees of freedom, the second round operations then act on the \(+\) and \(-\) regions to recover the outputs. It remains to argue that, at least for \(\epsilon\), \(T\) small enough, the scattering region is non-empty. To see this, first notice that if we make \(\epsilon\), \(T\) small enough the extremal surface attached to \(\theta=0\) and \(\theta=\pi\), \(t=T\) will connect the two BCFTs. We claim that whenever this is the case at least the point \(p\) at \(t=r=0\) is inside the scattering region. To see this, note that the extremal surface enclosing the left (right) BCFT at \(t=0\) is the \(\phi=\pi/2\) plane, which includes \(p\), so \[p\in J^{+}(\mathcal{C}_{L})\cap J^{+}(\mathcal{C}_{R}). \tag{5.24}\] It remains to show \(p\) is inside the past of the wedges \(E_{\mathcal{R}_{-}}\), \(E_{\mathcal{R}_{+}}\). To see this, define the regions \[\mathcal{R}^{\prime}_{+}=\{q:0<\theta<\pi,t=-T\}\] \[\mathcal{R}^{\prime}_{-}=\{q:-\pi<\theta<0,t=-T\} \tag{5.25}\] Notice that the entangling surface for \(E_{\mathcal{R}_{+}}\) must sit at \(t>0\) when it reaches \(r=0\). If it didn't, by symmetry it would cross the entangling surface for \(E_{\mathcal{R}^{\prime}_{+}}\), which can't happen by causality. Similarly, the entangling surface for \(E_{\mathcal{R}_{-}}\) must sit at \(t>0\) when it reaches \(r=0\). But then both entangling surfaces have \(p\) in their past, so \[p\in J^{+}(\mathcal{C}_{L})\cap J^{+}(\mathcal{C}_{R})\cap J^{-}(\mathcal{R}_{-})\cap J^{-}(\mathcal{R}_{+}) \tag{5.26}\] and hence the scattering region is non-empty.
## 6 Discussion
A basic lesson of [2; 3; 15] is that boundary entanglement plays a necessary role in supporting bulk interaction. Non-local computation provides a quantum information theoretic way to understand this, and in particular to explore the many questions around what interactions can be supported by entanglement in this way, and conversely what holography implies about non-local computation. In this work, we extended this basic lesson to higher dimensions and two sided black holes. We briefly conclude with a few comments on future directions and relationships to other work.
**Complexity and the black hole singularity**
There is a general expectation that high complexity unitaries should require large entanglement to implement in the non-local form. Assuming this, our construction would imply that high complexity unitaries are forbidden from being implemented inside reasonably small sized subregions of the black hole.
From a bulk perspective, this is an unsurprising claim: we might expect high complexity operations to require large physical time to implement,13 and there is finite time before reaching the singularity.
Footnote 13: For another perspective on this, see [39].
What is perhaps more interesting is that the non-local computation perspective suggests a boundary understanding of the appearance of this finite time, and hence of the appearance of this singularity. We suggest that the CFT is limited by its finite entanglement to only allow low complexity computations to happen, and the bulk dual "geometrizes" this constraint via the appearance of the bulk singularity, which ends bulk time and limits computation.14
Footnote 14: We thank Steve Shenker for comments made to us in this direction.
**Efficiency of non-local computation and holography**
An interesting tension between the efficiency of the current best non-local computation strategies and expectations of what it should be possible to compute in a subregion of the bulk was raised in [3] and explored in [10]. One way out of this tension was, previously, to recall that holography is most naturally related to 'augmented' non-local computations, and that entanglement between the input regions is actually unnecessary in these augmented non-local computations. While it was argued earlier that the standard non-local computation scenario is the relevant one on heuristic grounds in [3; 10], and by going to the lattice setting in [5], our construction gives another sharp setting where the efficiency of existing non-local computation protocols is in tension with having reasonable bulk computations. Another route out of this tension was previously to point to the connection between non-local computation and holography being most precise in low dimensions, where it would be less surprising if bulk interactions were severely limited. This route out of the tension is now also closed. Because non-local quantum computations are cheating strategies in the cryptographic setting of position-verification [40], the suggestion from holography that many computations should be efficiently implementable has potential practical implications. Conversely, an interesting possibility is to use constraints on non-local computation to constrain bulk physics, and in particular constrain the complexity of computation happening in the bulk [10]. Our construction further supports that such entanglement constraints should constrain bulk physics.
**Non-local computation with fixed second round**
As noted in section 3.1, we related bulk interactions to non-local computations where the second round is fixed, in the sense that the second round doesn't depend on the choice of interaction or on the inputs. Non-local computation with a fixed second round has already appeared in the quantum information literature. In [41] the authors assumed a fixed second round and derived interesting new constraints on non-local computation, under some mathematical conjectures on type constants in Banach spaces. Fixed second rounds also appear naturally elsewhere in cryptography [42]15. We were not able to connect the constraints of [41] directly to holography and our setting, but this work does suggest that requiring the second round to be fixed may allow for stronger constraints on bulk computation to be understood.
Another comment is that all non-local computation protocols devised so far do involve fixed second rounds, and the observation here is just that protocols built from studying the holographic setting also share this property. Footnote 15: In this reference the analogous idea is of “universal reconstruction”. **Sub-AdS scale scattering regions** A basic limitation appearing here and in [5] to connecting bulk interaction and (non-augmented) non-local computation is the need to restrict to bulk regions of at least AdS scale. It would be interesting to explore further if this limitation can be overcome, which would provide a quantum information theoretic point of view on the emergence of sub-AdS local interactions in AdS/CFT. To construct such a situation within our setting, we would need to find geometries where our branes can come within an AdS distance of each other while remaining the dominant saddle in the path integral, and while keeping the connected (non-brane anchored) extremal surface minimal. Towards keeping the connected brane geometry dominant, we could consider variants of our geometries involving charged black hole and charged branes, and we could consider branes with non-zero (and perhaps differing) tensions, and we could adjust the black hole temperature, which here was implicitly fixed. This would give a richer phase space for the phase transition from connected to disconnected branes, potentially allowing the branes to come within a sub-AdS distance of each other while remaining disconnected. To keep the connected extremal surface minimal, we could consider adding a dilaton field to the branes, which can be used to raise the generalized entropy associated with minimal surfaces ending on the brane [43]. **ETW brane geometries and information processing** The work [36] proposes entangled BCFTs as an alternative set of degrees of freedom that, prepared in appropriate states, can approximate holographic CFTs. An advantage to this alternative description is the finite entanglement among subsystems. Combined with the perspective of [9] on compressing holographic states, this allowed us to move from a description of a quantum information processing protocol in a holographic CFT, involving infinite dimensional systems, to a (approximate) description using only finite dimensional systems. Finding a finite dimensional description of these holographic protocols was discussed in [5] under an assumption the holographic CFT is well approximated by a lattice system, and more commonly relating AdS/CFT to quantum information processing protocols has been discussed in the context of tensor network toy models. AdS/BCFT combined with compression results has a set of advantageous features not reproduced in those settings: it gives a concrete model for what bulk physics is being captured (the original geometry, ended by an ETW brane), can be justified and studied directly from the standpoint of the Euclidean path integral, and has top-down realizations. Moving beyond non-local computation, this seems a promising approach to relating other quantum information processing protocols to holography. Another recent example of AdS/BCFT being used to model interesting information theoretic manipulations of holographic CFTs is given in [44; 45]. **Acknowledgements:** We thank Patrick Hayden, David Perez-Garcia, Shreya Vardhan, Henry Lin, Stefano Antonini, and Raghu Mahajan for helpful discussions. 
AM is supported by the Simons Foundation It from Qubit collaboration, a PDF fellowship provided by Canada's National Science and Engineering Research council, and by Q-FARM. MX is supported by an NSF Graduate Research Fellowship and the Simons Foundation. ## Appendix A Coordinate systems We summarize the coordinate systems used in this article. We describe our coordinates and their relationships using the embedding space formalism. In particular, our coordinates are parameterizations of the surface \(X^{A}X_{A}=X_{0}^{2}-X_{1}^{2}-X_{2}^{2}+X_{3}^{2}=\ell^{2}\), in the metric \(\text{diag}(1,-1,-1,1)\). Lorentzian Poincare AdS\({}_{2+1}\) is described by coordinates. \[X_{0} =\frac{z}{2}\left(1+\frac{\ell^{2}+x^{2}-t^{2}}{z^{2}}\right)\] \[X_{1} =\frac{z}{2}\left(1-\frac{\ell^{2}-x^{2}+t^{2}}{z^{2}}\right)\] \[X_{2} =\ell\frac{x}{z}\] \[X_{3} =\ell\frac{t}{z} \tag{100}\] Lorentzian global AdS\({}_{2+1}\) is described by \[X_{0} =\ell\frac{\cos(t)}{\cos(r)}\] \[X_{1} =\ell\tan(r)\sin(\phi)\] \[X_{2} =\ell\tan(r)\cos(\phi)\] \[X_{3} =\ell\frac{\sin(t)}{\cos(r)} \tag{101}\] The global, Lorentzian coordinates of the planar BTZ black hole are \[X_{0} =\ell\cos(s)\sec(w)\cosh(X)\] \[X_{1} =\ell\cos(s)\sec(w)\sinh(X)\] \[X_{2} =\ell\tan(w)\] \[X_{3} =\ell\sin(s)\sec(w) \tag{102}\] One exterior region of the planar BTZ black hole is covered by the coordinates \[X_{0} =\ell\sqrt{1+\frac{\rho^{2}}{\ell^{2}}}\cosh X\] \[X_{1} =\ell\sqrt{1+\frac{\rho^{2}}{\ell^{2}}}\sinh X\] \[X_{2} =\rho\cosh t\] \[X_{3} =\rho\sinh t \tag{112}\] ## Appendix B Brane in Poincare AdS\({}_{2+1}\) ### Single brane solution The simplest brane solution we study is dual to the Euclidean path integral shown in figure (a)a. There, we are considering the Euclidean CFT path integral on the plane with no operator insertions, and a boundary condition set at \(x^{2}+\tau^{2}=R^{2}\). We will choose this boundary condition such that the boundary entropy is zero. The bulk dual then will be a zero tension brane extending into the bulk and meeting the asymptotic boundary at \(x^{2}+\tau^{2}=R^{2}\). The solution was studied already in [46], who found a Euclidean Poincare AdS\({}_{3}\) bulk metric \[ds^{2}=\frac{\ell^{2}}{z^{2}}(dx^{2}+d\tau^{2}+dz^{2}) \tag{113}\] and brane trajectory defined by \[x^{2}+z^{2}+\tau^{2}=R^{2}. \tag{114}\] Figure 17: (a) The Euclidean CFT path integral with boundary conditions set along a disk at \(x^{2}+\tau^{2}=R^{2}\). (b) The dual bulk geometry, with a zero tension brane at \(x^{2}+\tau^{2}+z^{2}=R^{2}\). Variations of this solution, obtained by coordinate transformations, adding a second circular boundary, or Wick rotating appear throughout this article. One Lorentzian brane solution we will use is \[x^{2}+z^{2}-t^{2}=R^{2}. \tag{110}\] With the metric being Lorentzian Poincare AdS\({}_{2+1}\). This is obtained by Wick rotating the \(\tau\) coordinate. ### Minimal surfaces Note that for \(R=1\) these results can be extracted from [46]. We are just modifying their calculation to allow arbitrary \(R\), and showing the calculation directly in Lorentzian signature. We start with minimal surfaces in the \(t=0\) slice of Lorentzian Poincare AdS. The brane is sitting at \[x^{2}+z^{2}=R^{2}. \tag{111}\] Consider an interval extending from the brane to \(x=L\). The minimal surface extending from this point is a portion of a circle, meeting the brane orthogonally. In figure 18, we do some elementary geometry to determine the radius of this circle and the angle it extends over. 
We find \[r_{H}=\frac{L^{2}-R^{2}}{2L},\] \[\tan\alpha=\frac{2LR}{L^{2}-R^{2}}. \tag{112}\] Thus the geodesic is given parametrically by \[x(\theta) =L-r_{H}+r_{H}\cos(\theta),\] \[t(\theta) =0,\] \[z(\theta) =r_{H}\sin(\theta). \tag{113}\] Figure 18: The \(t=0\) slice of Euclidean Poincare AdS\({}_{3}\). An ETW brane (gray) centered at \(x=0\) sits at \(x^{2}+z^{2}=R^{2}\). A minimal surface (blue) anchored to \(x=L\) is a portion of a semi-circle and meets the brane orthogonally. where \(0\leq\theta\leq\pi-\alpha\). Calculating the area of these curves, we find, for the area of a geodesic starting at \(x=x_{0},t=0\), \[A[x_{0},0]=\ell\log\left(\frac{x_{0}^{2}-R^{2}}{R\delta}\right) \tag{104}\] Note that here we use a cut-off at \(z=\delta\), which means \(\theta=\delta/r_{H}\). Observing that the brane trajectory is invariant under boosts in the \((x,t)\) plane, we can obtain the area of geodesics starting at general coordinates \(x_{0},t_{0}\) by writing the area in terms of the invariant interval, \[A[x_{0},t_{0}]=\ell\log\left(\frac{x_{0}^{2}-t_{0}^{2}-R^{2}}{R \delta}\right) \tag{105}\] We will make use of this below, by exploiting a local equivalence of the ETW branes in the planar BTZ black hole to this Poincare AdS brane. ## Appendix C Planar BTZ black hole ### Brane trajectories Consider the Euclidean path integral on a finite cylinder, \[ds^{2}=dX^{2}+d\phi^{2}\qquad\phi\in[0,2\pi) \tag{106}\] with boundary conditions set at \(X=\pm X^{0}\). Choose the boundary conditions to have zero boundary entropy, dual to a bulk solution with a zero tension brane. There are two possible bulk solutions corresponding to this Euclidean path integral, which correspond to having either two separate branes ending on each CFT edge, or a single brane that attaches the two edges together. Putting boundary conditions at \(X=\pm X^{0}\), a result of [30] (see equation 103 there) implies the disconnected solution is the minimal action one for \[X^{0}\geq\frac{\pi}{2}. \tag{107}\] With this condition, the bulk metric dual to the path integral on the cylinder is \[ds^{2}=\left(\frac{\rho^{2}}{\ell^{2}}+1\right)dX^{2}+\frac{d \rho^{2}}{\frac{\rho^{2}}{\ell^{2}}+1}+\rho^{2}d\phi^{2}, \tag{108}\] and the brane trajectory is just \[X=\pm X^{0}. \tag{109}\] Assuming we are in the disconnected phase, Wick rotate \(\phi\to it\) in the metric C.3 to obtain \[ds^{2}=\left(\frac{\rho^{2}}{\ell^{2}}+1\right)dX^{2}+\frac{d\rho^ {2}}{\frac{\rho^{2}}{\ell^{2}}+1}-\rho^{2}dt^{2}\] (C.5) This covers one exterior region of the black hole. A global metric for this black hole, covering the full spacetime, is \[ds^{2}=\frac{\ell^{2}}{\cos^{2}(w)}\left(-ds^{2}+dw^{2}+\cos^{2} (s)dX^{2}\right)\] (C.6) where \(s\in[-\pi/2,\pi/2]\), \(w\in[-\pi/2,\pi/2]\). From section A we have that these two coordinate systems are related by \[X_{0}/\ell =\sqrt{1+\frac{\rho^{2}}{\ell^{2}}}\cosh X=\cos(s)\sec(w)\cosh(X)\] \[X_{1}/\ell =\sqrt{1+\frac{\rho^{2}}{\ell^{2}}}\sinh X=\cos(s)\sec(w)\sinh(X)\] \[X_{2}/\ell =\frac{\rho}{\ell}\cosh t=\tan(w)\] \[X_{3}/\ell =\frac{\rho}{\ell}\sinh t=\sin(s)\sec(w)\] (C.7) Note that in particular the \(X\) coordinates in the two spacetimes are identified trivially. 
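The embedding-space parameterizations collected in appendix A, and the coordinate relation just given, can be verified symbolically. The following sympy sketch checks that each parameterization lies on the hyperboloid \(X_{0}^{2}-X_{1}^{2}-X_{2}^{2}+X_{3}^{2}=\ell^{2}\).

```python
import sympy as sp

l, z, x, t, r, phi, s, w, X, rho = sp.symbols('l z x t r phi s w X rho', real=True)

def hyperboloid(X0, X1, X2, X3):
    """The embedding-space invariant X_0^2 - X_1^2 - X_2^2 + X_3^2."""
    return sp.simplify(X0**2 - X1**2 - X2**2 + X3**2)

# Lorentzian Poincare AdS_{2+1}
poincare = hyperboloid(z/2*(1 + (l**2 + x**2 - t**2)/z**2),
                       z/2*(1 - (l**2 - x**2 + t**2)/z**2),
                       l*x/z, l*t/z)

# Lorentzian global AdS_{2+1}
global_ads = hyperboloid(l*sp.cos(t)/sp.cos(r), l*sp.tan(r)*sp.sin(phi),
                         l*sp.tan(r)*sp.cos(phi), l*sp.sin(t)/sp.cos(r))

# Global coordinates of the planar BTZ black hole
btz_global = hyperboloid(l*sp.cos(s)/sp.cos(w)*sp.cosh(X),
                         l*sp.cos(s)/sp.cos(w)*sp.sinh(X),
                         l*sp.tan(w), l*sp.sin(s)/sp.cos(w))

# One exterior region of the planar BTZ black hole
btz_exterior = hyperboloid(l*sp.sqrt(1 + rho**2/l**2)*sp.cosh(X),
                           l*sp.sqrt(1 + rho**2/l**2)*sp.sinh(X),
                           rho*sp.cosh(t), rho*sp.sinh(t))

for expr in (poincare, global_ads, btz_global, btz_exterior):
    print(sp.simplify(expr - l**2))    # expect 0 in each case
```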
### Minimal surfaces We'll use the global coordinates C.6 for the two sided planar BTZ black hole, \[ds^{2}=\frac{\ell^{2}}{\cos^{2}(w)}\left(-ds^{2}+dw^{2}+\cos^{2} (s)dX^{2}\right).\] (C.8) From the coordinate transformations in section A, this is related to Lorentzian Poincare coordinates by the transformation \[t =\ell\,e^{X}\tan(s),\] \[x =\ell\,e^{X}\sec(s)\sin(w),\] \[z =\ell\,e^{X}\sec(s)\cos(w).\] (C.9) We would like to understand when the connected surface (the one that threads through the black hole) is minimal. In global coordinates, the minimal surface attached to \(w=\pm\pi/2\), \(s=T\), and at constant \(X=0\) is \[s =T,\] \[w =\arcsin(\tanh(\lambda)).\] (C.10) The area, setting a cut-off \(w\in[-\pi/2+\epsilon,\pi/2-\epsilon]\), is \[A_{c}[T]=2\ell\log\left(\frac{2}{\epsilon}\right).\] (C.11) Next, we look for the trajectory of the brane anchored geodesic. We find this by starting with the solution in Poincare, and transforming to global coordinates. In Poincare, the endpoint of our geodesic is \[x_{0} =\pm\sec(T)\] \[t_{0} =\tan(T) \tag{112}\] and the brane radius \(R\) is related to the position of the brane in the BTZ black hole geometry by \[X_{0}=\ln R. \tag{113}\] We would like to use the area formula B.8 to determine the area of these brane attached surfaces, but to do so need to know what the \(w=\pi/2-\epsilon\) cutoff translates to in terms of a \(z=\delta\) cutoff. The point \(s=T,X=0,w=\pi/2-\epsilon\) translates to \[z=\delta=\epsilon\sec(T) \tag{114}\] Now using the area formula for the disconnected geodesic, we have \[A_{d}[T]=2\ell\log\left(\frac{2\sinh|X_{0}|\cos(T)}{\epsilon} \right). \tag{115}\] To be in the connected phase, we need \[2\ell\log\left(\frac{2\sinh|X_{0}|\cos T}{\epsilon}\right)-2 \ell\log\left(\frac{2}{\epsilon}\right)\geq 0 \tag{116}\] which simplifies to \[\boxed{\sinh|X_{0}|\geq\sec T}. \tag{117}\] ## Appendix D Global AdS\({}_{2+1}\) ### Euclidean brane trajectories We are interested in finding branes ending the Euclidean global AdS spacetime \[ds^{2}=\frac{\ell^{2}}{\cos^{2}(r)}(d\tau_{G}^{2}+dr^{2}+\sin^{2} (r)d\phi^{2}). \tag{118}\] Heuristically, we would like solutions that have boundary conditions set along roughly circular curves located antipodally on the cylinder. To construct a precise brane geometry, consider the coordinate transformation to Poincare AdS, given by \[\tan(r) =\frac{\sqrt{x^{2}+\tau^{2}}}{z},\] \[\tanh(\tau_{G}) =\frac{z^{2}+x^{2}+\tau^{2}-\ell^{2}}{z^{2}+x^{2}+\tau^{2}+\ell^{ 2}},\] \[\tan(\phi) =\frac{\tau}{x}. \tag{119}\] Or, inverting this, \[\tau =\ell\sin(r)\sin(\phi)e^{\tau_{G}},\] \[x =\ell\sin(r)\cos(\phi)e^{\tau_{G}},\] \[z =\ell\cos(r)e^{\tau_{G}}. \tag{110}\] The resulting Poincare AdS metric is \[ds^{2}=\frac{\ell^{2}}{z^{2}}(dz^{2}+dx^{2}+d\tau^{2}). \tag{111}\] We will choose a very simple placement of the boundary conditions in Poincare AdS, which will map to branes in global AdS with appropriate qualitative features, and in particular will give the brane trajectory stated in the main text as equation 109. In Poincare, we set boundary conditions at \[(x\pm\ell x_{0})^{2}+\tau^{2}=R^{2} \tag{112}\] There is a solution consisting of two separate hemispheres attached to each of these edges, and a solution where the brane connects the two edges. We would like to understand when the disconnected solution has minimal action. By doing a conformal transformation, we can observe that this phase transition is the same one as was studied in [30], and which we also exploited in appendix C. 
In particular we can map the exterior of the two disks to the finite cylinder. The disk radii and separation fix the cylinder height. **Conformal map to the cylinder** To map our plane with two disks removed to the cylinder, we will first map to the stereographic sphere, then project back down into a rotated plane. In the rotated plane, the resulting region is an annulus, which is conformal to a cylinder with finite height by the usual exponential map. The basic idea is shown in figure 19. To go from the stereographic coordinates to Cartesian coordinates, we have the map \[x =\frac{2r\cos\phi}{1-\sin\phi\cos\theta},\] \[\tau =\frac{2r\sin\phi\sin\theta}{1-\sin\phi\cos\theta}. \tag{113}\] We can check that constant \(\phi=\phi_{0}\) curves map to circles in the \((x,\tau)\) plane. The circle parameters \((x_{0},R)\) are related to \((r,\phi_{0})\) by \[R =2r\tan\phi_{0},\] \[x_{0} =2r\sec\phi_{0}. \tag{114}\] Recall that to make the Wick rotation, we needed to set \(x_{0}^{2}-R^{2}=1\). Here this condition amounts to \(r=1/2\). We can also invert the above (keeping \(r\) free), to find \[r =\frac{1}{2}\sqrt{x_{0}^{2}-R^{2}},\] \[\tan\phi_{0} =\frac{R}{\sqrt{x_{0}^{2}-R^{2}}}. \tag{115}\] Next, we will map to a second plane, this one tilted 90 degrees compared to the first. We show the setting again in figure 19. The new plane coordinates \(\bar{x},\bar{\tau}\) are given in terms of the spherical coordinates by \[\bar{x} =\frac{2r}{1-\cos\phi}\sin\phi\cos\theta,\] \[\bar{\tau} =\frac{2r}{1-\cos\phi}\sin\phi\sin\theta. \tag{113}\] This takes the two caps of the sphere to a disk and an punctured plane, so that the remaining CFT lives on an annulus. The inner and outer radii are \[r_{+} =\frac{2r\sin\phi_{0}}{1-\cos\phi_{0}}=\frac{R\sqrt{x_{0}^{2}-R^{2 }}}{x_{0}-\sqrt{x_{0}^{2}-R^{2}}},\] \[r_{-} =\frac{2r\sin\phi_{0}}{1+\cos\phi_{0}}=\frac{R\sqrt{x_{0}^{2}-R^{ 2}}}{x_{0}+\sqrt{x_{0}^{2}-R^{2}}}. \tag{114}\] We can also invert 113 to obtain \[\tan\theta =\frac{\bar{\tau}}{\bar{x}}\] \[\sin\phi =\frac{4r\sqrt{\bar{x}^{2}+\bar{\tau}^{2}}}{\bar{x}^{2}+\bar{\tau }^{2}+4r^{2}} \tag{115}\] Figure 19: The stereographic mapping from the plane with two disks removed to the sphere is conformal, and takes twice punctured plane to a band around the sphere. A second stereographic map from the sphere to the dashed plane takes the band to an annulus. The radial conformal map then maps the annulus to the cylinder. The coordinate change from the initial \((x,\tau)\) plane to the \((\bar{x},\bar{\tau})\) plane is \[\bar{x} =2r\frac{x^{2}+\tau^{2}-4r^{2}}{(x-2r)^{2}+\tau^{2}}\] \[\bar{\tau} =8r^{2}\frac{\tau}{(x-2r)^{2}+\tau^{2}} \tag{101}\] We can check explicitly that this is conformal, and maps circles centered around the origin in one set of coordinates to circles offset in the \(x\) direction in the other coordinates. Finally, we go to the cylinder. Using radial coordinates \((r,\phi)\) in the plane, the map to the cylinder is given by \(r=e^{X}\). The height of the cylinder is therefore \[H=X_{+}-X_{-}=\ln\left(\frac{r_{+}}{r_{-}}\right)=\ln\left(\frac{x_{0}+\sqrt{ x_{0}^{2}-R^{2}}}{x_{0}-\sqrt{x_{0}^{2}-R^{2}}}\right). \tag{102}\] **Comparison to the connected solution** Recall from section C equation C.2 that the finite cylinder is in the disconnected phase when \[H\geq\pi \tag{103}\] which using 102 leads to \[1\leq x_{0}\leq\frac{e^{\pi}+1}{e^{\pi}-1}. \tag{104}\] Note the first inequality is automatically satisfied by our previous condition for Wick rotation, \(x_{0}^{2}-R^{2}=1\). 
In the next section, we will relate the shift parameter \(x_{0}\) to the angular opening of the brane when viewed in global AdS. In particular, we find \(x_{0}=\sec\Delta\phi\) for \(\Delta\phi\) the angular radius of the region. The condition on \(\Delta\phi\) then is \(\Delta\phi\lesssim 24^{\circ}\). ### Disconnected branes in global AdS\({}_{2+1}\) The disconnected solution is \[(x\pm\ell x_{0})^{2}+\tau^{2}+z^{2}=R^{2} \tag{105}\] Now we use the coordinate transformation 100 again to express this solution in global coordinates, finding \[\frac{1}{2}(e^{\tau_{G}}+e^{-\tau_{G}}(x_{0}^{2}-R^{2}))=\mp x_{0}\sin(r)\cos(\phi) \tag{106}\] To obtain Lorentzian global AdS, we will Wick rotate \(\tau_{G}\to it_{G}\). To ensure this brane solution is well defined after the Wick rotation, we need to set \(x_{0}^{2}-R^{2}=1\), so that the surface remains real. Doing so and Wick rotating, we obtain \[\cos(t_{G})=\mp x_{0}\sin(r)\cos(\phi) \tag{107}\] We can identify the two choices of sign with a shift in the angular coordinate, so that we have two branes in the geometry, with \[\cos(t_{G}) =x_{0}\sin(r)\cos(\phi)\] \[\cos(t_{G}) =x_{0}\sin(r)\cos(\phi-\pi) \tag{119}\] Finally, we can identify \(x_{0}\) with \(1/\cos(\Delta\phi)\) by looking at this equation restricted to \(r=\pi/2\), \(t=0\). ### Minimal surfaces Next we need to find minimal surfaces in the global geometry. We consider an interval ending at \(t_{G}\), and \(\phi=0,\pi\), and look for the minimal surface that encloses it. The transformation from global to Poincare, in Lorentzian signature, is \[t =\frac{\ell\sin(t_{G})}{\cos(t_{G})-\sin(\phi)\sin(r)}\] \[x =\frac{\ell\cos(\phi)\sin(r)}{\cos(t_{G})-\sin(\phi)\sin(r)}\] \[z =\frac{\ell\cos(r)}{\cos(t_{G})-\sin(\phi)\sin(r)} \tag{120}\] Under this coordinate transformation, we get to the global AdS metric 5.9 with the brane \[\cos(t_{G})=\frac{\ell^{2}+R^{2}}{\ell^{2}-R^{2}}\sin(\phi)\sin(r). \tag{121}\] Notice that \(R\) is related to \(\Delta\phi\) differently than when we transformed from Euclidean Poincare to Euclidean global coordinates, in particular \[\frac{\ell^{2}+R^{2}}{\ell^{2}-R^{2}}=\sec(\Delta\phi). \tag{122}\] We are interested in minimal surfaces anchored to \(\phi=\pm\pi/2\), \(t_{G}=s\). This means \(t=\tan(s)\), \(x=\pm\sec(s)\), so that, using equation 108 for areas of geodesics in Poincare, the brane anchored surface has area \[A_{b}=\ell\log\left(\frac{\ell^{2}-R^{2}}{R\delta}\right) \tag{123}\] where the cutoff is at \(z=\delta\). To compare this to the connected solution, we will need to translate this to a \(r\) cut-off. Using 120 with \(\phi=0\), we find \[\delta=\frac{\epsilon}{\cos(t_{G})+1} \tag{124}\] so that the disconnected minimal surface has area \[A_{b}=2\ell\log\left(\frac{\cos(t)+1}{\epsilon\tan\Delta\phi}\right) \tag{125}\] where we used D.22 to replace \(R\) in favor of \(\Delta\phi\). The minimal surface extending directly though the bulk at constant \(t\), \(\phi\), has area \[A_{0}=2\ell\log\left(\frac{2}{\epsilon}\right).\] (D.26) Requiring \(A_{0}\leq 2A_{b}\) we have \[\boxed{\cos(t)\geq 2\tan\Delta\phi-1}\] (D.27) Recall that in our protocol, we placed the cut at the top of the diamonds \(V_{i}\), which will be at time \(t=\pi/2-\Delta\phi\). Inserting this into the condition above, we find that we need \(\Delta\phi\leq 39^{\circ}\) for the connected surface to be minimal.
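The two angular thresholds used in this appendix and quoted in the main text can be reproduced numerically: the \(\sim 24^{\circ}\) bound from \(\sec\Delta\phi\leq(e^{\pi}+1)/(e^{\pi}-1)\), and the \(\sim 39^{\circ}\) bound from \(\cos(t)\geq 2\tan\Delta\phi-1\) evaluated at \(t=\pi/2-\Delta\phi\). A short Python check:

```python
import numpy as np
from scipy.optimize import brentq

# (1) Connected bulk geometry between the two BCFTs: sec(dphi) <= (e^pi + 1)/(e^pi - 1).
dphi_connected = np.degrees(np.arccos((np.e**np.pi - 1) / (np.e**np.pi + 1)))
print(f"connected-geometry threshold: {dphi_connected:.1f} degrees")   # ~23.5, i.e. ~24

# (2) Connected extremal surface minimal: cos(t) >= 2 tan(dphi) - 1 at t = pi/2 - dphi,
#     i.e. sin(dphi) >= 2 tan(dphi) - 1.
f = lambda a: np.sin(a) - (2 * np.tan(a) - 1)
dphi_minimal = np.degrees(brentq(f, 0.1, 1.2))
print(f"minimal-surface threshold: {dphi_minimal:.1f} degrees")        # ~39
```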
2301.11982
Strategy evolution on dynamic networks
Models of strategy evolution on static networks help us understand how population structure can promote the spread of traits like cooperation. One key mechanism is the formation of altruistic spatial clusters, where neighbors of a cooperative individual are likely to reciprocate, which protects prosocial traits from exploitation. But most real-world interactions are ephemeral and subject to exogenous restructuring, so that social networks change over time. Strategic behavior on dynamic networks is difficult to study, and much less is known about the resulting evolutionary dynamics. Here, we provide an analytical treatment of cooperation on dynamic networks, allowing for arbitrary spatial and temporal heterogeneity. We show that transitions among a large class of network structures can favor the spread of cooperation, even if each individual social network would inhibit cooperation when static. Furthermore, we show that spatial heterogeneity tends to inhibit cooperation, whereas temporal heterogeneity tends to promote it. Dynamic networks can have profound effects on the evolution of prosocial traits, even when individuals have no agency over network structures.
Qi Su, Alex McAvoy, Joshua B. Plotkin
2023-01-27T20:43:01Z
http://arxiv.org/abs/2301.11982v3
# Strategy evolution on dynamic networks ###### Abstract Models of strategy evolution on static networks help us understand how population structure can promote the spread of traits like cooperation. One key mechanism is the formation of altruistic spatial clusters, where neighbors of a cooperative individual are likely to reciprocate, which protects prosocial traits from exploitation. But most real-world interactions are ephemeral and subject to exogenous restructuring, so that social networks change over time. Strategic behavior on dynamic networks is difficult to study, and much less is known about the resulting evolutionary dynamics. Here, we provide an analytical treatment of cooperation on dynamic networks, allowing for arbitrary spatial and temporal heterogeneity. We show that transitions among network structures can favor the spread of cooperation, even if each individual social network would inhibit cooperation when static. Furthermore, we show that spatial heterogeneity tends to inhibit cooperation, whereas temporal heterogeneity tends to promote it. Dynamic networks can have profound effects on the evolution of prosocial traits, even when individuals have no agency over network structures. \({}^{1}\)Department of Mathematics, University of Pennsylvania, Philadelphia, PA 19104, USA \({}^{2}\)Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA 19104, USA \({}^{3}\)Department of Biology, University of Pennsylvania, Philadelphia, PA 19104, USA ## 1 Introduction The geographic locations of individuals, together with their social or physical connections, constrain interactions and shape behavioral evolution in a population. A network is a useful model of a population's structure, where nodes represent individuals and edges capture interactions. How network structure affects evolutionary dynamics has been extensively investigated over the last several decades, using techniques including computer simulations, mathematical analysis, and experimental studies with human subjects. A well-known and illustrative finding [1] is that population structure can favor cooperation provided the ratio of the benefit from cooperative behavior, \(b\), to its cost, \(c\), exceeds the average number of neighbors, \(d\). The mechanism underlying this cooperation-promoting effect is that spatial structure enables the formation of cooperative clusters of individuals, who have high payoffs and are capable of resisting invasion by defectors. Most existing studies are based on a static network, where the duration and intensity of interactions remain unchanged throughout the evolutionary process. In contrast, empirical networks frequently vary over time [2]. Representative examples include communication networks involving telephone calls or emails [3, 4]; networks of physical proximity, where individuals encounter different people as they move through space [5, 6]; and ecological networks that change with the seasons as organisms go through different phases of their life cycles [7, 8, 9]. Temporal features can even reverse the evolutionary outcomes. For example, whether an idea or information diffuses throughout a society depends not only on the structure of the network guiding interactions but also on the timing of those interactions, as the coexistence of individuals with different active timing maximizes diffusion [10]. 
In the context of epidemics, high concurrency (the number of neighbors of a node) leads to a lower epidemic threshold under susceptible-infected-susceptible dynamics, while low concurrency can suppress epidemics [11]. Despite the attention that other dynamical processes have received on time-varying networks, the evolution of cooperation in this setting remains much less studied. One reason to discount any positive effect of dynamic structures comes from intuition on static networks: since cooperators spread via clusters, network transitions will tend to break up these clusters, likely leading to diminished reciprocity and exploitation by defectors. Another impediment to undertaking research in this area is the lack of mathematical tools for analyzing strategic interactions on dynamic networks. In static networks, mathematical approaches provide general conditions for how structure affects evolutionary dynamics [12, 13]. They also allow for extensive, efficient numerical explorations into example networks, both artificial and empirical [14]. Whether these approaches can be extended to dynamic networks remains unknown. Endogenous network transitions often produce predictable results for the evolution of cooperation. For example, if cooperators can selectively seek out new connections with other cooperators ("cooperation begets friends") and sever ties with defectors, then it is not surprising to find that these endogenous network changes favor the spread of cooperation. But it is much less clear how exogenous transitions in network structure will affect the evolution of cooperation, and so this is the main focus of our study. There is also substantial evidence for the prevalence of exogenous network transitions in nature, ranging from weather fluctuations to human-induced changes to ecosystems [15]. The scope of models with dynamic networks is broad and can include environmental feedback and ecosystem engineering [16]. And even when an organism has some agency over the structure of their environment, the behavioral trait of interest might be unrelated to these changes (e.g. movement between cities need not be tied to altruistic tendencies). Finally, exogenous network transitions that are not dependent on individual behavior provide the most natural point of comparison to static structures. In this paper, we study the evolution of strategic behavior in a population whose structure of social interactions changes over time. At any point in time, the population structure is described by a network whose nodes represent individuals and edges represent interactions. Individuals may change their strategies over time, imitating neighbors who have higher payoffs; and the network of interactions itself may also change over time. The interaction network changes at random times, unrelated to the current composition of strategies in the population. We derive general mathematical results for when cooperative behavior is favored, which apply to any stochastic transition pattern among any number of networks, each with arbitrary structure. Surprisingly, we find that in a large class of networks, stochastic transitions among networks can strongly promote cooperation, even though they tend to disrupt cooperative clusters in each network. In fact, even if each individual static network would disfavor cooperation, transitions among them can rescue cooperation. We conclude by analyzing spatial and temporal burstiness, which we show have opposite effects on the evolution of cooperation.
## 2 Model Our model consists of a finite population of size \(N\), with individuals engaged in pairwise social interactions. The structure of the population varies over time, and at each discrete time it is represented by one of \(L\) weighted networks, each with \(N\) nodes. For network \(\beta\in\{1,\ldots,L\}\), we let \(w_{ij}^{[\beta]}\) denote the weight of the edge between nodes \(i\) and \(j\). We assume that all networks are undirected, meaning \(w_{ij}^{[\beta]}=w_{ji}^{[\beta]}\) for all \(i,j\in\{1,\ldots,N\}\) and \(\beta\in\{1,\ldots,L\}\). Each individual in the population can adopt one of two types, or strategies: "cooperator" (\(C\)) or "defector" (\(D\)). Individuals interact in pairwise donation games, with cooperators paying a cost \(c\) to generate benefit \(b\) for their co-player. Defectors pay no costs and generate no benefits. In each time step, everyone plays a donation game with each of their neighbors in the current network, \(\beta\). We denote the state of the population by \(\mathbf{x}\), where \(x_{i}\in\{0,1\}\) indicates the type of individual \(i\), with \(0\) and \(1\) representing types \(D\) and \(C\), respectively. The accumulated payoff to individual \(i\) in network \(\beta\) is then \[u_{i}\left(\mathbf{x},\beta\right)=\sum_{j=1}^{N}w_{ij}^{[\beta]}\left(-cx_{i }+bx_{j}\right). \tag{1}\] In other words, individual \(i\) receives a benefit \(w_{ij}^{[\beta]}b\) from of each of its neighbors \(j\) who are cooperators (\(x_{j}=1\)), and \(i\) pays a cost \(w_{ij}^{[\beta]}c\) to each \(j\) if \(i\) is itself a cooperator (\(x_{i}=1\)). An individual's accumulated payoff in network \(\beta\) is transformed into fecundity, which represents \(i\)'s propensity to reproduce or, equivalently, to be imitated by another individual. The fecundity is given by \(F_{i}\left(\mathbf{x},\beta\right)=1+\delta u_{i}\left(\mathbf{x},\beta\right)\), where \(\delta\) is called the selection intensity, which we assume to be small (\(\delta\ll 1\)). This assumption, called "weak selection," is common in the literature and it aims to capture scenarios in which the social trait (\(C\) or \(D\)) has a small effect on reproductive success. After all pairwise games are played in network \(\beta\) and individuals accumulate payoffs, a random individual \(i\) is selected uniformly from the population to update his or her strategy. This individual then imitates the type of a neighbor, \(j\), with probability proportional to \(j\)'s fecundity. In other words, in network \(\beta\), the probability that \(i\) copies \(j\)'s type is \[e_{ji}\left(\mathbf{x},\beta\right)=\frac{1}{N}\frac{F_{j}\left(\mathbf{x}, \beta\right)w_{ji}^{[\beta]}}{\sum_{k=1}^{N}F_{k}\left(\mathbf{x},\beta\right) w_{ki}^{[\beta]}}. \tag{2}\] Here, the factor of \(1/N\) represents the probability that \(i\) is chosen to update in the first place. After each strategic update, the population structure itself then undergoes a transition step. The probability of moving from network \(\beta\) to network \(\gamma\) is independent of the strategic composition of the population, and it depends only on the current network state, \(\beta\). The stochastic process governing these transitions is described by an \(L\times L\) matrix \(Q=\big{(}q_{\beta\gamma}\big{)}\), where \(q_{\beta\gamma}\) is the probability of transitioning from network \(\beta\) to network \(\gamma\). Note that there may be (and we often assume) a positive chance that the network will remain unchanged at the transition stage, e.g. 
\(q_{\beta\beta}>0\). The pairwise social interactions, strategic update, and network transition, which comprise a single time step, are depicted in Fig. 1.

Figure 1: **Evolutionary games on dynamic networks.** **a**, The population structure at any time is described by a network, which may change from one time point to the next. (The figure illustrates an example with two possible networks.) **b**, Each individual (node) in the population adopts the strategy cooperate (\(C\)) or defect (\(D\)) in games played with each neighbor. Each individual \(i\) accumulates a total payoff \(u_{i}\) across pairwise interactions with neighbors, which determines their reproductive rate \(F_{i}=1+\delta u_{i}\). **c**, An individual (marked by “?”) is selected uniformly at random to update its strategy, and all neighboring individuals, indicated by black circles, compete to be imitated by the focal node, with probability proportional to reproductive rates. **d**, After an individual updates its strategy, the population structure itself either changes (from network \(1\) to network \(2\) with probability \(q_{12}\), or from network \(2\) to network \(1\) with probability \(q_{21}\)) or remains the same. **e**, Social interactions and strategy updates repeat on the population structure at the next time step, \(n+1\).

## 3 Results

Without mutation, the population must eventually reach a monomorphic strategic state in which all individuals have the same type, either cooperate or defect. The duration that the population spends in each network is proportional to the corresponding value in the stationary distribution \(v\), which is determined by the network transition matrix \(Q\) (see Methods). We assume that a mutant appears in network \(\beta\) with probability \(v\left(\beta\right)\), and it is located at a node chosen uniformly at random. We let \(\rho_{C}\) denote the probability that a single cooperator mutant eventually takes over a resident population of defectors. Likewise, we let \(\rho_{D}\) be the probability that a single defector mutant takes over a resident population of cooperators. We use the condition \(\rho_{C}>\rho_{D}\) to measure whether selection favors cooperation relative to defection [17].

### Selection condition for the evolution of cooperation

We first derive a general result applicable to almost any transition pattern, \(Q\), among any finite number of networks, each with arbitrary spatial structure. This result combines several different quantities describing the dynamics under neutral drift (\(\delta=0\)), together with the payoffs for the game [13, 18]. Let \(p_{ij}^{[\beta]}:=w_{ij}^{[\beta]}/\sum_{k=1}^{N}w_{ik}^{[\beta]}\) be the one-step random-walk probability of moving from \(i\) to \(j\) on network \(\beta\). This quantity can be interpreted as the probability that \(i\) imitates the strategy of \(j\) under neutral drift, conditioned on \(i\) being chosen for an update. In other words, \(p\) can be seen as defining an ancestral process, tracking replacement backwards in time under neutral drift. The most fundamental neutral quantity is the reproductive value of individual \(i\) in network \(\beta\), which can be interpreted as the probability that a mutant introduced at node \(i\) in network \(\beta\) generates a lineage that eventually takes over the population. This quantity, denoted by \(\pi_{i}^{[\beta]}\), is independent of the payoffs and thus independent of the particular mutant that arises in the population.
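The one-step random-walk probabilities \(p_{ij}^{[\beta]}\) and the stationary distribution \(v\) of the transition matrix \(Q\) are both straightforward to compute. The following is a minimal sketch (our own illustration with hypothetical function names, not code from the study), using standard linear algebra:

```python
import numpy as np

def step_probabilities(W):
    """One-step random-walk matrix p[i, j] = w_ij / sum_k w_ik (assumes no isolated nodes)."""
    return W / W.sum(axis=1, keepdims=True)

def stationary_distribution(Q):
    """Stationary distribution v of the network-transition chain Q (solves vQ = v, sum(v) = 1)."""
    eigenvalues, eigenvectors = np.linalg.eig(Q.T)
    v = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
    return v / v.sum()

# Toy example: a 3-node star and a 3-node path, with asymmetric switching probabilities.
W1 = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])   # star centred on node 0
W2 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # path 0 - 1 - 2
Q = np.array([[0.9, 0.1], [0.2, 0.8]])

p1, p2 = step_probabilities(W1), step_probabilities(W2)
v = stationary_distribution(Q)    # long-run fraction of time spent in each network
print(v)                          # [2/3, 1/3] for this Q
print(p1[1])                      # a leaf of the star steps to the hub with probability 1
```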
The version of reproductive value that we use is a generalization of Fisher's classical notion [19, 20] that also takes into account environmental changes. It can be calculated using Equation 5 in Methods. Another neutral quantity we use is related to coalescence times. Under neutral drift, we can look backward in time and ask how long it takes, on average, before two or more lineages meet at a common ancestor. Starting in network \(\beta\), let \(T^{[\beta]}\) be the expected number of steps to the most recent common ancestor of the entire population. If \(\tau_{ij}^{[\beta]}\) is the expected time to the most recent common ancestor of \(i\) and \(j\), then the mean amount of time that \(i\) and \(j\) are identical by descent is \(T^{[\beta]}-\tau_{ij}^{[\beta]}\). The pairwise times to a common ancestor, \(\tau\), can be calculated using Equation 8 in Methods.

In terms of the neutral quantities \(\pi\), \(\tau\), and \(T\), the general condition for cooperation to be favored over defection under weak selection is given by \[\sum_{i,j=1}^{N}\sum_{\beta=1}^{L}v\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}\sum_{\ell=1}^{N}\begin{pmatrix}-\left(T^{[\beta]}-\tau_{ij}^{[\beta]}\right)w_{j\ell}^{[\beta]}c\\ +\left(T^{[\beta]}-\tau_{j\ell}^{[\beta]}\right)w_{\ell j}^{[\beta]}b\end{pmatrix}\] \[>\sum_{i,j,k=1}^{N}\sum_{\beta=1}^{L}v\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}\,p_{ik}^{[\beta]}\sum_{\ell=1}^{N}\begin{pmatrix}-\left(T^{[\beta]}-\tau_{jk}^{[\beta]}\right)w_{k\ell}^{[\beta]}c\\ +\left(T^{[\beta]}-\tau_{j\ell}^{[\beta]}\right)w_{\ell k}^{[\beta]}b\end{pmatrix}\,. \tag{3}\]

Broadly speaking, what Equation 3 says is that an individual \(i\) is chosen, a cooperator is placed at a neighbor \(j\) of \(i\), and another neighbor \(k\) of \(i\) is chosen to compare its (weighted) payoff with that of the cooperator. If \(j\)'s weighted payoff exceeds that of \(k\), then selection favors the evolution of cooperation. The condition above reflects a similar intuition behind the corresponding condition for static networks (\(L=1\); see Allen et al. [14] or Fig. 1 of McAvoy & Wakeley [21]), but there are a few notable effects of network transitions in Equation 3. The first effect is that the network \(\beta\) is chosen with probability \(v\left(\beta\right)\), where \(v\) is the stationary distribution of the structure-transition chain defined by \(Q\). Moreover, whereas individual \(i\) is chosen with probability based on reproductive value \(\pi_{i}\) on a static network, here \(i\) is chosen based on reproductive value in the _next_ network following imitation, \(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\). The reason for this is natural, because once an individual replaces \(i\) in network \(\beta\), the network immediately transitions to network \(\gamma\), and so the resulting reproductive value of \(i\) must be understood within the context of \(\gamma\). Once \(\beta\) and \(i\) are chosen, the probabilities of choosing neighbors \(j\) and \(k\) are \(p_{ij}^{[\beta]}\) and \(p_{ik}^{[\beta]}\), respectively. Moreover, if \(j\) is a cooperator, then individual \(k\) is also a cooperator for \(T^{[\beta]}-\tau_{jk}^{[\beta]}\) time steps, and during each such step \(k\) pays \(cw_{k\ell}^{[\beta]}\) to provide \(\ell\) with a benefit of \(bw_{k\ell}^{[\beta]}\).
This property accounts for the weighting of benefits and costs in Equation 3. Note that the term \(T^{[\beta]}\) cancels out in Equation 3, and so although this quantity is helpful for gathering intuition, it is not strictly needed to evaluate whether cooperators are favored by selection.

Given the vast number of networks with \(N\) nodes, as well as the vast space of possible transitions among them, we focus most of our analysis on transitions between a pair of networks (i.e. \(L=2\)). For a given network transition matrix \(Q\), the value \(1/q_{12}\) (resp. \(1/q_{21}\)) gives the expected time during which the population remains in network \(1\) (resp. network \(2\)) before transitioning to network \(2\) (resp. network \(1\)). We denote \(1/q_{12}\) and \(1/q_{21}\) by \(t_{1}N\) and \(t_{2}N\), respectively, so that \(t_{1}\) and \(t_{2}\) correspond to the expected number of times each individual updates prior to a transition to a different network. Small values of \(t_{1}\) and \(t_{2}\) correspond to frequent changes in the population structure. Sufficiently large values of \(t_{1}\) and \(t_{2}\) indicate that the population structure is nearly fixed, so that the population will reach an absorbing strategic state (all \(C\) or all \(D\)) before the network transitions to a different state. The regime \(t_{1}=1\) (resp. \(t_{2}=1\)) means that, on average, each individual updates their strategy once in network \(1\) (resp. network \(2\)) before the network structure changes.

### Dynamic networks with dense and sparse cliques

We begin by studying dynamic transitions between a pair of networks where each network is comprised of two cliques. One clique is a star graph, which is sparse, and the other clique is a complete graph, which is dense. In each network, the two cliques are connected by a single edge. When the population transitions from one network to another, the star clique becomes the complete clique and _vice versa_ (see Figure 2**a**). This kind of dynamic network models a situation in which a portion of the population is densely connected while the remainder of the population is connected to only a single node; and which portion is dense versus sparse changes over time, as the state transitions between the two networks.

When the population evolves on either network \(1\) or network \(2\) alone, the fixation probability of cooperators is always lower than that of defectors, i.e. \(\rho_{C}<\rho_{D}\), meaning that cooperation is disfavored by selection regardless of the benefit-to-cost ratio \(b/c\) (Figure 2**b**). Nonetheless, when the population transitions dynamically between networks \(1\) and \(2\), cooperation is favored provided the benefit-to-cost ratio \(b/c\) exceeds the critical value \(\left(b/c\right)^{*}\approx 7\). As a result, we see that dynamic population structures can favor cooperation, even when all networks involved would each individually suppress cooperation were they static.

Figure 2: **Transitions between networks that contain dense and sparse cliques.** We consider dynamic transitions between two networks, each of which is comprised of two cliques containing \(aN\) and \((1-a)N\) nodes, respectively. **a**, Each network has a star graph comprising one clique and a complete graph comprising the other clique, with a single edge connecting the two cliques. When network \(1\) transitions to network \(2\), the star clique becomes the complete clique and _vice versa_. **b**, The fixation probability of cooperation versus defection, \(\rho_{C}-\rho_{D}\), as a function of the benefit \(b\) in the donation game. Selection favors cooperation over defection if \(\rho_{C}-\rho_{D}\) exceeds the horizontal line, i.e., \(\rho_{C}>\rho_{D}\). Dots indicate the results of Monte Carlo simulations on dynamic networks (solid dots) and on a static network (open dots). The vertical lines correspond to analytical predictions for the critical benefit-to-cost ratio \((b/c)^{*}\) on dynamic networks, above which we predict cooperation will be favored. The results show that cooperation is always disfavored in both static network \(1\) and static network \(2\), but dynamic transition between these networks can favor cooperation. Here, we show two examples with different clique sizes, \(a=0.5\) (blue) and \(a=0.7\) (green). The beneficial effect of structure transitions is strongest when cliques have equal size (\(a=0.5\); see Supplementary Figure 1). Parameter values: \(N=40\), \(t=1\), and \(c=1.0\). Fixation probabilities are computed across an ensemble of \(10^{7}\) runs with selection intensity \(\delta=0.002\).

Dynamic population structure facilitates cooperation across a wide range of population sizes for the pair of networks shown in Figure 2**a**. When \(t=1\), which means that individuals each update their strategy once, on average, before the network changes, cooperation can be favored by selection regardless of network size \(N\) (Figure 3**a**). By contrast, if the network is static, then cooperation is favored only when the population size is very small (\(N<17\)), and even then only if the benefit-to-cost ratio is large. For larger population sizes, \(N\geqslant 17\), the critical benefit-to-cost ratio is negative on a static network, \(\left(b/c\right)^{*}<0\), which means that selection actually favors the evolution of spite, a behavior in which individuals pay a cost \(c\) to decrease the fitness of their opponent by \(b\). For this static network we can prove that \(\left(b/c\right)^{*}\approx-N/2\) in large populations (see Methods), compared to \(\left(b/c\right)^{*}\approx 7\) for any population size in a dynamic network. Consequently, we see that the effects of dynamic population structures are dramatic, capable of converting a spiteful outcome into a cooperative one, and they persist across a wide range of population sizes.

Dynamic networks also facilitate cooperation across a wide range of structural transition rates. For a sufficiently large population size, \(N\), on a single static network of the type shown in Figure 2**a**, the critical benefit-to-cost ratio is negative (\(\left(b/c\right)^{*}\approx-N/2\)), which means that selection favors the evolution of spite. By contrast, dynamic transitions between networks 1 and 2 can favor cooperation, especially when they occur rapidly (Figure 3**b**). When the transition rate is very slow (in particular, when \(t\) exceeds \(\left(\sqrt{2}+1\right)N\)), the population stays in one network for so long that the evolutionary dynamics are similar to those of a static network, and the critical benefit-to-cost ratio becomes negative (Figure 3**b**). In the limit of the transition rate approaching zero (\(t\rightarrow\infty\)), the "dynamic" network is actually static and our dynamic calculations agree with those of a static network.
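For concreteness, the structures and transition pattern used in this section can be written down explicitly. The sketch below is our own construction following the description in Methods (the function names are ours); it builds the pair of two-clique networks from Figure 2**a** and a transition matrix whose mean sojourn times are \(t_{1}N\) and \(t_{2}N\) update steps:

```python
import numpy as np

def two_clique_pair(n, m):
    """Weight matrices for the two networks of Figure 2a (0-indexed version of the
    construction in Methods). Network 1: a star on nodes 0..n-1 with hub n-1, plus a
    complete graph on nodes n..n+m-1. Network 2: the roles are swapped (complete graph
    on 0..n-1, star on n..n+m-1 with hub n). In both networks a single bridge edge
    connects node n-1 and node n+m-1."""
    N = n + m
    W1, W2 = np.zeros((N, N)), np.zeros((N, N))
    W1[:n - 1, n - 1] = W1[n - 1, :n - 1] = 1.0   # star clique of network 1
    W1[n:, n:] = 1.0                              # complete clique of network 1
    W2[:n, :n] = 1.0                              # complete clique of network 2
    W2[n, n + 1:] = W2[n + 1:, n] = 1.0           # star clique of network 2
    for W in (W1, W2):
        np.fill_diagonal(W, 0.0)                  # no self-loops
        W[n - 1, N - 1] = W[N - 1, n - 1] = 1.0   # bridge between the two cliques
    return W1, W2

def transition_matrix(N, t1, t2):
    """2x2 network-transition matrix with mean sojourn times t1*N and t2*N update steps."""
    q12, q21 = 1.0 / (t1 * N), 1.0 / (t2 * N)
    return np.array([[1.0 - q12, q12], [q21, 1.0 - q21]])

N, a, t = 40, 0.5, 1.0
W1, W2 = two_clique_pair(int(a * N), N - int(a * N))
Q = transition_matrix(N, t, t)    # symmetric switching: each network lasts t*N steps on average
```

Setting \(t_{1}=t_{2}=t\) gives the symmetric switching pattern \(p=q=1/(tN)\) used for the explicit results in Methods.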
### How dynamic structures can facilitate the spread of cooperation To further understand how dynamic structures can favor cooperation more than their static counterparts, we inspect evolutionary trajectories on the dense-sparse graph of Figure 2**a**. When the network is static, the process is depicted in Figure 4**a**. Starting from a specific configuration of cooperators in both hubs and two leaf nodes, cooperation will initially tend to spread in the star clique while shrinking in the complete clique. After cooperation fixes within the star clique, selection strongly suppresses further spread to the complete clique because the node connected to the star clique is exploited by multiple defectors. If ever a defector manages to diffuse to the hub of the star clique, however, defection will then rapidly spread within the star and ultimately fix in the entire network. By contrast, if the population undergoes structural transitions between networks (e.g. \(n_{2}\to n_{3}\) in Figure 4**b**), the star clique of network \(1\) will transition into the complete clique of network 2, which promotes the exploitation of cooperators and allows defectors to spread (\(n_{3}\to n_{4}\)). Figure 3: **Dynamic structures facilitate cooperation for a broad range of population sizes and network transition rates.** We consider transitions between the two networks shown in Figure 2**a**, each composed of a sparse clique and a dense clique. **a**, The critical benefit-to-cost ratio required to favor cooperation as a function of population size, \(N\), for \(a=0.5\) and \(t=1\). Dynamic networks can favor cooperation for any population size, \(N\), provided \(b/c>7\). In contrast, the corresponding static networks favor cooperation only in small populations (\(N<17\)), and they favor the evolution of spite \(\left(\left(b/c\right)^{*}<0\right)\) in larger populations. Dots show exact analytical computations for finite \(N\) (Equation 3), and lines show analytical approximations for large \(N\). **b**, The critical benefit-to-cost ratio as a function of the mean duration between network transitions, \(t\), for \(a=0.5\) and \(N=10\),000. Whereas a static network always disfavors cooperation, dynamic networks can favor cooperation provided they do not transition too slowly (\(t<\left(\sqrt{2}+1\right)N\)). Dots show exact analytical computations for arbitrary \(t\); the blue line shows an analytical approximation in the regime \(t\ll N\); and the red line shows an analytical approximation in the regime \(t=O\left(N\right)\). Meanwhile, the complete clique of network \(1\) transitions into the star clique of network \(2\), which stimulates the expansion of cooperators. The rate of cooperator expansion in one clique exceeds their exploitation in the other clique so that, overall, network transitions facilitate cooperation. ### Other dynamic structures The examples of dynamic structure considered so far may seem highly specialized because the networks each contain two stylized cliques with a single edge between them. But we find similar results on networks with many cliques and with more complicated connections between them. In Figure 5**a,b**, we analyze networks comprised of multiple star and complete cliques, connected by either hub nodes or by leaf nodes. In both cases, we again find that dynamic transitions between networks reduce the critical benefit-to-cost ratio for the evolution of cooperation, compared to any single static network. 
This effect is increasingly strong as the network size grows (see Supplementary Figure 2). For the networks in Figure 5**a** with \(N=1\),200, for example, the critical benefit-to-cost ratio to favor cooperation is \(\left(b/c\right)^{*}\approx 188.1\) when the network is static, which is reduced to \(\left(b/c\right)^{*}\approx 3.49\) when the network is dynamic. In addition to networks comprised of star and complete cliques, we also investigated networks with cliques defined by various types of random graphs, such as Erdos-Renyi and scale-free networks. In the former case, node degrees within a clique do not vary substantially, while the latter exhibits large variation in degree. For both classes of random networks, we still find that dynamic transitions between random networks tends to promote cooperation, compared to each static network (Figure 5**c,d**). In all examples of dynamic networks considered thus far, transitions between networks involve dense regions of a network swapping with sparse regions. Regardless of the exact structure of the cliques, this general feature of structural transitions conforms to the underlying intuition for why dynamic networks can facilitate cooperation (Figure 4). Dynamic structures can still facilitate cooperation even when networks differ in only a small fraction of connections, although the strength of the effect is weakened. Furthermore, these effects also persist (and can be quite strong) when populations transition between three or more network structures. We give illustrations in Supplementary Figure 3. ### The probability and time to fixation of cooperation We have studied dynamic structures by comparing the fixation probability of a cooperator to that of a defector, and by calculating the critical benefit-to-cost ratio \(\left(b/c\right)^{*}\) that ensures \(\rho_{C}>\rho_{D}\). We can also study the fixation probability \(\rho_{C}\) in absolute terms. We find that a dynamic population structure increases the fixation probability of cooperators, making them more likely to overtake the population, compared to a static network. Dynamic population structures also tend to decrease the duration before one type or another fixes (see Supplementary Figure 4), as well as shorten the mean conditional time until cooperators fix. The underlying intuition for these results is evident in Figure 4: on a static network, the population will tend to be stuck at stage \(n_{3}\) for a long time, before defectors eventually diffuse to the sparse clique; whereas Figure 4: **Intuition for how dynamic structures can facilitate cooperation.** Starting from a configuration in which the hub and two leaf nodes are cooperators (time point \(n_{1}\) in **a** and **b**), we illustrate how cooperation can be favored in dynamic structures even when it is inhibited in each static structure. Initially, cooperators are expected to spread in the star clique and shrink in the complete clique, and the rate of spreading exceeds that of shrinking. **a**, The evolutionary process on a static network. Cooperators rapidly take over the star clique and nearly die out in the complete clique (\(n_{1}\to n_{3}\)). The system tends to stay in this state until defectors spread throughout the star clique (\(n_{4}\)). **b**, The evolutionary process with network transitions. Initially, cooperators spread in the star clique and shrink in the complete clique (\(n_{1}\to n_{2}\)). 
However, when the network changes, the star clique transitions to the complete clique and _vice versa_ (\(n_{2}\to n_{3}\)). This transition is followed by the rapid spread of cooperators in the star clique and (relatively slower) shrinking of cooperators in the complete clique (\(n_{3}\to n_{4}\)). From \(n_{1}\) to \(n_{5}\), the frequency of cooperators increases in both cliques so that, under dynamic structure transitions, the population tends to result in cooperators being fixed in both cliques (\(n_{8}\)). Figure 5: **Evolution of cooperation on diverse dynamic structures.****a**, Each individual network comprises four star cliques and four complete cliques, where each star clique in one network corresponds to a complete clique in the other network, and clique hubs are fully connected to each other. **b** is similar to **a**, but cliques are now sparsely connected via leaf nodes. Network transitions facilitate cooperation compared to a static structure. **c**, Each individual network comprises two sparse and two dense cliques of Erdős-Rényi (ER) random networks [22], with cliques connected by random nodes. **d**, Each individual network comprises two sparse and two dense cliques of Goh-Kahng-Kim scale-free networks (GKK) [23] with exponent \(2.5\), with cliques connected by nodes of the highest degree. In all these examples, network transitions reduce the benefit-to-cost ratio \((b/c)^{*}\) required for cooperation compared to each static network. Parameters: \(t=1\) and \(N=64\) for **a** and **b**. For panels **c** and **d**, in network \(1\), the two sparse cliques have 30 nodes and average degree 4, and the two dense cliques have 40 nodes and average degree 30; in network 2, the two sparse cliques have 40 nodes and average degree 4, and the two dense cliques have 30 nodes and average degree 20. on dynamic networks, cooperators spread rapidly by selection in both cliques. Thus, dynamic networks increase the likelihood that cooperators sweep the population as well as the rate at which they do so. ### Spatial and temporal burstiness We can adapt our method of analysis to study the effects of spatial and temporal burstiness. For dynamically changing networks, spatial burstiness arises when there is temporal variation in the density of network edges (node degree), whereas temporal burstiness arises when there are periods of rapidly changing network structures along with periods in which structures change more slowly. Empirical networks of both human and non-human (e.g. honeybee) interactions are known to exhibit both spatial and temporal burstiness [10, 24], but the effects of these two forms of over-dispersion for behavior remains an active area of current research. To study spatial burstiness, we consider the following minimal model of dynamically varying networks that differ in their average node degree. We construct a pair of networks as follows (see Figure 6**a**): _(i)_ we first generate a single network with \(N\) nodes and \(E\) edges drawn from one of several classical families of networks (e.g. Erdos-Reyni random networks [22], Watts-Strogatz small-world networks [25], Barabasi-Albert scale free networks [26], etc.); _(ii)_ we decompose this network into two networks, by randomly selecting a fraction \(\varepsilon\in[0,1/2]\) of the edges for network 1 and using the remaining \((1-\varepsilon)\,E\) edges for network 2. If \(\varepsilon=1/2\) then the resulting networks 1 and 2 have the same density of interactions, and there is no spatial burstiness. 
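The construction in steps _(i)_ and _(ii)_ is easy to reproduce. The following is a minimal sketch of our own (function names and the Erdős-Rényi base graph are illustrative choices, not the authors' code) that randomly assigns a fraction \(\varepsilon\) of the edges of a base graph to network 1 and the remaining edges to network 2:

```python
import numpy as np

def erdos_renyi(N, mean_degree, rng):
    """Erdős-Rényi base graph with the given expected degree, as an adjacency matrix."""
    prob = mean_degree / (N - 1)
    upper = np.triu(rng.random((N, N)) < prob, k=1).astype(float)
    return upper + upper.T

def split_edges(W, eps, rng):
    """Assign a randomly chosen fraction eps of the edges of W to network 1 and the
    remaining (1 - eps) fraction to network 2 (step (ii) of the construction)."""
    iu, ju = np.triu_indices_from(W, k=1)
    edges = [(i, j) for i, j in zip(iu, ju) if W[i, j] > 0]
    order = rng.permutation(len(edges))
    k = int(round(eps * len(edges)))
    W1, W2 = np.zeros_like(W), np.zeros_like(W)
    for rank, e in enumerate(order):
        i, j = edges[e]
        target = W1 if rank < k else W2
        target[i, j] = target[j, i] = W[i, j]
    return W1, W2

rng = np.random.default_rng(1)
base = erdos_renyi(100, 20, rng)                 # 100 nodes, average degree 20
W1, W2 = split_edges(base, eps=0.3, rng=rng)     # eps = 0.5 would give equal edge densities
print(W1.sum() / 2, W2.sum() / 2)                # number of edges placed in each network
```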
For all other values of \(\varepsilon\neq 1/2\), the network exhibits spatial burstiness, and we study a simple stochastic transition pattern between these networks, with \(t_{1}=t_{2}=1\) so that each individual updates their strategy once, on average, before the network switches. We find that spatial burstiness tends to inhibit the evolution of cooperation, whereas spatial regularity (equal network densities) is more beneficial for cooperation (Figure 6**c**). In particular, regardless of the class of network from which networks 1 and 2 are derived, the critical ratio \(\left(b/c\right)^{*}\) required to favor cooperation is substantially increased (roughly by a factor of two) in the regime \(\varepsilon\to 0\) compared to the spatially homogeneous regime \(\varepsilon=1/2\).

We also study the effects of temporal burstiness, in which case networks 1 and 2 are chosen to have the same edge density (\(\varepsilon=1/2\)), but there are periods of rapid transitions between the two networks, punctuated by periods of slow transitions. To construct this scenario, instead of having a single transition matrix, \(Q\), we consider two such matrices, \(Q^{f}\) and \(Q^{s}\), corresponding to fast and slow epochs. At any time, the population is either in hidden state \(f\), so that network transitions occur according to \(Q^{f}\), or alternatively in hidden state \(s\), so that network transitions occur according to \(Q^{s}\). Whenever the population transitions to a new network, the hidden state is drawn uniformly-at-random from \(\{f,s\}\) (see Figure 6**b**). (Note that the hidden state \(s\) or \(f\) is re-sampled only when the network changes, from 1 to 2 or from 2 to 1.) The speed of network transitions in each hidden state, \(s\) and \(f\), is governed by a parameter \(\bar{t}\in[0,1]\), so that transitions are fast in state \(f\) and slow in state \(s\).

Figure 6: **Effects of spatial and temporal burstiness on cooperation.** We consider transitions between two networks, with either **a**, spatial burstiness (different edge densities) or **b**, temporal burstiness (periods of both rapid and slow transitions). **c**, The critical benefit-to-cost ratio \(\left(b/c\right)^{*}\) as a function of spatial heterogeneity, \(\varepsilon\). When the two networks have the same edge density, \(\varepsilon=0.5\), cooperation is most readily favored. When the networks differ in their edge densities (\(\varepsilon\ll 0.5\)), much larger values of \(b/c\) are required to support cooperation. **d**, The critical benefit-to-cost ratio \(\left(b/c\right)^{*}\) required to favor cooperation as a function of temporal heterogeneity, \(\bar{t}\). The case \(\bar{t}=1\) means that networks transition at the same rate, regardless of the hidden state. When \(\bar{t}<1\), the networks transition more rapidly in state \(f\) than in state \(s\), so that there is temporal burstiness. Results on spatial and temporal burstiness are shown for six classes of networks: random regular networks (RR), Erdős-Rényi networks (ER) [22], Watts-Strogatz small-world networks (SW) [25] with rewiring probability 0.1, Barabási-Albert scale-free networks (BA) [26], Goh-Kahng-Kim scale-free networks (GKK) [23] with exponent 2.5, and Holme-Kim scale-free networks (HK) [27] with triad formation probability 0.1. For each such class, we generate 2,000 networks, each with 100 nodes and average degree 20. We take \(\bar{t}=1\) in **c** and \(\varepsilon=0.5\) in **d**.

When the population enters
state \(f\), the expected duration before a network transition is small, namely \(\bar{t}N\). Whereas when the population enters state \(s\) the expected duration of the current network is longer, \((2-\bar{t})\,N\) (see Figure 6**b**). The case \(\bar{t}=1\) means that the current network has the same expected duration, regardless of the hidden state, and there is no temporal burstiness. When \(\bar{t}<1\), the networks transition more quickly in state \(f\) than they do in state \(s\). Regardless of the value of \(\bar{t}\), however, the total accumulated time spent in network \(1\) is the same as in network \(2\), throughout the evolutionary process. Temporal burstiness tends to facilitate cooperation, regardless of the overall structure of underlying networks (Figure 6**d**). In particular, the critical benefit-to-cost required to favor cooperation is largest when temporal burstiness is absent (\(\bar{t}=1\)), and it is reduced (typically by \(~{}20\%\)) when temporal burstiness is large (\(\bar{t}=0\)). Therefore, even when two networks have the same edge density (\(\varepsilon=1/2\)) and the accumulated time is spent on each network is the same, temporal burstiness facilitates the spread of cooperation, in stark contrast to our findings for spatial burstiness. ## 4 Discussion Many real-world interactions are ephemeral, and the entire network of social interactions may be subject to exogenous changes. Seasonal changes in a species' environment, for example, can lead to active and dormant periods, as can diurnal cycles. Such periodic transitions are widely used to model temporal networks [28, 29, 30, 31]. Stochastic transitions in social structures can arise from the effects of weather, animal migration and movement, and role reversal [32]. Motivated by the ubiquity of structural variation in nature, we provide a treatment of dynamic social networks that allows for arbitrary stochastic transitions between structures, with arbitrary networks within each time step. Our main mathematical result (Equation 3) predicts when cooperation will evolve on dynamic networks, under weak selection. The population structure in every time step need not be connected; all that we require is that the population satisfy a coherence condition so that it does not become fragmented into multiple sub-populations (see SSSI.1.1 in Supplementary Information). In addition to probabilistic transitions, our analysis also extends to deterministic and periodic network transitions (see Equation SI.33 in Supplementary Information). Our work can also cover other scenarios for changing structures, such as when the direction of public goods or information flow changes over time [33]; the number of active nodes or edges varies; or the population size fluctuates (in fact, the results in Supplementary Information allow for arbitrary patterns of replacement). Although prosocial behaviors in different strategic domains may manifest in different ways, such as trust games or dictator games, the desire to pay costs to benefit others has a substantial degree of domain generality [34]. Our conclusions, based on donation games, are thus indicative of how dynamic networks may broadly impact prosocial behavior. In the donation game, we have seen that changing social structures can promote cooperation, and that these effects can be dramatic. Even if every network individually disfavors cooperators, transitions between them can facilitate the evolution of cooperation - a result that is reminiscent of Parrondo's paradox [35]. 
Figure 4 illustrates the mechanism for how this phenomenon arises, as transitions move individuals between regions of the network that are dense to those that are sparse. These types of changing social structures are common in real-world settings. Groups and communities are more likely to form among people with close geographical locations and similar religion, culture, and affiliations [36, 37]; but connection density will be altered when individuals migrate or change social groups. Changes in connection densities in different communities may alternatively result from a phase difference, e.g. in online social networks across different time zones. Spatio-temporal heterogeneity of interaction density within a community also leads to time-varying connection densities, from sparse to dense and _vice versa_[2]. We find that each kind of burstiness has a clear effect on cooperation, either hindering it in the case of spatial burstiness or promoting it in the case of temporal burstiness. Broadly speaking, our work highlights the significance of integrating multiple communities into one system, since treating communities individually and independently may lead to erroneous conclusions about behavioral dynamics [38]. All of our results are based on exogenous network transitions, which means that individuals cannot selectively engineer their neighborhoods based on the traits of others. There are, of course, many interesting models involving endogenous transitions, in which cooperators can selectively form links with other cooperators and break links with defectors. In such models cooperation can flourish when structure transitions are rapid enough [39, 40, 41], for the simple reason that this endogenous dynamic establishes cooperative clusters. Such "form follows function" models are frequently aimed at answering the question: what kinds of networks arise from certain traits, and how do these networks serve the greater good? By contrast, our focus is not the coevolutionary dynamics of trait and structure, but on a different question altogether: what is the impact of exogenous structural changes on the evolution of behavior? This approach is more closely related to classical studies of network effects on cooperation: given a (dynamic) network, what behavioral traits evolve? Since exogenous structural changes do not provide any explicit advantage or disadvantage to cooperators relative to defectors, the resulting evolutionary dynamics of social traits are all the more intriguing. We have aimed for generality in framing our mathematical results, but a natural limitation of our study is the scope of networks we have analyzed, compared to the vast space of possible population structures and transitions among them. For this reason, even static structures are still an active topic of current research in evolutionary game theory. We have therefore chosen to consider a limited number of representative examples of dynamic networks, which showcase the interesting effects they can have on the evolution of cooperation. Areas for future investigation include the effects of fluctuating resources on cooperation, alternative evolutionary update rules, stronger selection, and environments that involve both endogenous and exogenous transitions. In fact, although we use cooperation as an example, our analysis is framed quite generally to allow the study of other traits on dynamic structures. 
To the best of our knowledge, our analytical findings constitute the first general results for behavioral evolution on dynamic networks, and we hope that they will be valuable tools in future work.

## Methods

### Analysis of weak selection

Here, we outline a derivation of the critical benefit-to-cost ratio \(\left(b/c\right)^{*}\) for selection to favor cooperation, based on an extension of the methods of McAvoy & Allen [18]. Complete mathematical details are provided in Supplementary Information. For \(i,j\in\mathcal{N}\), let \(w_{ij}^{[\beta]}\) be the weight of the edge between nodes \(i\) and \(j\) in network \(\beta\in\{1,\ldots,L\}\). We assume that the network is undirected, meaning \(w_{ij}^{[\beta]}=w_{ji}^{[\beta]}\) for all \(i,j\in\{1,\ldots,N\}\) and \(\beta\in\{1,\ldots,L\}\). If \(i\) and \(j\) share an edge, then they interact. The class of models we are interested in here involves social goods [52] in which, on network \(\beta\), an individual of type \(A\) at \(i\) pays a cost of \(C_{ij}^{[\beta]}\) to donate \(B_{ij}^{[\beta]}\) to the individual at \(j\). In state \(\left(\mathbf{x},\beta\right)\), the total payoff to the individual at \(i\) is \[u_{i}\left(\mathbf{x},\beta\right)=\sum_{j=1}^{N}\left(-x_{i}C_{ij}^{[\beta]}+x_{j}B_{ji}^{[\beta]}\right). \tag{4}\] This net payoff is converted to reproductive rate via the formula \(F_{k}\left(\mathbf{x},\beta\right)=e^{\delta u_{k}\left(\mathbf{x},\beta\right)}\). If the population structure is \(\beta\), then a node \(i\) in \(\beta\) is first selected uniformly-at-random to die. Subsequently, all neighboring nodes in \(\beta\) compete to produce an offspring to fill the vacancy at node \(i\). The probability that \(j\) replaces \(i\) in state \(\left(\mathbf{x},\beta\right)\) is given by Equation 2.

Let \(p_{ij}^{[\beta]}\coloneqq w_{ij}^{[\beta]}/\sum_{k=1}^{N}w_{ik}^{[\beta]}\) be the probability of moving from \(i\) to \(j\) in one step of a random walk on network \(\beta\). Under neutral drift, the probability \(\pi_{i}^{[\beta]}\) that, starting in network \(\beta\), \(i\) generates a lineage that takes over the population (i.e. the reproductive value of \(i\) in \(\beta\)) satisfies \[\pi_{i}^{[\beta]}=\frac{1}{N}\sum_{j=1}^{N}p_{ji}^{[\beta]}\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{j}^{[\gamma]}+\left(1-\frac{1}{N}\sum_{j=1}^{N}p_{ij}^{[\beta]}\right)\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}, \tag{5}\] subject to the constraint \(\sum_{i=1}^{N}\pi_{i}^{[\beta]}=1\). \(\pi\) is thus determined by a linear system of size \(O\left(LN\right)\).

For the initial state, we choose the network according to the stationary distribution of the network-transition chain, and a mutant appears uniformly-at-random within that network. There are two mutant-appearance distributions overall, one for \(C\) arising after the all-\(D\) state (denoted \(\mu_{C}\)) and one for \(D\) arising after the all-\(C\) state (denoted \(\mu_{D}\)). Associated to each \(\mu\in\{\mu_{C},\mu_{D}\}\) is a quantity \(\eta_{I}^{[\beta]}\left(\mu\right)\) related to the co-occurrence of a trait in \(\beta\) among the nodes in \(I\subseteq\{1,\ldots,N\}\), which is defined formally in §SI.1.5 of Supplementary Information.
For our purposes, we need \(\eta_{I}^{[\beta]}\left(\mu\right)\) only for \(I\) containing one or two nodes, in which case \(\eta_{I}^{[\beta]}\left(\mu_{C}\right)=\eta_{I}^{[\beta]}\left(\mu_{D}\right)\) and \[\eta_{ij}^{[\beta]}=\begin{cases}0&i=j,\\ \\ \dfrac{1}{N}v\left(\beta\right)+\sum_{\gamma=1}^{L}q_{\gamma\beta} \left(\dfrac{1}{N}\sum_{k=1}^{N}p_{ik}^{[\gamma]}\eta_{kj}^{[\gamma]}+\dfrac{ 1}{N}\sum_{k=1}^{N}p_{jk}^{[\gamma]}\eta_{ik}^{[\gamma]}+\left(1-\dfrac{2}{N} \right)\eta_{ij}^{[\gamma]}\right)&i\neq j.\end{cases} \tag{6}\] We refer the reader to Equation SI.32 in Supplementary Information for details. It turns out that a scaled version of \(\eta\), namely \(\tau_{ij}^{[\beta]}\coloneqq\eta_{ij}^{[\beta]}/v\left(\beta\right)\), allows for a more intuitive interpretation of the selection condition. Consider the time-reversed structure transition chain defined by \[\widetilde{q}_{\beta\gamma}\coloneqq\dfrac{v\left(\gamma\right)}{v\left(\beta \right)}q_{\gamma\beta}. \tag{7}\] Using this time-reversed chain in conjunction with Equation 6, we see that \[\tau_{ij}^{[\beta]}=\begin{cases}0&i=j,\\ \\ \dfrac{1}{N}+\sum_{\gamma=1}^{L}\widetilde{q}_{\beta\gamma}\left(\dfrac{1}{N} \sum_{k=1}^{N}p_{ik}^{[\gamma]}\tau_{kj}^{[\gamma]}+\dfrac{1}{N}\sum_{k=1}^{N }p_{jk}^{[\gamma]}\tau_{ik}^{[\gamma]}+\left(1-\dfrac{2}{N}\right)\tau_{ij}^{ [\gamma]}\right)&i\neq j.\end{cases} \tag{8}\] In the ancestral process, looking backward in time under neutral drift, \(N\tau_{ij}^{[\beta]}\) has the interpretation as the expected number of update steps until \(i\) and \(j\) coalesce. Equivalently, since one of \(N\) individuals is updated in each time step, \(\tau_{ij}^{[\beta]}\) can be seen as the mean number of generations needed for \(i\) and \(j\) to coalesce. If, conditioned on the population being in state \(\beta\), \(T^{[\beta]}\) is the mean time to reach the most recent common ancestor going backward in time, then the mean time that \(i\) and \(j\) spend identical by descent is \(T^{[\beta]}-\tau_{ij}^{[\beta]}\). Finding \(\tau\) for all structures and pairs of individuals involves solving a linear system of size \(O\left(LN^{2}\right)\). (Although \(T\) aids in the interpretation of \(\tau\) as determining identity by descent, it does not need to be calculated in order to understand the first-order effects of selection on fixation probability.) We now have all of the neutral quantities we need to state the selection condition. The final piece is the connection between the payoffs and the replacement probabilities under weak selection. 
A straightforward calculation gives \(e_{ji}\left(\mathbf{x},\beta\right)=\frac{1}{N}p_{ij}^{[\beta]}+\delta\sum_{k=1}^{N}c_{k}^{ji}\left(\beta\right)x_{k}+O\left(\delta^{2}\right)\), where \[c_{k}^{ji}\left(\beta\right)=\begin{cases}\dfrac{1}{N}p_{ij}^{[\beta]}\left(-\sum_{\ell=1}^{N}C_{j\ell}^{[\beta]}+B_{jj}^{[\beta]}+p_{ij}^{[\beta]}\sum_{\ell=1}^{N}C_{j\ell}^{[\beta]}-\sum_{\ell=1}^{N}p_{i\ell}^{[\beta]}B_{j\ell}^{[\beta]}\right)&k=j,\\ \\ \dfrac{1}{N}p_{ij}^{[\beta]}\left(B_{kj}^{[\beta]}+p_{ik}^{[\beta]}\sum_{\ell=1}^{N}C_{k\ell}^{[\beta]}-\sum_{\ell=1}^{N}p_{i\ell}^{[\beta]}B_{k\ell}^{[\beta]}\right)&k\neq j.\end{cases} \tag{9}\] Putting everything together using Equation SI.31 in Supplementary Information, we see that \[\frac{d}{d\delta}\bigg{|}_{\delta=0}\rho_{C}=\frac{1}{N}\sum_{i,j=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}\sum_{\ell=1}^{N}\begin{pmatrix}-\left(T^{[\beta]}-\tau_{ij}^{[\beta]}\right)C_{j\ell}^{[\beta]}\\ +\left(T^{[\beta]}-\tau_{j\ell}^{[\beta]}\right)B_{\ell j}^{[\beta]}\end{pmatrix}\] \[\quad-\frac{1}{N}\sum_{i,j,k=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}p_{ik}^{[\beta]}\sum_{\ell=1}^{N}\begin{pmatrix}-\left(T^{[\beta]}-\tau_{jk}^{[\beta]}\right)C_{k\ell}^{[\beta]}\\ +\left(T^{[\beta]}-\tau_{j\ell}^{[\beta]}\right)B_{\ell k}^{[\beta]}\end{pmatrix}. \tag{10}\] Moreover, an analogous calculation for \(D\) gives \(\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{D}=-\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{C}\), which means that the condition \(\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{C}>0\) is equivalent to \(\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{C}>\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{D}\). In the donation game, we have \(B_{ij}^{[\beta]}=w_{ij}^{[\beta]}b\) and \(C_{ij}^{[\beta]}=w_{ij}^{[\beta]}c\), and Equation 10 gives \[\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{C}>\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{D}\iff b\mu_{2}-c\nu_{2}>b\mu_{0}-c\nu_{0}, \tag{11}\] where \[\mu_{0}=\frac{1}{N}\sum_{i,j=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}\sum_{\ell=1}^{N}w_{\ell j}^{[\beta]}\tau_{j\ell}^{[\beta]}; \tag{12a}\] \[\nu_{0}=\frac{1}{N}\sum_{i,j=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}\sum_{\ell=1}^{N}w_{j\ell}^{[\beta]}\tau_{ij}^{[\beta]}; \tag{12b}\] \[\mu_{2}=\frac{1}{N}\sum_{i,j,k=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}p_{ik}^{[\beta]}\sum_{\ell=1}^{N}w_{\ell k}^{[\beta]}\tau_{j\ell}^{[\beta]}; \tag{12c}\] \[\nu_{2}=\frac{1}{N}\sum_{i,j,k=1}^{N}\sum_{\beta=1}^{L}\upsilon\left(\beta\right)\left(\sum_{\gamma=1}^{L}q_{\beta\gamma}\pi_{i}^{[\gamma]}\right)p_{ij}^{[\beta]}p_{ik}^{[\beta]}\sum_{\ell=1}^{N}w_{k\ell}^{[\beta]}\tau_{jk}^{[\beta]}. \tag{12d}\] The critical benefit-to-cost ratio is therefore \(\left(b/c\right)^{*}=\left(\nu_{2}-\nu_{0}\right)/\left(\mu_{2}-\mu_{0}\right)\). Note that, for simplicity, we have assumed that any node can be selected for death in a given network. In reality, this assumption might not hold because each individual network need not be connected, which can lead to isolated nodes.
If an isolated node is chosen for death, then the individual at this node cannot be immediately replaced by the offspring of a neighbor. All of our calculations can be modified to allow for only non-isolated nodes to be chosen for death, although in practice we do not need to do so in any of our examples. ### Specific examples We study the transition between two networks, with transition probabilities given by \[q_{\beta\gamma}=\begin{cases}1-p&\beta=1,\gamma=1;\\ p&\beta=1,\gamma=2;\\ q&\beta=2,\gamma=1;\\ 1-q&\beta=2,\gamma=2.\end{cases} \tag{13}\] The expected durations in networks \(1\) and \(2\) are \(q/\left(p+q\right)\) and \(p/\left(p+q\right)\), respectively. We study evolution on dynamic two-clique networks. The two-clique network is made up of a star clique and a complete clique, with the hubs connected (see Figure 2**a**). Let \(n\) and \(m\) denote the numbers of nodes in the star and complete cliques, respectively, so that \(n+m=N\). We denote by \(1,\ldots,n\) the nodes in the star clique and by \(n+1,\ldots n+m\) the nodes in the complete clique, where \(n\) is the hub of the star and \(n+m\) is the node of the complete clique connected to the hub of the star. The other network is obtained by swapping the star and complete cliques. The adjacency matrix for the first network satisfies \(w_{ij}^{[1]}=1\) only if _(i)_\(i=n\) and \(j<n\), or \(i<n\) and \(j=n\); _(ii)_\(i=n\) and \(j=n+m\), or \(i=n+m\) and \(j=n\); or _(iii)_\(i,j\geqslant n+1\) and \(i\neq j\). The adjacency for the second network satisfies \(w_{ij}^{[2]}=1\) only if _(i)_\(i=n+1\) and \(j>n+1\), or \(i>n+1\) and \(j=n+1\); _(ii)_\(i=n\) and \(j=n+m\), or \(i=n+m\) and \(j=n\); or _(iii)_\(i,j\leqslant n\) and \(i\neq j\). Using the results of the previous section, we can directly calculate \(\pi\) and \(\tau\) and calculate the critical benefit-to-cost ratio. Here, we provide explicit mathematical results for representative cases. Assuming \(p=q=1/\left(tN\right)\) and letting \(a\coloneqq n/\left(n+m\right)\), we find that \[\left(\frac{b}{c}\right)^{*}=\begin{cases}\frac{\left(2a^{2}-2a+1\right)t^{3} +\left(8a^{2}-8a+7\right)t^{2}+\left(8a^{2}-8a+15\right)t+2a^{2}-2a+10}{2a(1-a) \left(t^{2}+4t+3\right)}&N\to\infty,\\ \\ \frac{t^{3}+10t^{2}+26t+19}{t^{2}+4t+3}&N\to\infty,\,a=\frac{1}{2},\\ \\ \frac{20a^{2}-20a+33}{16a(1-a)}&N\to\infty\,,t=1,\\ \\ \frac{t(I+1)}{-2T^{2}+2I+1}N&N\to\infty,\,t/N=\bar{t},\,a=\frac{1}{2},\\ \\ \left(\begin{array}{c}210N^{16}-520N^{15}-1034N^{14}+1770N^{13}\\ +14028N^{12}-93440N^{11}+300848N^{10}-330944N^{9}\\ -663040N^{8}+2230528N^{7}-1096448N^{6}-4570112N^{5}\\ +10000384N^{4}-9265152N^{3}+425980N^{2}-786432N\\ \left(\begin{array}{c}30N^{16}-79N^{15}+225N^{14}+1756N^{13}\\ -15088N^{12}-13128N^{11}+247296N^{10}-365152N^{9}\\ -849344N^{8}+2987392N^{7}-1801984N^{6}-5024768N^{5}\\ +11302912N^{4}-9949184N^{3}+3940352N^{2}-327680N-131072\end{array}\right)\quad a =\frac{1}{2},\,t=1.\end{cases} \tag{14}\] In particular, when \(N\rightarrow\infty\) and \(a=1/2\), \(\left(b/c\right)^{*}\) is a monotonically increasing function of \(t\). 
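The quantities above can be assembled into a direct numerical computation of \(\left(b/c\right)^{*}\). The sketch below is our own illustration, not the authors' code: the function names, the least-squares handling of the normalization, and the small example are ours. It solves Equation 5 for the reproductive values, Equation 8 for the pairwise coalescence times, and then evaluates Equation 12 to obtain \(\left(b/c\right)^{*}=\left(\nu_{2}-\nu_{0}\right)/\left(\mu_{2}-\mu_{0}\right)\):

```python
import numpy as np

def two_clique_pair(n, m):
    """Compact version of the Figure 2a pair (see the earlier sketch for details)."""
    N = n + m
    W1, W2 = np.zeros((N, N)), np.zeros((N, N))
    W1[:n - 1, n - 1] = W1[n - 1, :n - 1] = 1.0
    W1[n:, n:] = 1.0
    W2[:n, :n] = 1.0
    W2[n, n + 1:] = W2[n + 1:, n] = 1.0
    for W in (W1, W2):
        np.fill_diagonal(W, 0.0)
        W[n - 1, N - 1] = W[N - 1, n - 1] = 1.0
    return W1, W2

def critical_ratio(Ws, Q):
    """(b/c)* for the donation game on a dynamic network, following Equations 5, 8 and 12.
    Ws is a sequence of L symmetric weight matrices (no isolated nodes); Q is the L x L
    irreducible transition matrix."""
    L, N = len(Ws), Ws[0].shape[0]
    p = np.array([W / W.sum(axis=1, keepdims=True) for W in Ws])      # random-walk steps

    eigenvalues, eigenvectors = np.linalg.eig(Q.T)                    # stationary distribution v
    v = np.real(eigenvectors[:, np.argmax(np.real(eigenvalues))])
    v /= v.sum()
    Qrev = (v[None, :] / v[:, None]) * Q.T                            # time-reversed chain (Equation 7)

    # Reproductive values pi (Equation 5): homogeneous linear system plus the
    # normalisation sum over all (i, beta) of pi equal to L, solved by least squares.
    A = np.eye(L * N)
    for b in range(L):
        for g in range(L):
            blk = Q[b, g] * (p[b].T / N + np.diag(1.0 - p[b].sum(axis=1) / N))
            A[b * N:(b + 1) * N, g * N:(g + 1) * N] -= blk
    A = np.vstack([A, np.ones((1, L * N))])
    rhs = np.zeros(L * N + 1)
    rhs[-1] = float(L)
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0].reshape(L, N)

    # Pairwise coalescence times tau (Equation 8), with tau[b, i, i] = 0.
    def flat(b, i, j):
        return (b * N + i) * N + j
    M = np.eye(L * N * N)
    rhs2 = np.zeros(L * N * N)
    for b in range(L):
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                r = flat(b, i, j)
                rhs2[r] = 1.0 / N
                for g in range(L):
                    for k in range(N):
                        M[r, flat(g, k, j)] -= Qrev[b, g] * p[g, i, k] / N
                        M[r, flat(g, i, k)] -= Qrev[b, g] * p[g, j, k] / N
                    M[r, flat(g, i, j)] -= Qrev[b, g] * (1.0 - 2.0 / N)
    tau = np.linalg.solve(M, rhs2).reshape(L, N, N)

    # Equation 12: mu_0, nu_0, mu_2, nu_2, and the critical ratio (nu2 - nu0)/(mu2 - mu0).
    mu0 = nu0 = mu2 = nu2 = 0.0
    for b in range(L):
        W, deg = Ws[b], Ws[b].sum(axis=1)
        S = tau[b] @ W                               # S[j, k] = sum_l tau_jl * w_lk
        weight = v[b] * (Q[b] @ pi) / N              # v(b) * sum_g q_bg * pi_i^[g] / N, per node i
        mu0 += weight @ (p[b] @ np.diagonal(S).copy())
        nu0 += weight @ ((p[b] * tau[b]) @ deg)
        mu2 += weight @ np.einsum('ij,jk,ik->i', p[b], S, p[b])
        nu2 += weight @ np.einsum('ij,jk,ik,k->i', p[b], tau[b], p[b], deg)
    return (nu2 - nu0) / (mu2 - mu0)

N, t = 20, 1.0
Ws = two_clique_pair(N // 2, N // 2)
q = 1.0 / (t * N)
Q = np.array([[1.0 - q, q], [q, 1.0 - q]])
print(critical_ratio(Ws, Q))   # compare with the N -> infinity value of 7 for a = 1/2, t = 1
```

The same routine applies to any list of weight matrices and any irreducible transition matrix \(Q\), not just the two-clique pair used in this example.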
We can compare this critical ratio to that of just a single network, which is the same for either network \(1\) or network \(2\) and satisfies \[\left(\frac{b}{c}\right)^{*}=\begin{cases}-\left(1-a\right)N&N\rightarrow\infty,\\ \\ \frac{-3N^{9}-40N^{8}-204N^{7}-848N^{6}-2464N^{5}+1920N^{4}+15872N^{3}-40960N^ {2}+24576N}{6N^{8}-64N^{7}-520N^{6}-1232N^{5}+3872N^{4}+6272N^{3}-24320N^{2}+22528 N+4096}&a=\frac{1}{2}.\end{cases} \tag{15}\] **Supplementary Figure 1: The cooperation-promoting effects of structure transitions as the sizes of the two cliques vary.** The dynamic network is illustrated in Figure 2**a**, with a fraction \(a\) (resp. \(1-a\)) of nodes in the top (resp. bottom) clique. The critical benefit-to-cost ratio, \(\left(b/c\right)^{*}\), is shown as a function of \(a\). The dots are the results of numerical calculations with \(N=10\),000 and the lines are analytical approximations for sufficiently large \(N\). The rescaled duration is \(t=1\). **Supplementary Figure 2: Cooperation-promoting effects of dynamic multi-clique networks.** We consider networks made up of eight cliques connected via hub nodes (see Figure 5**a**; panels **a** and **b** here) and via leaf nodes (see Figure 5**b**; panels **c** and **d** here). **a,c**, The critical ratio \(\left(b/c\right)^{*}\) as a function of population size \(N\), for the rescaled duration \(t=1\). **b,d**, The critical ratio \(\left(b/c\right)^{*}\) as a function of the rescaled duration \(t\), for \(N=200\). **Supplementary Figure 3: Cooperation-promoting effects of structure transitions among more than two networks, and when networks differ in a small fraction of connections.****a**, Structure transitions among three networks. Every network transitions to another network with probability \(1/\left(2tN\right)\) and remains unchanged otherwise. **b**, Structure transitions between multi-clique networks in which the two networks differ in only two cliques. We take \(N=150\) in **a** and \(N=64\) in **b**, and the rescaled duration is \(t=1\). **Supplementary Figure 4: Dynamic networks promote and accelerate the fixation of cooperators.** We consider the network with a star clique and a complete-graph clique with \(N=16\) and \(a=0.5\) (see Figure 2a). **a**, Fixation probability of cooperators as a function of the rescaled duration, \(t\), in network \(1\) and in the dynamic network. The dynamic network leads to the larger fixation probability of cooperators than in network \(1\). **b**, Conditional and unconditional fixation times as functions of the rescaled duration, \(t\). Both the conditional and unconditional times in the dynamic networks are smaller than in network \(1\).We take selection intensity \(\delta=0.1\). **Supplementary Information** ## SI.1 Modeling evolution on dynamic networks ### Assumptions, definitions, and notation We consider a population of \(N\) individuals (labeled \(\mathcal{N}=\{1,2,\ldots,N\}\)), residing at any point in time on one of \(L\) structures (labeled \(\mathcal{L}=\{1,2,\ldots,L\}\)). Implicitly, this means that each of these \(L\) structures is a network on \(N\) nodes, although each network need not be connected, and some nodes can be isolated. Each individual has type \(A\) or \(B\), and the state of population is tracked by a pair \((\mathbf{x},\beta)\in\{0,1\}^{\mathcal{N}}\times\mathcal{L}\), where \(x_{i}=1\) means \(i\) has type \(A\) and \(x_{i}=0\) means \(i\) has type \(B\). 
At each time step, a set of individuals to be replaced, \(R\subseteq\mathcal{N}\), is chosen, together with an offspring-to-parent map, \(\alpha:R\rightarrow\mathcal{N}\). Let \(p_{(R,\alpha)}\left(\mathbf{x},\beta\right)\) denote the probability of replacement event \((R,\alpha)\) in state \((\mathbf{x},\beta)\). Once \((R,\alpha)\) is chosen, the type configuration, \(\mathbf{x}\), is updated to \(\mathbf{y}\), where \(y_{i}=x_{\alpha(i)}\) if \(i\in R\) and \(y_{i}=x_{i}\) if \(i\not\in R\). This update can be specified more succinctly using an extended mapping \(\widetilde{\alpha}:\mathcal{N}\rightarrow\mathcal{N}\) defined by \(\widetilde{\alpha}\left(j\right)=\alpha\left(j\right)\) if \(j\in R\) and \(\widetilde{\alpha}\left(j\right)=j\) if \(j\not\in R\), which leads to the updated state \(\mathbf{x}_{\widetilde{\alpha}}\), where \((\mathbf{x}_{\widetilde{\alpha}})_{i}=x_{\widetilde{\alpha}(i)}\) for \(i\in\mathcal{N}\). The network, \(\beta\), is updated via a transition matrix, \(Q=\left(q_{\beta\gamma}\right)_{\beta,\gamma\in\mathcal{L}}\), where \(q_{\beta\gamma}\) is the probability of transitioning from network \(\beta\) to network \(\gamma\). An important feature of the model is that network transitions are independent of \(\mathbf{x}\); thus, the population structure is exogenous and not influenced by traits. We assume that \(Q\) is irreducible, which guarantees that it has a unique stationary distribution, \(\upsilon\). We assume that for each replacement event, \((R,\alpha)\), type configuration, \(\mathbf{x}\), and network, \(\beta\), the probability \(p_{(R,\alpha)}\left(\mathbf{x},\beta\right)\) is a smooth function of a selection intensity parameter, \(\delta\geqslant 0\), in a small neighborhood of \(\delta=0\). Moreover, when \(\delta=0\) ("neutral drift"), we assume that \(p_{(R,\alpha)}\left(\mathbf{x},\beta\right)\) is independent of \(\mathbf{x}\) (but it can depend on \(\beta\)). We denote by \(p_{(R,\alpha)}^{\circ}\left(\beta\right)\) the probability of choosing \((R,\alpha)\) under neutral drift. The chain defined by \(Q\) does not depend on the selection intensity. We also make the following assumption, which ensures that for every starting configuration and network, there exists at least one individual whose lineage can take over the population: **Fixation Axiom.** For all network structures \(\beta_{0}\in\mathcal{L}\), there exists a location \(i\in\mathcal{N}\), an integer \(m\geqslant 1\), and sequences of replacement events \(\left\{(R_{k},\alpha_{k})\right\}_{k=1}^{m}\) and networks \(\left\{\beta_{k}\right\}_{k=1}^{m-1}\) for which * \(p_{(R_{k},\alpha_{k})}\left(\mathbf{x},\beta_{k-1}\right)>0\) for every \(k\in\left\{1,\ldots,m\right\}\) and \(\mathbf{x}\in\left\{0,1\right\}^{\mathcal{N}}\); * \(q_{\beta_{k-1}\beta_{k}}>0\) for every \(k\in\left\{1,\ldots,m-1\right\}\); * \(i\in R_{k}\) for some \(k\in\left\{1,\ldots,m\right\}\); * \(\widetilde{\alpha}_{1}\circ\widetilde{\alpha}_{2}\circ\cdots\circ\widetilde{ \alpha}_{m}\left(j\right)=i\) for all locations \(j\in\mathcal{N}\). These conditions are similar to those used by Allen & McAvoy [13] and McAvoy & Allen [18], except here it is modified to account for dynamic networks. Informally, it guarantees that no individual lives forever and that the process eventually reaches a state in which all individuals are identical by descent. We note that here it does not require each network to be connected. 
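As a concrete reading of this notation (a minimal sketch of our own, with hypothetical names, rather than part of the formal development), applying a replacement event \(\left(R,\alpha\right)\) to a type configuration amounts to the following:

```python
from typing import Dict, List, Sequence, Set

def apply_replacement(x: Sequence[int], R: Set[int], alpha: Dict[int, int]) -> List[int]:
    """Update a type configuration x under a replacement event (R, alpha): each node i in R
    copies the type of its parent alpha[i]; every other node keeps its own type. This is the
    extended map alpha~ described above."""
    return [x[alpha[i]] if i in R else x[i] for i in range(len(x))]

# Example: node 2 is replaced by the offspring of node 0 (equivalently, imitates node 0).
x = [1, 0, 0, 1]                                # 1 = type A, 0 = type B
print(apply_replacement(x, {2}, {2: 0}))        # [1, 0, 1, 1]
```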
Since there is no mutation of traits, all individuals must have the same type when they are identical by descent. The configurations \(\mathbf{A}\coloneqq(1,1,\ldots,1)\) and \(\mathbf{B}\coloneqq(0,0,\ldots,0)\) are the only absorbing configurations. (Note that while the configuration of types cannot leave \(\mathbf{A}\) or \(\mathbf{B}\), the state itself, which includes the network structure, can still change.) We denote by \(\mathds{B}^{\mathcal{N}}\) the set of all configurations, \(\left\{0,1\right\}^{\mathcal{N}}\), and by \(\mathds{B}^{\mathcal{N}}_{\mathsf{T}}\) the set of all transient configurations, \(\left\{0,1\right\}^{\mathcal{N}}-\left\{\mathbf{A},\mathbf{B}\right\}\). From the Fixation Axiom, we see that given any starting configuration-network pair, \(\left(\mathbf{x},\beta\right)\in\mathds{B}^{\mathcal{N}}\times\mathcal{L}\), there is a well-defined probability, \(\rho_{A}\left(\mathbf{x},\beta\right)\) (resp. \(\rho_{B}\left(\mathbf{x},\beta\right)\)), that the population eventually reaches the monomorphic state \(\mathbf{A}\) (resp. \(\mathbf{B}\)). The behavior of these fixation probabilities (under weak selection, meaning \(\delta\ll 1\)) is the main focus of this study. We follow the workflow proposed by McAvoy & Allen [18] for analyzing mutation-free evolutionary dynamics under weak selection. We first study the assortment of traits under neutral drift (\(\delta=0\)). Subsequently, we link these findings to the game using a martingale perturbation argument. We avoid reproducing the entire derivation in [18]; instead, we highlight the main modifications to those arguments necessary to accommodate stochastic network transitions. #### Network-mediated reproductive value With the main assumptions in place, we now introduce some derived, demographic quantities that we will refer to throughout the analysis of the model. If the population is in state \(\left(\mathbf{x},\beta\right)\), then the marginal probability that \(i\) produces an offspring that replaces \(j\) in the next update is \[e_{ij}\left(\mathbf{x},\beta\right)\coloneqq\sum_{\begin{subarray}{c}\left(R,\alpha\right)\\ j\in R,\ \alpha\left(j\right)=i\end{subarray}}p_{\left(R,\alpha\right)}\left( \mathbf{x},\beta\right).\] (SI.1) The expected change in the abundance of \(A\) in state \(\left(\mathbf{x},\beta\right)\) can be expressed as \[\Delta\left(\mathbf{x},\beta\right) \coloneqq\sum_{i\in\mathcal{N}}x_{i}\sum_{j\in\mathcal{N}}e_{ij} \left(\mathbf{x},\beta\right)+\sum_{i\in\mathcal{N}}x_{i}\left(1-\sum_{j\in \mathcal{N}}e_{ji}\left(\mathbf{x},\beta\right)\right)-\sum_{i\in\mathcal{N}} x_{i}\] \[=\sum_{i,j\in\mathcal{N}}e_{ji}\left(\mathbf{x},\beta\right) \left(x_{j}-x_{i}\right).\] (SI.2) One inconvenient aspect of dealing with the true abundance of \(A\) is that it is generally not a martingale under neutral drift. This property is well-known even in models without dynamic structure [13] and it necessitates working with a weighted frequency instead. The notion of reproductive value, which can be (informally) interpreted as the expected contribution of an individual to future generations, turns out to give the proper weighting. For our purposes, we interpret the reproductive value of \(i\in\mathcal{N}\) as the probability that, under neutral drift, \(i\) generates a lineage that eventually takes over the population. Because our interest is in fixation probabilities in the first place, it is not surprising that such a quantity should appear. 
This quantity depends on the network structure, but it is independent of the type configuration due to the drift assumption. Formally, we define the reproductive value of \(i\) in network \(\beta\), denoted \(\pi_{i}^{\left[\beta\right]}\), to be the probability that under neutral drift and starting in structure \(\beta\), a mutant in node \(i\) eventually takes over the whole population. Let \(e_{ij}^{\circ}\left(\beta\right)\) denote the probability, that under neutral drift and in structure \(\beta\), individual \(i\) spreads her strategy to \(j\). A one-step analysis of the neutral Markov chain gives \[\pi_{i}^{[\beta]} =\sum_{j\in\mathcal{N}}e_{ij}^{\circ}\left(\beta\right)\sum_{ \gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{j}^{[\gamma]}+\left(1-\sum_{j\in \mathcal{N}}e_{ji}^{\circ}\left(\beta\right)\right)\sum_{\gamma\in\mathcal{L}} q_{\beta\gamma}\pi_{i}^{[\gamma]};\] (SI.3a) \[\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]} =1\] (SI.3b) for all \(i\in\mathcal{N}\) and \(\beta\in\mathcal{L}\). There is one point of subtlety in relation to reproductive value on static networks, which relates to the normalization condition \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}=1\) for all \(\beta\in\mathcal{L}\). The Fixation Axiom guarantees that there is a unique \(\pi\) satisfying Equation SI.3a up to a scalar multiple. In this case, for any fixed \(C\in\mathbbm{R}\), requiring \(\sum_{i\in\mathcal{N}}\sum_{\beta\in\mathcal{L}}\pi_{i}^{[\beta]}=C\) yields a unique solution to Equation SI.3a. Summing both sides of Equation SI.3a over \(i\in\mathcal{N}\) yields \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}=\sum_{\gamma\in\mathcal{L}}q_{\beta \gamma}\sum_{i\in\mathcal{N}}\pi_{i}^{[\gamma]}\). Since the chain \(\mathcal{Q}\) is irreducible, it follows that \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}\) is independent of \(\beta\in\mathcal{L}\), and thus it must be equal to \(C/L\). Therefore, asserting that \(\sum_{i\in\mathcal{N}}\sum_{\beta\in\mathcal{L}}\pi_{i}^{[\beta]}=L\) is equivalent to the requirement that \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}=1\) for all \(\beta\in\mathcal{L}\). As a result, \(\pi\), which we refer to as _network-mediated reproductive value_ due to its dependence on network transitions, is uniquely defined by Equation SI.3. Finally, the change in \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}\), the \(\pi\)-weighted abundance of \(A\), is \[\widehat{\Delta}\left(\mathbf{x},\beta\right) =\sum_{i\in\mathcal{N}}x_{i}\sum_{j\in\mathcal{N}}e_{ij}\left( \mathbf{x},\beta\right)\sum_{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{j}^{[ \gamma]}\] \[\quad+\sum_{i\in\mathcal{N}}x_{i}\left(1-\sum_{j\in\mathcal{N}}e_ {ji}\left(\mathbf{x},\beta\right)\right)\sum_{\gamma\in\mathcal{L}}q_{\beta \gamma}\pi_{i}^{[\gamma]}-\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}\] \[=\sum_{i,j\in\mathcal{N}}e_{ji}\left(\mathbf{x},\beta\right)\sum _{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{i}^{[\gamma]}\left(x_{j}-x_{i} \right)+\sum_{i\in\mathcal{N}}x_{i}\left(\sum_{\gamma\in\mathcal{L}}q_{\beta \gamma}\pi_{i}^{[\gamma]}-\pi_{i}^{[\beta]}\right).\] (SI.4) It follows from Equation SI.3 that, under neutral drift, \(\widehat{\Delta}^{\circ}\left(\mathbf{x},\beta\right)=0\), for all \(\mathbf{x}\in\mathbbm{B}^{\mathcal{N}}\) and \(\beta\in\mathcal{L}\). This property will play a key role in our subsequent weak-selection analysis of the process (Equation SI.13). #### A mutation-modified evolutionary process The process under consideration is mutation-free. However, following Ref. 
[18], in order to get an idea of the assortment of types prior to hitting an absorbing configuration, it is convenient to introduce an artificial mutation that makes the chain ergodic and gives it a unique stationary distribution. The idea is to choose a state \(\left(\mathbf{z},\lambda\right)\) with \(\mathbf{z}\in\mathbbm{B}_{\mathsf{T}}^{\mathcal{N}}\), and let mutations bring absorbing configurations into \(\left(\mathbf{z},\lambda\right)\) with some small probability \(u>0\). If \(P_{\left(\mathbf{x},\beta\right)\rightarrow\left(\mathbf{y},\gamma\right)}\) denotes the probability of transitioning from \(\left(\mathbf{x},\beta\right)\) to \(\left(\mathbf{y},\gamma\right)\) in the original (mutation-free) chain over the course of one time step, then the transition probabilities for the mutation-modified chain are given by \[P_{(\mathbf{x},\beta)\rightarrow(\mathbf{y},\gamma)}^{\circlearrowright(\mathbf{z}, \lambda)}=\begin{cases}u&\mathbf{x}\in\left\{\mathbf{A},\mathbf{B}\right\},\; \left(\mathbf{y},\gamma\right)=\left(\mathbf{z},\lambda\right),\\ \\ \left(1-u\right)P_{(\mathbf{x},\beta)\rightarrow(\mathbf{y},\gamma)}&\mathbf{x }\in\left\{\mathbf{A},\mathbf{B}\right\},\;\left(\mathbf{y},\gamma\right) \neq\left(\mathbf{z},\lambda\right),\\ \\ P_{(\mathbf{x},\beta)\rightarrow(\mathbf{y},\gamma)}&\mathbf{x}\not\in\left\{ \mathbf{A},\mathbf{B}\right\}.\end{cases}\] (SI.5) As a result of the Fixation Axiom, there is a unique stationary distribution, \(\pi_{\circlearrowright(\mathbf{z},\lambda)}\), such that \[\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{x},\beta\right) =\sum_{\gamma\in\mathcal{L}}\left(\pi_{\circlearrowright( \mathbf{z},\lambda)}^{\circ}\left(\mathbf{A},\gamma\right)P_{(\mathbf{A}, \gamma)\rightarrow(\mathbf{x},\beta)}^{\circlearrowright(\mathbf{z},\lambda)}+ \pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\mathbf{B},\gamma \right)P_{(\mathbf{B},\gamma)\rightarrow(\mathbf{x},\beta)}^{\circlearrowright( \mathbf{z},\lambda)}\] \[\quad+\sum_{\mathbf{y}\in\mathbf{B}_{\gamma}^{\mathcal{N}}}\sum_ {\gamma\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ} \left(\mathbf{y},\gamma\right)P_{(\mathbf{y},\gamma)\rightarrow(\mathbf{x}, \beta)}^{\circlearrowright(\mathbf{z},\lambda)}\] \[=\sum_{\gamma\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z}, \lambda)}^{\circ}\left(\mathbf{A},\gamma\right)\left(u\delta_{\mathbf{z}, \mathbf{x}}\delta_{\lambda,\beta}+\left(1-u\right)\delta_{\mathbf{A},\mathbf{x }}q_{\gamma\beta}\right)\] \[\quad+\sum_{\gamma\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\mathbf{B},\gamma\right)\left(u\delta_{\mathbf{z}, \mathbf{x}}\delta_{\lambda,\beta}+\left(1-u\right)\delta_{\mathbf{B},\mathbf{x }}q_{\gamma\beta}\right)\] \[\quad+\sum_{\mathbf{y}\in\mathbf{B}_{\gamma}^{\mathcal{N}}}\sum_ {\gamma\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ} \left(\mathbf{y},\gamma\right)P_{(\mathbf{y},\gamma)\rightarrow(\mathbf{x}, \beta)}\] (SI.6) for all \(\mathbf{x}\in\mathbb{B}\) and \(\beta\in\mathcal{L}\). 
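As a concrete illustration of Equation SI.5, the sketch below builds the mutation-modified kernel from a mutation-free kernel on the finite state space \(\{0,1\}^{\mathcal{N}}\times\mathcal{L}\). The explicit state enumeration and the placeholder kernel are illustrative only and are feasible only for very small populations.

```python
import numpy as np
from itertools import product

def mutation_modified_kernel(P, states, z_lambda, u):
    """Kernel of Equation SI.5: from a monomorphic configuration (A or B) the
    chain jumps to the chosen transient state (z, lambda) with probability u
    and otherwise follows the mutation-free kernel; transient rows are kept.
    A genuine mutation-free kernel puts no mass on (z, lambda) from A or B,
    so the modified rows still sum to one."""
    P_mod = P.copy()
    for s, (x, _) in enumerate(states):
        if all(v == x[0] for v in x):               # x = A or x = B
            P_mod[s] = (1.0 - u) * P[s]
            P_mod[s, z_lambda] = u
    return P_mod

# placeholder chain: 2 individuals, 2 networks (8 states in total)
states = [(x, b) for x in product([0, 1], repeat=2) for b in range(2)]
Q = np.array([[0.5, 0.5], [0.5, 0.5]])
P = np.zeros((len(states), len(states)))
for s, (x, beta) in enumerate(states):
    for t, (y, gamma) in enumerate(states):
        if all(v == x[0] for v in x):               # absorbing: traits frozen
            P[s, t] = Q[beta, gamma] if y == x else 0.0
        else:                                       # transient: arbitrary placeholder
            P[s, t] = 1.0
    P[s] /= P[s].sum()

P_mod = mutation_modified_kernel(P, states, states.index(((1, 0), 0)), u=1e-3)
print(np.allclose(P_mod.sum(axis=1), 1.0))          # still a stochastic matrix
```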
In one step after state \((\mathbf{x},\beta)\), the expected change in the \(\pi\)-weighted abundance of \(A\) is \[\widehat{\Delta}_{\circlearrowright(\mathbf{z},\lambda)}\left( \mathbf{x},\beta\right) =\begin{cases}-u\left(1-\sum_{i\in\mathcal{N}}\pi_{i}^{[\lambda]}z_{i} \right)&\mathbf{x}=\mathbf{A},\\ \\ u\sum_{i\in\mathcal{N}}\pi_{i}^{[\lambda]}z_{i}&\mathbf{x}=\mathbf{B},\\ \\ \widehat{\Delta}\left(\mathbf{x},\beta\right)&\mathbf{x}\not\in\left\{\mathbf{A },\mathbf{B}\right\}.\end{cases}\] (SI.7) Averaging this expected change over the stationary distribution of the modified chain gives \[0 =\mathbb{E}_{\circlearrowright(\mathbf{z},\lambda)}\left[ \widehat{\Delta}_{\circlearrowright(\mathbf{z},\lambda)}\right]\] \[=\mathbb{E}_{\circlearrowright(\mathbf{z},\lambda)}\left[ \widehat{\Delta}\right]-u\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright( \mathbf{z},\lambda)}\left(\mathbf{A},\beta\right)\left(1-\sum_{i\in\mathcal{N}} \pi_{i}^{[\lambda]}z_{i}\right)\] \[\quad+u\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z },\lambda)}\left(\mathbf{B},\beta\right)\sum_{i\in\mathcal{N}}\pi_{i}^{[ \lambda]}z_{i}.\] (SI.8) Owing to a result of Fudenberg & Imhof [53], we know that, in the low-mutation limit, \[\lim_{u\to 0}\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright( \mathbf{z},\lambda)}\left(\mathbf{A},\beta\right) =\rho_{A}\left(\mathbf{z},\lambda\right);\] (SI.9a) \[\lim_{u\to 0}\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright( \mathbf{z},\lambda)}\left(\mathbf{B},\beta\right) =\rho_{B}\left(\mathbf{z},\lambda\right).\] (SI.9b) Therefore, taking the derivative of both sides of Equation SI.8 with respect to \(u\) at \(u=0\) gives \[\rho_{A}\left(\mathbf{z},\lambda\right) =\sum_{i\in\mathcal{N}}\pi_{i}^{\left[\lambda\right]}z_{i}+\frac{d} {du}\Bigg{|}_{u=0}\mathbb{E}_{\circlearrowright\left(\mathbf{z},\lambda\right)} \left[\widehat{\Delta}\right].\] (SI.10) Let \(\left\langle\cdot\right\rangle_{\left(\mathbf{z},\lambda\right)}\coloneqq\frac {d}{du}\Big{|}_{u=0}\mathbb{E}_{\circlearrowright\left(\mathbf{z},\lambda \right)}\left[\cdot\right]\). By the argument given in Ref. [18] Corollary 1, we see that for any function \(\varphi:\mathds{B}^{\mathcal{N}}\times\mathcal{L}\rightarrow\mathbb{R}\) satisfying \(\varphi\left(\mathbf{A},\beta\right)=\varphi\left(\mathbf{B},\beta\right)=0\) for all \(\beta\in\mathcal{L}\), \[\left\langle\varphi\right\rangle_{\left(\mathbf{z},\lambda\right)} =\sum_{t=0}^{\infty}\mathbb{E}\left[\varphi\left(\mathbf{x}^{t}, \beta^{t}\right)\,|\,\left(\mathbf{x}^{0},\beta^{0}\right)=\left(\mathbf{z}, \lambda\right)\right],\] (SI.11) where the summation on the right-hand side converges absolutely. In particular, this equation holds for the expected change in the \(\pi\)-weighted abundance of \(A\), \(\varphi=\widehat{\Delta}\). 
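Equation SI.11 also suggests a direct Monte Carlo estimator of \(\left\langle\varphi\right\rangle_{\left(\mathbf{z},\lambda\right)}\): simulate the mutation-free chain from \((\mathbf{z},\lambda)\), accumulate \(\varphi\) along the trajectory until the configuration is absorbed (after which \(\varphi\) vanishes by assumption), and average over independent runs. The sketch below is illustrative and assumes a user-supplied one-step sampler `step(x, beta, rng)`, e.g., the `one_step` rule sketched earlier with its network arguments bound; termination of each run is guaranteed by the Fixation Axiom.

```python
import numpy as np

def angle_bracket_mc(phi, step, z, lam, n_runs=2000, seed=0):
    """Monte Carlo estimate of Equation SI.11:
        <phi>_(z, lam) = sum_t E[ phi(x^t, beta^t) | (x^0, beta^0) = (z, lam) ],
    for a function phi that vanishes on the absorbing configurations A and B.
    `step(x, beta, rng)` must sample one update of the mutation-free chain."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        x, beta = z.copy(), lam
        while x.any() and not x.all():      # configuration still transient
            total += phi(x, beta)
            x, beta = step(x, beta, rng)
    return total / n_runs
```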
Since we also have \[\frac{d}{d\delta}\Bigg{|}_{\delta=0}e_{ij}\left(\mathbf{x},\beta \right) =\sum_{I\subseteq\mathcal{N}}c_{I}^{ij}\left(\beta\right)\mathbf{x}_{I}\] (SI.12) for unique coefficients \(c_{I}^{ij}\left(\beta\right)\), where \(\mathbf{x}_{I}\coloneqq\prod_{i\in I}x_{i}\), it follows that \[\frac{d}{d\delta}\Bigg{|}_{\delta=0}\rho_{A}\left(\mathbf{z}, \lambda\right) =\left.\frac{d}{d\delta}\right|_{\delta=0}\left\langle\widehat{ \Delta}\right\rangle_{\left(\mathbf{z},\lambda\right)}\] \[=\left\langle\frac{d}{d\delta}\right|_{\delta=0}\widehat{\Delta} \right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ}\] \[=\left\langle\frac{d}{d\delta}\right|_{\delta=0}\sum_{i,j\in \mathcal{N}}e_{ji}\left(\mathbf{x},\beta\right)\sum_{\gamma\in\mathcal{L}}q_{ \beta\gamma}\pi_{i}^{\left[\gamma\right]}\left(\mathbf{x}_{j}-x_{i}\right) \right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ}\] \[=\sum_{i,j\in\mathcal{N}}\sum_{I\subseteq\mathcal{N}}\left\langle c _{I}^{ji}\left(\beta\right)\sum_{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{i}^ {\left[\gamma\right]}\left(\mathbf{x}_{I\cup\{j\}}-\mathbf{x}_{I\cup\{i\}} \right)\right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ},\] (SI.13) where the interchange of the two limits is possible due to Equation SI.11 and the absolute convergence of its summation. The second line of Equation SI.13 is where we use the fact that \(\widehat{\Delta}^{0}\left(\mathbf{x},\beta\right)=0\) for all \(\mathbf{x}\in\mathds{B}^{\mathcal{N}}\) and \(\beta\in\mathcal{L}\), highlighting the importance of network-mediated reproductive value. As a result of these calculations, what remains in order to understand the first-order effects of selection on a mutant type's fixation probability is an analysis of the neutral operator \(\left\langle\cdot\right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ}\). ### Analysis of neutral drift Throughout this section, we denote the stationary distribution of the structure-transition chain, \(Q\), by \(\upsilon\). We also suppress either the configuration or the network when we marginalize. For example, we write \(\pi_{\circlearrowright\left(\mathbf{z},\lambda\right)}\left(\mathbf{x}\right)\) for \(\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright\left(\mathbf{z},\lambda\right)} \left(\mathbf{x},\beta\right)\) and \(\pi_{\circlearrowright\left(\mathbf{z},\lambda\right)}\left(\beta\right)\) for \(\sum_{\mathbf{x}\in\mathds{B}^{\mathcal{N}}}\pi_{\circlearrowright\left( \mathbf{z},\lambda\right)}\left(\mathbf{x},\beta\right)\). In the limit of low mutation, we know \(\pi_{\circlearrowright\left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{A}\right)\) converges to \(\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\) and \(\pi_{\circlearrowright\left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{B}\right)\) converges to \(\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)\). 
The following lemma is a slightly stronger version of this result: **Lemma 1**.: For all networks \(\beta\in\mathcal{L}\), \[\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{A},\beta\right) =\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)v\left(\beta\right);\] (SI.14a) \[\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ} \left(\mathbf{B},\beta\right) =\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)v\left(\beta\right).\] (SI.14b) Proof.: Letting \(\mathbf{x}=\mathbf{A}\) in Equation SI.6 and taking \(u\to 0\) gives \[\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{A},\beta\right) =\sum_{\gamma\in\mathcal{L}}\left(\lim_{u\to 0}\pi_{\circlearrowright( \mathbf{z},\lambda)}^{\circ}\left(\mathbf{A},\gamma\right)\right)q_{\gamma \beta}.\] (SI.15) It follows that \(\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{A},\beta\right)\) is proportional to \(v\left(\beta\right)\), for all \(\beta\in\mathcal{L}\). The constant of proportionality must be \(\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\) due to the fact that \(\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{A}\right)=\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\). The result for \(\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \mathbf{B},\beta\right)\) follows from analogous reasoning and is omitted here. **Remark 1**.: Neutral fixation probabilities, \(\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\) and \(\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)\), can be calculated using reproductive values and the identities \(\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)=\sum_{i\in\mathcal{N}}\pi_{i} ^{[\lambda]}z_{i}\) and \(\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)=1-\sum_{i\in\mathcal{N}}\pi_{ i}^{[\lambda]}z_{i}\). The following is an immediate consequence of Lemma 1: **Corollary 1**.: \(\lim_{u\to 0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left( \beta\right)=v\left(\beta\right)\)_._ The next lemma establishes a recurrence for \(\frac{d}{du}\Big{|}_{u=0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ} \left(\beta\right)\): **Lemma 2**.: For every \(\beta\), we have \[\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ} \left(\beta\right) =\delta_{\beta,\lambda}-v\left(\beta\right)+\sum_{\gamma\in\mathcal{L }}\left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^ {\circ}\left(\gamma\right)\right)q_{\gamma\beta}.\] (SI.16) Proof.: Summing both sides of Equation SI.6 over all \(\mathbf{x}\in\mathds{B}^{\mathcal{N}}\) gives \[\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\beta\right) =u\sum_{\gamma\in\mathcal{L}}\left(\pi_{\circlearrowright( \mathbf{z},\lambda)}^{\circ}\left(\mathbf{A},\gamma\right)+\pi_{\circlearrowright (\mathbf{z},\lambda)}^{\circ}\left(\mathbf{B},\gamma\right)\right)\left(\delta _{\beta,\lambda}-q_{\gamma\beta}\right)\] \[\quad+\sum_{\gamma\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\gamma\right)q_{\gamma\beta}.\] (SI.17) Differentiating this equation with respect to \(u\) at \(u=0\) and using Lemma 1 yields Equation SI.16. Since the state of the process consists of both a configuration of traits and a network structure, the next result gives a recurrence for calculating a modified version of \(\left\langle\cdot\right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ}\), using conditioning on the network structure. 
In particular, for a function \(\varphi:\mathds{B}^{\mathcal{N}}\rightarrow\mathds{R}\) defined on _just_ configurations, we let \(\left\langle\varphi\mid\beta\right\rangle_{\left(\mathbf{z},\lambda\right)}^{ \circ}=\frac{d}{du}\Big{|}_{u=0}\mathds{E}_{\circlearrowright(\mathbf{z}, \lambda)}^{\circ}\left[\varphi\mid\beta\right]\). This quantity can be calculated as follows: **Proposition 1**.: For every function \(\varphi:\mathbb{B}^{\mathcal{N}}\rightarrow\mathbb{R}\), we have \[v\left(\beta\right)\left\langle\varphi\mid\beta\right\rangle_{ \left(\mathbf{z},\lambda\right)}^{\circ} =\delta_{\lambda,\beta}\left(\varphi\left(\mathbf{z}\right)- \rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\varphi\left(\mathbf{A}\right)- \rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)\varphi\left(\mathbf{B}\right)\right)\] \[\quad+\sum_{\gamma\in\mathcal{L}}v\left(\gamma\right)\sum_{ \left(R,\alpha\right)}p_{\left(R,\alpha\right)}^{\circ}\left(\gamma\right)q_{ \gamma\beta}\left\langle\varphi_{\widetilde{\alpha}}\mid\gamma\right\rangle_ {\left(\mathbf{z},\lambda\right)}^{\circ},\] (SI.18) where, for \(\widetilde{\alpha}:\mathcal{N}\rightarrow\mathcal{N}\), \(\varphi_{\widetilde{\alpha}}:\mathbb{B}^{\mathcal{N}}\rightarrow\mathbb{R}\) is the map defined by \(\varphi_{\widetilde{\alpha}}\left(\mathbf{x}\right)=\varphi\left(\mathbf{x} _{\widetilde{\alpha}}\right)\) for \(\mathbf{x}\in\mathbb{B}^{\mathcal{N}}\). Proof.: For \(\mathbf{x}\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}\), differentiating both sides of Equation SI.6 with respect to \(u\) at \(u=0\) gives \[\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left(\mathbf{z}, \lambda\right)}^{\circ}\left(\mathbf{x},\beta\right)\] \[=\delta_{\mathbf{z},\mathbf{x}}\delta_{\lambda,\beta}+\sum_{ \mathbf{y}\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}}\sum_{\gamma\in\mathcal{L}} \left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left(\mathbf{z}, \lambda\right)}^{\circ}\left(\mathbf{y},\gamma\right)\right)P_{\left(\mathbf{y},\gamma\right)\rightarrow\left(\mathbf{x},\beta\right)}^{\circ}\] \[=\delta_{\mathbf{z},\mathbf{x}}\delta_{\lambda,\beta}+\sum_{ \mathbf{y}\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}}\sum_{\gamma\in\mathcal{L}} \left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left(\mathbf{z}, \lambda\right)}^{\circ}\left(\mathbf{y},\gamma\right)\right)\sum_{\begin{subarray} {c}\left(R,\alpha\right)\\ \mathbf{y}_{\widetilde{\alpha}}=\mathbf{x}\end{subarray}}p_{\left(R,\alpha \right)}^{\circ}\left(\gamma\right)q_{\gamma\beta}.\] (SI.19) Doing so for \(\mathbf{x}\in\left\{\mathbf{A},\mathbf{B}\right\}\) gives \[\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left(\mathbf{z}, \lambda\right)}^{\circ}\left(\mathbf{A},\beta\right) =\sum_{\gamma\in\mathcal{L}}\left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{ \circlearrowright\left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{A}, \gamma\right)\right)q_{\gamma\beta}-\rho_{A}^{\circ}\left(\mathbf{z},\lambda \right)v\left(\beta\right)\] \[\quad+\sum_{\mathbf{y}\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}} \sum_{\gamma\in\mathcal{L}}\left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright \left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{y},\gamma\right)\right) \sum_{\begin{subarray}{c}\left(R,\alpha\right)\\ \mathbf{y}_{\widetilde{\alpha}}=\mathbf{A}\end{subarray}}p_{\left(R,\alpha \right)}^{\circ}\left(\gamma\right)q_{\gamma\beta};\] (SI.20a) \[\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left( \mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{B},\beta\right) =\sum_{\gamma\in\mathcal{L}}\left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{ 
\circlearrowright\left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{B}, \gamma\right)\right)q_{\gamma\beta}-\rho_{B}^{\circ}\left(\mathbf{z},\lambda \right)v\left(\beta\right)\] \[\quad+\sum_{\mathbf{y}\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}} \sum_{\gamma\in\mathcal{L}}\left(\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright \left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{y},\gamma\right)\right) \sum_{\begin{subarray}{c}\left(R,\alpha\right)\\ \mathbf{y}_{\widetilde{\alpha}}=\mathbf{B}\end{subarray}}p_{\left(R,\alpha \right)}^{\circ}\left(\gamma\right)q_{\gamma\beta}.\] (SI.20b) If \(\varphi:\mathbb{B}^{\mathcal{N}}\rightarrow\mathbb{R}\) is a fixed function, then, by definition, \[v\left(\beta\right)\left\langle\varphi\mid\beta\right\rangle_{\left(\mathbf{z},\lambda\right)}^{\circ}=\sum_{\mathbf{x}\in\mathbb{B}^{\mathcal{N}}}v \left(\beta\right)\frac{d}{du}\Bigg{|}_{u=0}\frac{\pi_{\circlearrowright\left( \mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{x},\beta\right)}{\pi_{ \circlearrowleft(\mathbf{z},\lambda\right)}^{\circ}\left(\beta\right)}\varphi \left(\mathbf{x}\right).\] (SI.21) Combining Lemma 2 and Eqs. SI.19-SI.20 with the fact that \[v\left(\beta\right)\frac{d}{du}\Bigg{|}_{u=0}\frac{\pi_{\circlearrowright \left(\mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{x},\beta\right)}{\pi_{ \circlearrowleft(\mathbf{z},\lambda\right)}^{\circ}\left(\beta\right)}\] \[\quad=\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left( \mathbf{z},\lambda\right)}^{\circ}\left(\mathbf{x},\beta\right)-\left(\delta_{ \mathbf{A},\mathbf{x}}\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)+\delta_{ \mathbf{B},\mathbf{x}}\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)\right) \frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright\left(\mathbf{z},\lambda \right)}^{\circ}\left(\beta\right)\] (SI.22) then gives Equation SI.18 after some tedious but straightforward simplifications. **Corollary 2**.: With \(I\subseteq\mathcal{N}\) and \(\eta_{I}^{[\beta]}\left(\mathbf{z},\lambda\right)\coloneqq v\left(\beta\right) \left\langle\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}-\mathbf{x}_{I}\mid \beta\right\rangle^{\circ}_{\left(\mathbf{z},\lambda\right)}\), we have \[\eta_{I}^{[\beta]}\left(\mathbf{z},\lambda\right)=\delta_{\lambda,\beta}\left( \sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}z_{i}-\mathbf{z}_{I}\right)+\sum_{ \gamma\in\mathcal{L}}\sum_{\left(R,\alpha\right)}p_{\left(R,\alpha\right)}^{ \circ}\left(\gamma\right)q_{\gamma\beta}\eta_{\widetilde{\alpha}\left(I \right)}^{\left[\gamma\right]}\left(\mathbf{z},\lambda\right).\] (SI.23) Subject to \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}\eta_{i}^{[\beta]}\left(\mathbf{z}, \lambda\right)=0\) for some \(\beta\in\mathcal{L}\), the solution to Equation SI.23 is unique. Proof.: Setting \(\varphi\left(\mathbf{x}\right)=\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}- \mathbf{x}_{I}\) in Proposition 1 gives Equation SI.23. Conversely, we know that \(\eta_{I}^{[\beta]}\left(\mathbf{z},\lambda\right)\coloneqq v\left(\beta \right)\left\langle\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}-\mathbf{x}_{I }\mid\beta\right\rangle^{\circ}_{\left(\mathbf{z},\lambda\right)}\) solves Equation SI.23, so that there is at least one solution to Equation SI.23. By the Fixation Axiom, the dimensionality of the space of solutions to Equation SI.23 is determined by that of the case \(\left|I\right|=1\). (The reason is that all subsets of size greater than one are transient under the ancestral process.) 
Specifically, the recurrence for \(I=\left\{i\right\}\) is \[\eta_{i}^{[\beta]}\left(\mathbf{z},\lambda\right) =\delta_{\lambda,\beta}\left(\rho_{A}^{\circ}\left(\mathbf{z}, \lambda\right)-z_{i}\right)+\sum_{\gamma\in\mathcal{L}}\sum_{j\in\mathcal{N}} e_{ji}^{\circ}\left(\gamma\right)q_{\gamma\beta}\eta_{j}^{\left[\gamma\right]} \left(\mathbf{z},\lambda\right)\] \[\quad+\sum_{\gamma\in\mathcal{L}}\left(1-\sum_{j\in\mathcal{N}} e_{ji}^{\circ}\left(\gamma\right)\right)q_{\gamma\beta}\eta_{i}^{\left[\gamma \right]}\left(\mathbf{z},\lambda\right).\] (SI.24) If \(\widetilde{\eta}\left(\mathbf{z},\lambda\right)\) is another solution to Equation SI.24, then \(\chi\left(\mathbf{z},\lambda\right)\coloneqq\eta\left(\mathbf{z},\lambda \right)-\widetilde{\eta}\left(\mathbf{z},\lambda\right)\) satisfies \[\chi_{i}^{[\beta]}\left(\mathbf{z},\lambda\right)=\sum_{\gamma\in\mathcal{L} }\sum_{j\in\mathcal{N}}e_{ji}^{\circ}\left(\gamma\right)q_{\gamma\beta}\chi_ {j}^{\left[\gamma\right]}\left(\mathbf{z},\lambda\right)+\sum_{\gamma\in \mathcal{L}}\left(1-\sum_{j\in\mathcal{N}}e_{ji}^{\circ}\left(\gamma\right) \right)q_{\gamma\beta}\chi_{i}^{\left[\gamma\right]}\left(\mathbf{z},\lambda \right).\] (SI.25) Noting that any constant function is a solution to Equation SI.25, and the space of solutions to this equation is one-dimensional as a result of the Fixation Axiom, there must exist \(K\in\mathbb{R}\) such that \(\eta\left(\mathbf{z},\lambda\right)=\widetilde{\eta}\left(\mathbf{z},\lambda \right)+K\). Since the solution \(\eta_{i}^{[\beta]}\left(\mathbf{z},\lambda\right)=v\left(\beta\right)\left\langle x _{i}\mid\beta\right\rangle^{\circ}_{\left(\mathbf{z},\lambda\right)}\) satisfies \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}\eta_{i}^{[\beta]}\left(\mathbf{z},\lambda \right)=0\) for all \(\beta\in\mathcal{L}\), it follows that \(K=0\) and \(\eta\left(\mathbf{z},\lambda\right)=\widetilde{\eta}\left(\mathbf{z},\lambda\right)\) whenever \(\widetilde{\eta}\left(\mathbf{z},\lambda\right)\) satisfies Equation SI.23 and \(\sum_{i\in\mathcal{N}}\eta_{i}^{[\beta]}\widetilde{\eta}_{i}^{[\beta]}\left( \mathbf{z},\lambda\right)=0\) for some \(\beta\in\mathcal{L}\). We note that \(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}\eta_{i}^{[\beta]}\left(\mathbf{z}, \lambda\right)=0\) for _some_\(\beta\in\mathcal{L}\) ensures that this equation holds for _all_\(\beta\in\mathcal{L}\). 
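Both Equation SI.3 and the singleton recurrence, Equation SI.24, are finite linear systems once the neutral marginals \(e^{\circ}_{ij}(\beta)\) and the transition matrix \(Q\) are specified, so they can be solved with dense linear algebra. The sketch below does this for the network-mediated reproductive values; the placeholder inputs are illustrative, and the same pattern (with the inhomogeneous term of Equation SI.24 and the constraint of Corollary 2) applies to \(\eta_{i}^{[\beta]}\).

```python
import numpy as np

def reproductive_values(e0, Q):
    """Solve Equation SI.3 for the network-mediated reproductive values.

    e0 : array of shape (L, n, n); e0[b, i, j] = e^o_{ij}(beta = b), the neutral
         probability that i's offspring replaces j in network b
    Q  : (L, L) network-transition matrix

    Returns pi with shape (L, n); the normalization sum_{i, beta} pi = L then
    gives pi[b].sum() == 1 for every network b, as argued in the text.
    """
    L, n, _ = e0.shape
    idx = lambda i, b: b * n + i                     # flatten (i, beta) into one index
    M = np.zeros((L * n, L * n))
    for b in range(L):
        stay = 1.0 - e0[b].sum(axis=0)               # 1 - sum_j e^o_{ji}(b), per node i
        for g in range(L):
            for i in range(n):
                for j in range(n):
                    M[idx(i, b), idx(j, g)] += Q[b, g] * e0[b, i, j]
                M[idx(i, b), idx(i, g)] += Q[b, g] * stay[i]
    A = np.vstack([np.eye(L * n) - M, np.ones((1, L * n))])
    rhs = np.zeros(L * n + 1); rhs[-1] = L
    pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pi.reshape(L, n)

# placeholder example: 3 nodes, 2 networks, random single-replacement marginals
rng = np.random.default_rng(2)
e0 = rng.random((2, 3, 3))
e0 /= e0.sum(axis=(1, 2), keepdims=True)             # at most one replacement per step
Q = np.array([[0.2, 0.8], [0.6, 0.4]])
print(reproductive_values(e0, Q))
```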
#### S1.1.5 Calculating first-order effects of selection on fixation probabilities ##### S1.1.5.1 Fixed initial configurations Note that for functions \(\varphi:\mathbbm{B}^{\mathcal{N}}\rightarrow\mathbb{R}\) and \(\phi:\mathcal{L}\rightarrow\mathbb{R}\), we have \[\left\langle\phi\varphi\right\rangle_{(\mathbf{z},\lambda)}^{\circ} =\left.\frac{d}{du}\right|_{u=0}\sum_{\beta\in\mathcal{L}}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\beta\right)\phi\left(\beta\right)\mathbb{E}_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left[\varphi\mid\beta\right]\] \[=\sum_{\beta\in\mathcal{L}}v\left(\beta\right)\phi\left(\beta\right)\left\langle\varphi\mid\beta\right\rangle_{(\mathbf{z},\lambda)}^{\circ}\] \[\quad+\left(\rho_{A}^{\circ}\left(\mathbf{z},\lambda\right)\varphi\left(\mathbf{A}\right)+\rho_{B}^{\circ}\left(\mathbf{z},\lambda\right)\varphi\left(\mathbf{B}\right)\right)\sum_{\beta\in\mathcal{L}}\phi\left(\beta\right)\frac{d}{du}\Bigg{|}_{u=0}\pi_{\circlearrowright(\mathbf{z},\lambda)}^{\circ}\left(\beta\right).\] (SI.26) Therefore, we may rewrite Equation SI.13 as \[\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{A}\left(\mathbf{z},\lambda\right)\] \[\quad=\sum_{i,j\in\mathcal{N}}\sum_{I\subseteq\mathcal{N}}\sum_{\beta\in\mathcal{L}}v\left(\beta\right)c_{I}^{ji}\left(\beta\right)\sum_{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{i}^{[\gamma]}\left(\left\langle\mathbf{x}_{I\cup\{j\}}\mid\beta\right\rangle_{(\mathbf{z},\lambda)}^{\circ}-\left\langle\mathbf{x}_{I\cup\{i\}}\mid\beta\right\rangle_{(\mathbf{z},\lambda)}^{\circ}\right).\] (SI.27) Defining \(\eta_{I}^{[\beta]}\left(\mathbf{z},\lambda\right):=v\left(\beta\right)\left\langle\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}x_{i}-\mathbf{x}_{I}\mid\beta\right\rangle_{(\mathbf{z},\lambda)}^{\circ}\), we then have \[\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{A}\left(\mathbf{z},\lambda\right)=\sum_{i,j\in\mathcal{N}}\sum_{I\subseteq\mathcal{N}}\sum_{\beta\in\mathcal{L}}c_{I}^{ji}\left(\beta\right)\sum_{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{i}^{[\gamma]}\left(\eta_{I\cup\{i\}}^{[\beta]}\left(\mathbf{z},\lambda\right)-\eta_{I\cup\{j\}}^{[\beta]}\left(\mathbf{z},\lambda\right)\right),\] (SI.28) where, by Corollary 2, the terms \(\eta\) are uniquely determined by \[\eta_{I}^{[\beta]}\left(\mathbf{z},\lambda\right) =\delta_{\lambda,\beta}\left(\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}z_{i}-\mathbf{z}_{I}\right)+\sum_{\gamma\in\mathcal{L}}\sum_{\left(R,\alpha\right)}p_{\left(R,\alpha\right)}^{\circ}\left(\gamma\right)q_{\gamma\beta}\eta_{\widetilde{\alpha}\left(I\right)}^{[\gamma]}\left(\mathbf{z},\lambda\right);\] (SI.29a) \[\sum_{i\in\mathcal{N}}\pi_{i}^{[\beta]}\eta_{i}^{[\beta]}\left(\mathbf{z},\lambda\right) =0\text{ for some }\beta\in\mathcal{L}.\] (SI.29b) ##### S1.1.5.2 Probabilistic initial configurations Up until this point, we have focused on fixation probabilities given some fixed initial state, \((\mathbf{z},\lambda)\in\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}\times\mathcal{L}\). We now allow mutant types to arise stochastically and consider mean fixation probabilities for both types. 
For two distributions, \(\mu_{A},\mu_{B}\in\Delta\left(\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}\times \mathcal{L}\right)\), we let \[\rho_{A}\left(\mu_{A}\right) \coloneqq\mathbb{E}_{\left(\mathbf{z},\lambda\right)\sim\mu_{A}} \left[\rho_{A}\left(\mathbf{z},\lambda\right)\right];\] (SI.30a) \[\rho_{B}\left(\mu_{B}\right) \coloneqq\mathbb{E}_{\left(\mathbf{z},\lambda\right)\sim\mu_{B}} \left[\rho_{B}\left(\mathbf{z},\lambda\right)\right].\] (SI.30b) By the results of SSI.5.1, for any \(\mu\in\Delta\left(\mathbb{B}_{\mathsf{T}}^{\mathcal{N}}\times\mathcal{L}\right)\), we have \[\left.\frac{d}{d\delta}\right|_{\delta=0}\rho_{A}\left(\mu\right)=\sum_{i,j \in\mathcal{N}}\sum_{I\subseteq\mathcal{N}}\sum_{\beta\in\mathcal{L}}c_{I}^{ ji}\left(\beta\right)\sum_{\gamma\in\mathcal{L}}q_{\beta\gamma}\pi_{i}^{\left[ \gamma\right]}\left(\eta_{I\cup\{i\}}^{\left[\beta\right]}\left(\mu\right)- \eta_{I\cup\{j\}}^{\left[\beta\right]}\left(\mu\right)\right),\] (SI.31) where \[\eta_{I}^{\left[\beta\right]}\left(\mu\right) =\mathbb{E}_{\left(\mathbf{z},\lambda\right)\sim\mu}\left[\delta_ {\lambda,\beta}\left(\sum_{i\in\mathcal{N}}\pi_{i}^{\left[\beta\right]}z_{i}- \mathbf{z}_{I}\right)\right]\] \[\quad+\sum_{\gamma\in\mathcal{L}}\sum_{\left(R,\alpha\right)}p_{ \left(R,\alpha\right)}^{\circ}\left(\gamma\right)q_{\gamma\beta}\eta_{\widetilde {\alpha}\left(I\right)}^{\left[\gamma\right]}\left(\mu\right);\] (SI.32a) \[\sum_{i\in\mathcal{N}}\pi_{i}^{\left[\beta\right]}\eta_{i}^{ \left[\beta\right]}\left(\mu\right) =0\text{ for some }\beta\in\mathcal{L}.\] (SI.32b) Letting \(\mu=\mu_{A}\) gives the mean fixation probability for type \(A\), while the mean fixation probability for type \(B\) can be calculated analogously using the equation \(\rho_{B}\left(\mu_{B}\right)=1-\rho_{A}\left(\mu_{B}\right)\). Although the main focus of our study is on network-transition chains that are both aperiodic and irreducible, we do also consider periodic structures. Suppose that among the \(L\) networks in \(\mathcal{L}\), network \(\beta\) transitions deterministically to network \(\beta+1\) for \(\beta\in\{1,\ldots,L-1\}\), and network \(L\) transitions deterministically to network \(1\). We can then write Equation SI.32 more explicitly as \[\eta_{I}^{\left[1\right]}\left(\mu\right) =\mathbb{E}_{\left(\mathbf{z},\lambda\right)\sim\mu}\left[\delta_ {\lambda,1}\left(\sum_{i\in\mathcal{N}}\pi_{i}^{\left[1\right]}z_{i}-\mathbf{ z}_{I}\right)\right]+\sum_{\left(R,\alpha\right)}p_{\left(R,\alpha\right)}^{\circ} \left(L\right)\eta_{\widetilde{\alpha}\left(I\right)}^{\left[L\right]}\left( \mu\right);\] (SI.33a) \[\eta_{I}^{\left[\beta\right]}\left(\mu\right) =\mathbb{E}_{\left(\mathbf{z},\lambda\right)\sim\mu}\left[\delta_ {\lambda,\beta}\left(\sum_{i\in\mathcal{N}}\pi_{i}^{\left[\beta\right]}z_{i}- \mathbf{z}_{I}\right)\right]\] \[\quad+\sum_{\left(R,\alpha\right)}p_{\left(R,\alpha\right)}^{ \circ}\left(\beta-1\right)\eta_{\widetilde{\alpha}\left(I\right)}^{\left[\beta- 1\right]}\left(\mu\right);\quad\left(1<\beta\leqslant L\right)\] (SI.33b) \[\sum_{i\in\mathcal{N}}\pi_{i}^{\left[\beta\right]}\eta_{i}^{ \left[\beta\right]}\left(\mu\right) =0\text{ for some }\beta\in\mathcal{L}.\] (SI.33c)
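Once the reproductive values, the coefficients \(c_{I}^{ji}(\beta)\) of Equation SI.12, and the quantities \(\eta\) of Equation SI.29 (or SI.32) have been computed for a concrete model, assembling the first-order effect of selection is a direct summation. The sketch below performs only this final assembly step of Equation SI.28/SI.31; all inputs are model-specific, and the dummy values shown are purely illustrative.

```python
import numpy as np

def weak_selection_derivative(c, eta, pi, Q):
    """Assemble Equation SI.28 / SI.31 once its ingredients are known.

    c   : dict mapping (j, i, beta, I) -> c_I^{ji}(beta), with I a frozenset
    eta : dict mapping (beta, I) -> eta_I^{[beta]}, solved from Equation SI.29
    pi  : array pi[gamma, i] of network-mediated reproductive values
    Q   : (L, L) network-transition matrix

    Returns the derivative of rho_A with respect to delta at delta = 0.
    """
    L = Q.shape[0]
    total = 0.0
    for (j, i, beta, I), c_val in c.items():
        for gamma in range(L):
            total += (c_val * Q[beta, gamma] * pi[gamma, i]
                      * (eta[(beta, I | {i})] - eta[(beta, I | {j})]))
    return total

# dummy usage with made-up ingredients (2 nodes, 1 network)
pi = np.array([[0.5, 0.5]])
Q = np.array([[1.0]])
c = {(0, 1, 0, frozenset()): 0.1}          # a single nonzero coefficient
eta = {(0, frozenset({0})): 0.2, (0, frozenset({1})): -0.2}
print(weak_selection_derivative(c, eta, pi, Q))
```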
2302.08983
Universal spectral correlations in interacting chaotic few-body quantum systems
The emergence of random matrix spectral correlations in interacting quantum systems is a defining feature of quantum chaos. We study such correlations in terms of the spectral form factor in interacting chaotic few- and many-body systems, modeled by suitable random-matrix ensembles, and obtain exact results for large Hilbert space dimensions. The transition of the spectral form factor from the non-interacting to the strongly interacting case can be described as a simple combination of these two limiting cases, which we confirm by extensive numerical studies in few-body systems. This transition is universally governed by a single scaling parameter. Moreover, our approach accurately captures spectral correlations in actual physical system, which we demonstrate for coupled kicked rotors.
Felix Fritzsch, Maximilian F. I. Kieler
2023-02-17T16:37:08Z
http://arxiv.org/abs/2302.08983v3
# Universal spectral correlations in interacting chaotic few-body quantum systems ###### Abstract The emergence of random matrix spectral correlations in interacting quantum systems is a defining feature of quantum chaos. We study such correlations in terms of the spectral form factor in interacting chaotic few- and many-body systems, modeled by suitable random-matrix ensembles, and obtain exact results for large Hilbert space dimensions. The transition of the spectral form factor from the non-interacting to the strongly interacting case can be described as a simple combination of these two limiting cases, which we confirm by extensive numerical studies in few-body systems. This transition is universally governed by a single scaling parameter. Moreover, our approach accurately captures spectral correlations in actual physical system, which we demonstrate for coupled kicked rotors. The quantum chaos conjecture [1; 2; 3] predicts statistical properties of energy levels in quantum systems whose classical limit is chaotic to follow random matrix theory [4; 5; 6]. Using semiclassical periodic orbit theory this connection has been shown to follow from only a few basic properties of the chaotic classical dynamics [7; 8; 9; 10]. Subsequently random-matrix like spectral statistics has become one of the most widely used definitions of quantum chaos even in the absence of a classical limit. A distinguished feature of the spectrum of such chaotic quantum systems and the corresponding random matrix ensembles is the presence of correlations between energy levels in contrast to the uncorrelated Poissonian spectrum of integrable [11] or (many-body [12]) localized systems [13; 14]. These correlations are conveniently detected by the spectral form factor (SFF) [15] which has received growing attention in recent years in, e.g., high energy physics [16; 17; 18; 19] as well as condensed matter and many-body systems [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Recently, the SFF has been shown to follow random matrix theory in various solvable instances of chaotic many-body systems involving both homogeneous [20; 21; 22] and random quantum circuit models [25; 26; 27; 28; 29]. The latter constitute random matrix ensembles which incorporate the (spatial) locality of typical many-body systems. We study the SFF in a similar random matrix model, which is built from large independent chaotic subsystems subject to an all-to-all, and hence spatially non-local, interaction of tunable strength. For the bipartite case of just two subsystems our setting reduces to the so-called random matrix transition ensembles (RMTE) introduced in Ref. [38]. We therefore refer to our setting as the extended RMTE henceforth. The bipartite RMTE models a universal transition from an uncorrelated Poissonian spectrum with exponentially distributed level spacings in the non-interacting case to a correlated spectrum whose spacings follow Wigner Dyson statistics at strong interaction [38]. This universal transition has been observed subsequently also in the average eigenstate entanglement [39; 40; 41] and in the entanglement generation after a quench [42; 43]. In the extended RMTE we describe the full transition of the SFF from a simple product structure in the non-interacting case towards the full random matrix result at strong interaction as a simple convex combination of these two extreme cases. 
This transition is universally governed by a single scaling parameter, which combines the dependence of the SFF on all parameters of the system, namely the interaction strength as well as the size and the number of subsystems, into a single number. The SFF signals an intricate interplay between different time (and associated energy) scales, such as the Heisenberg time of the subsystems and the full system, and most notably a non-trivial Thouless time. We confirm our prediction by extensive numerical studies in few-body systems but expect our results to hold even in the many-body case. For the minimal setting of a bipartite system, we obtain a similar description also for the moments of the spectral form factor, which characterize its distribution and indicate correlations between multiple levels. Moreover, we go beyond random matrix models and demonstrate that the above results equally well apply in quantized dynamical systems, i.e., a pair of coupled kicked rotors. Ultimately, we complement our results by a perturbative treatment of the interaction. _Extended random matrix transition ensemble._--To model interacting few- or many-body systems, we generalize the RMTE introduced in Ref. [38] by allowing for an arbitrary number \(L\) of subsystems. We consider Floquet systems which evolve in discrete time with unitary time evolution operator given by \[\mathcal{U}=\mathcal{U}_{\mathrm{c}}(\epsilon)\left(\mathcal{U}_{1}\otimes\mathcal{U}_{2}\otimes\cdots\otimes\mathcal{U}_{L}\right). \tag{1}\] Each of the \(\mathcal{U}_{i}\) is an independent, \(N\)-dimensional Haar random unitary drawn from the circular unitary ensemble, \(\mathrm{CUE}(N)\), and models a chaotic subsystem. By considering the CUE we restrict ourselves to systems without anti-unitary symmetries, e.g., time-reversal invariance. The interaction is introduced by the \(N^{L}\)-dimensional diagonal unitary matrix \(\mathcal{U}_{\mathrm{c}}(\epsilon)\), where \(\epsilon\) controls the strength of the interaction with \(\epsilon=0\) corresponding to the non-interacting situation, \(\mathcal{U}_{\mathrm{c}}(0)=\mathds{1}\). In the canonical product basis the interaction reads \[\left[\mathcal{U}_{\mathrm{c}}(\epsilon)\right]_{j_{1}\cdots j_{L}}^{i_{1}\cdots i_{L}}=\delta_{j_{1}}^{i_{1}}\cdots\delta_{j_{L}}^{i_{L}}\exp\left(\mathrm{i}\epsilon\xi_{i_{1}\cdots i_{L}}\right). \tag{2}\] Here the phases \(\xi_{i_{1}\cdots i_{L}}\) are i.i.d. random variables with zero mean and variance \(\sigma^{2}\), which gives rise to an effective interaction strength \(\sigma\epsilon\). For numerical simulations we use phases uniformly distributed in \([-\pi,\pi]\). Note that imposing a spatial locality structure on \(\mathcal{U}_{\mathrm{c}}\) recovers the random-phase circuit of Ref. [26]. _Spectral form factor._--The SFF indicates correlations between the eigenphases (or quasi-energies) \(\phi_{i}\) defined by the eigenvalue equation \(\mathcal{U}\left|i\right>=\exp\left(\mathrm{i}\phi_{i}\right)\left|i\right>\). For chaotic Floquet systems the spectral density is a constant, but its two-point correlation function \(r_{2}(\omega)\) yields the probability of finding two eigenphases with a distance \(\omega\) and hence encodes spectral correlations. The SFF \(K(t)\) is then given by the Fourier transform of the connected part of \(r_{2}(\omega)\) and depends on a time variable \(t\) conjugate to the quasi-energy difference \(\omega\). 
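For concreteness, a single realization of Eqs. (1) and (2) can be sampled as in the following sketch (small \(N\) and \(L\) only; the QR-based Haar sampling is a standard construction and not specific to Ref. [38]). Ensemble averages such as the SFF are then obtained by repeating this construction over many realizations.

```python
import numpy as np

def haar_unitary(N, rng):
    """Sample an N x N CUE (Haar-random) unitary via a QR decomposition."""
    z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # fix column phases -> Haar measure

def rmte_unitary(N, L, eps, rng):
    """One realization of Eq. (1): U = U_c(eps) (U_1 x ... x U_L), with the
    diagonal coupling of Eq. (2) and phases xi drawn uniformly from [-pi, pi]."""
    U = haar_unitary(N, rng)
    for _ in range(L - 1):
        U = np.kron(U, haar_unitary(N, rng))
    xi = rng.uniform(-np.pi, np.pi, size=N ** L)
    return np.exp(1j * eps * xi)[:, None] * U   # U_c acts as a diagonal from the left

rng = np.random.default_rng(0)
U = rmte_unitary(N=8, L=2, eps=0.3, rng=rng)
print(np.allclose(U.conj().T @ U, np.eye(8 ** 2)))   # unitarity check
```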
The SFF has a simple representation in terms of the time evolution operator \(\mathcal{U}\) as \[K(t)=\left<\left|\mathrm{tr}\left(\mathcal{U}^{t}\right)\right|^{2}\right>-N^{ 2L}\delta_{t}^{0}. \tag{3}\] Here, the brackets denote an ensemble average over the subsystems, i.e., over the \(L\) independent \(\mathrm{CUE}(N)\), as well as an average over the random phases \(\xi_{i_{1}\cdots i_{L}}\). For numerical simulation we average over at least \(1000\) realizations. This averaging procedure is necessary as the SFF is not self averaging [44] and fluctuates wildly for an individual realization. It is instructive to begin with the SFF for a single CUE of dimension \(M\), for which the SFF takes the simple form \(K_{M}(t)=\min\{t,M\}\). The initial linear ramp \(\sim t\) indicates correlations in the spectrum and substantially differs from the constant SFF of an uncorrelated Poissonian spectrum, characteristic for, e.g., integrable systems [11]. Hence a linear ramp of the SFF indicates quantum chaos and ergodicity. In interacting physical models the linear ramp is usually approached after a non-universal time scale known as Thouless time \(t_{\mathrm{Th}}\). It sets the energy scale \(\sim 1/t_{\mathrm{Th}}\) below which the system exhibits random matrix like spectral correlations and hence indicates the onset of universal dynamics. In contrast for non-interacting systems modeled by the tensor product of independent \(\mathrm{CUE}(N)\) matrices, e.g., the extended RMTE at \(\epsilon=0\), the SFF factorizes into a product \(K(t)=\left[K_{N}(t)\right]^{L}\). In the extended RMTE we expect a transition from this factorized SFF to the full \(\mathrm{CUE}(N^{L})\) SFF for increasing interaction strength. In the following we fully characterize this transition and demonstrate that it depends on a single scaling parameter only. To this end we adapt the large \(N\) expansion of the SFF for the random-phase circuit of Ref. [26] based on the Weingarten calculus for integration over unitary groups [45; 46] to the extended RMTE. The average over the subsystems proceeds in the same fashion whereas the average over the phases simplifies; see Ref. [47] for a detailed derivation. Ultimately, as our first main result, we represent the SFF in the simple form of a time-dependent convex combination of the two extreme cases discussed above. It is given by \[K(t)=|\chi(\epsilon)|^{2t}K_{N}(t)^{L}+\left(1-|\chi(\epsilon)|^{2t}\right)K_{ N^{L}}(t), \tag{4}\] where \(\chi(\epsilon)=\langle\exp\left(\mathrm{i}\epsilon\xi\right)\rangle_{\xi}\) is the characteristic function of the distribution of the phases \(\xi_{i_{1}\cdots i_{L}}\). This result is exact in the limit \(N\to\infty\). For finite \(N\) it provides the leading contribution (in \(1/N\)) for times \(t<t_{\mathrm{SH}}=N\), i.e., smaller than the subsystems' Heisenberg time \(t_{\mathrm{SH}}\) set by the mean level spacing \(2\pi/N\) of the subsystems. For this times \(K_{N}(t)=K_{N^{L}}(t)=t\). It has a natural extension to larger times by including the plateaus of \(K_{N}(t)=N\) for \(t>t_{\mathrm{SH}}\) and \(K_{N^{L}}(t)=N^{L}\) for times \(t>t_{\mathrm{H}}=N^{L}\), i.e., larger than the full systems Heisenberg time \(t_{\mathrm{H}}\). This extension is an approximation, which is in excellent agreement with numerical data as depicted in Fig 1 with possible deviations occurring around Heisenberg time and for small coupling. 
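The prediction of Eq. (4), extended by the plateaus of the CUE form factors as described above, is straightforward to evaluate numerically and can be compared against a Monte Carlo average of \(|\mathrm{tr}\,\mathcal{U}^{t}|^{2}\) over realizations such as those sampled in the previous sketch. A minimal evaluation of the prediction itself (for phases uniform in \([-\pi,\pi]\), whose characteristic function is \(\chi(\epsilon)=\sin(\pi\epsilon)/(\pi\epsilon)\)) reads:

```python
import numpy as np

def K_cue(t, M):
    """CUE(M) spectral form factor, K_M(t) = min(t, M)."""
    return np.minimum(t, M)

def K_rmte(t, N, L, eps):
    """Eq. (4), extended by the plateaus of the CUE form factors as described
    in the text, for phases uniformly distributed in [-pi, pi]."""
    chi = np.sinc(eps)                  # numpy's sinc(x) = sin(pi x)/(pi x)
    w = np.abs(chi) ** (2 * t)
    return w * K_cue(t, N) ** L + (1 - w) * K_cue(t, N ** L)

N, L, eps = 32, 2, 0.05
t = np.arange(1, 4 * N ** L)
K = K_rmte(t, N, L, eps)
print(K[:3], K[-1])                     # ~t^L growth at early times, plateau at N^L
```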
We emphasize that requiring large \(N\) limits numerical studies to few-body systems, i.e., small \(L\), while our arguments do not depend on \(L\) being small. We therefore expect our results to hold also in the many-body setting. However, before discussing the qualitative features of the SFF in the RMTE in more detail, we first point out its universal dependence on a single scaling parameter. _Universality._--To compare the SFF for different systems it is appropriate to measure both \(K(t)\) and time \(t\) in units of \(t_{\mathrm{H}}\) and to introduce the rescaled SFF \(\kappa(\tau)\) and the rescaled time \(\tau\) via \[\kappa(\tau)=K(t)/N^{L}\quad\text{and}\;\,\tau=t/N^{L}. \tag{5}\] This results in a rescaled Heisenberg time \(\tau_{\mathrm{H}}=1\) of the full system and \(\tau_{\mathrm{SH}}=N^{-L+1}\) of the subsystems. Apart from the latter, the only \(N\) dependence is implicitly contained in \(|\chi(\epsilon)|^{2t}\) via \(t=N^{L}\tau\). By applying the central limit theorem to the characteristic function the \(N\) dependence together with the dependence on the effective coupling strength \(\sigma\epsilon\) can be converted into the dependence on a single scaling parameter \(\Gamma\) via \[|\chi(\epsilon)|^{2t}=\exp\left(-\Gamma^{2}\tau\right)\quad\text{with}\;\;\Gamma=\sigma\epsilon N^{L/2}. \tag{6}\] Here we use the characteristic function \(\exp\left(-x^{2}/2\right)\) of the standard normal distribution. Consequently, the SFF becomes independent of the concrete choice of the distribution of the phases \(\xi_{i_{1}\cdots i_{L}}\) entering \(\mathcal{U}_{\mathrm{c}}\). Moreover, it depends only on \(\Gamma\) for times \(\tau>\tau_{\text{SH}}\). This universal dependence on a single scaling parameter constitutes our second main result. It is well confirmed in Fig. 1, where we depict the SFF for different combinations of \(N\), \(L\), and \(\epsilon\) all leading to the same \(\Gamma\) and coinciding SFF for \(\tau>\tau_{\text{SH}}\). [Figure 1: SFF \(\kappa(\tau)\) for the extended RMTE for different \(N\), \(L\) and \(\Gamma\). Black lines correspond to Eq. (4). Dashed gray lines correspond to \(\tau_{\mathrm{SH}}\) and \(\tau_{\mathrm{H}}\).] In the non-interacting case, \(\Gamma=0\) and hence \(\exp\left(-\Gamma^{2}\tau\right)=1\), the SFF initially grows as \(\kappa(\tau)=\tau^{L}\) up to \(\tau=\tau_{\text{SH}}\) and subsequently is constant, \(\kappa(\tau)=1\) (not shown). For small \(\Gamma\) we still observe an initial growth of the SFF as \(\kappa(\tau)\sim\tau^{L}\), but after times larger than \(\tau_{\text{SH}}\) the SFF drops down to the linear ramp \(\kappa(\tau)\sim\tau\) because all other terms are exponentially suppressed as \(\exp\left(-\Gamma^{2}\tau\right)\). This indicates the Thouless time \(\tau_{\text{Th}}\) as the smallest time for which \(\kappa(\tau)\sim\tau\). For intermediate \(\Gamma\) one has \(\tau_{\text{SH}}<\tau_{\text{Th}}<1\) and we obtain [47] \[t_{\text{Th}}=N^{L}\tau_{\text{Th}}=\frac{L\ln(N)}{2|\ln|\chi(\epsilon)||}, \tag{7}\] which scales linearly with the number of subsystems. This is in contrast with, e.g., logarithmic scaling [26, 28, 20] for local interactions or even \(t_{\text{Th}}=0\) in local dual-unitary quantum circuits [21, 22]. For large \(\Gamma\) the linear ramp is approached earlier than \(\tau_{\text{SH}}\), as shown for \(\Gamma=27.21\) for \(N=80\) and \(L=2\). 
Ultimately for very large \(\Gamma\) all terms involving the characteristic function are almost immediately suppressed and the SFF reduces to the \(\text{CUE}\big{(}N^{L}\big{)}\) result (not shown). _Higher moments_.--As the SFF is defined via an average over the RMTE one might study its distribution via its moments of order \(m\) defined by \[K_{m}(t)=\left\langle\left|\text{tr}\left(\mathcal{U}^{t}\right)\right|^{2m} \right\rangle-N^{2Lm}\delta_{t}^{0}. \tag{8}\] For the \(\text{CUE}(M)\) the SFF follows an exponential distribution, i.e., \(K_{M,m}(t)=m!K_{M}(t)^{m}\)[48]. To compute the moments in the extended RMTE for \(t<t_{\text{SH}}\) we follow Ref. [29] to perform the average over the independent \(\text{CUE}(N)\). The remaining average over the phases \(\xi_{i_{1}\cdots i_{L}}\) yields [47] \[K_{m}(t)=m!t^{m}\sum_{k=0}^{m}A_{k}(t)|\chi(\epsilon)|^{2t(m-k)} \tag{9}\] for initial times \(t<t_{\text{SH}}\). Here the combinatorical factors \(A_{k}(t)\) are polynomials of degree \(m(L-1)\) in \(t\) which can be obtained exactly only for the bipartite case \(L=2\). Computing the latter for \(L=2\) and fixed \(m\) allows for expressing the SFF as a time dependent convex combination between the full random matrix result \(K_{N^{2},m}(t)\) and the non-interacting result \(\left[K_{N,m}(t)\right]^{2}\) as well as additional terms involving products of lower moments. For instance for the second moment, \(m=2\), we find [47] \[K_{2}(t)= K_{N^{2},2}(t)\left(1-|\chi(\epsilon)|\right)^{2}+\left[K_{N,2}(t )\right]^{2}|\chi(\epsilon)|^{4t}\] \[+4K_{N^{2}}(t)K_{N}(t)^{2}|\chi(\epsilon)|^{2t}\left(1-|\chi( \epsilon)|^{2t}\right) \tag{10}\] and similar for \(m>2\). By explicitly including the plateaus for the moments of the CUE spectral form factors the above results again extends also to times \(t>N\). Moreover, it reproduces the correct result for the non-interacting case \(\epsilon=0\) for all \(m\) and for the interacting case implies \(K(t)\sim K_{N^{2},m}(t)\), i.e., an exponential distribution, for \(t>t_{\text{Th}}\) as all the terms involving \(|\chi(\epsilon)|^{2t}\) have decayed. Given this exponential distribution we define the rescaled moments via \[\kappa_{m}(\tau)=\frac{1}{N^{L}}\left(\frac{K_{m}(t)}{m!}\right)^{1/m}. \tag{11}\] Repeating the argument invoking the central limit theorem, we again find that the rescaled moments of the SFF depend only on \(\Gamma\) for times \(\tau>\tau_{\text{SH}}\). Both Eq. (10) and its variants for \(m>2\), see [47], as well as the universal dependence on \(\Gamma\) for fixed \(L=2\) is confirmed in Fig. 2 for the second and third moment. Due to the rescaling (11) higher moments exhibit the same phenomenology as the SFF \(\kappa(\tau)\). They depend on both \(L\) and \(\Gamma\) even for times \(\tau_{\rm SH}<\tau\lesssim\tau_{\rm Th}\) while coinciding with the \(\rm CUE(N^{L})\) result afterwards. _Coupled kicked rotors._--To demonstrate, that the RMTE describes actual physical systems, we apply our results to a quantized dynamical system given by two coupled kicked rotors [49]. While individual kicked rotors [50] are a paradigmatic model for both classical and single particle quantum chaos, coupling two rotors provides an example for the corresponding two-body setting [38; 39; 40; 51; 52; 53; 54]. We consider coupled kicked rotors with periodic boundary conditions, whose classical phase space is the four torus with canonical conjugate coordinates \((q_{1},q_{2},p_{1},p_{2})\). 
After quantization the effective Planck's constant \(h\) is constrained to integer values \(1/h=N\). The time evolution operator is a \(N^{2}\)-dimensional unitary of the form (1) with [55; 56; 57; 58; 59] \[\mathcal{U}_{i}=\mathrm{e}^{-\pi\mathrm{i}Np_{i}^{2}}\mathrm{e}^{-\frac{\mathrm{i}k_{i}N}{2\pi}\cos(2\pi q_{i})}. \tag{12}\] Here \(k_{1}=9.7\) and \(k_{2}=10.5\) govern the strength of the kicks and ensure chaotic classical dynamics. The coupling is introduced by \[\mathcal{U}_{\rm c}=\mathrm{e}^{-\frac{\mathrm{i}\gamma N}{2\pi}\cos(2\pi[q_{1}+q_{2}])}, \tag{13}\] with coupling strength \(\gamma\) and effective \(\epsilon=\gamma N/(2\pi)\). We choose boundary conditions for the quantum states which break time-reversal invariance and average over such boundary conditions in order to perform the average in the definition of the SFF and its moments. The resulting SFF and its second moment are depicted in Fig. 3 and show qualitatively similar behavior as in the RMTE. However, initial fluctuations are more pronounced, which we attribute to short periodic orbits in the classical dynamics. In order to model the coupled kicked rotors with the bipartite RMTE we choose \(\xi_{ij}=\cos(\eta_{ij})\) with i.i.d. and uniformly distributed \(\eta_{ij}\). This yields \(\chi(\epsilon)=J_{0}(\gamma N/(2\pi))\). The corresponding RMTE result is in good agreement with numerical data and again implies universal dependence on \(\Gamma\) for \(\tau>\tau_{\rm SH}\); see Fig. 3. [Figure 3: SFF (\(m=1\), left) and its second moment (\(m=2\), right) \(\kappa_{m}(\tau)\) for the coupled kicked rotors for different \(N\) and \(\Gamma\). Black lines depict the RMTE results. Dashed gray lines correspond to \(\tau=\tau_{\rm SH}\) and \(\tau=\tau_{\rm H}\).] For scaling parameters \(\Gamma\) for which the Thouless time is given by Eq. (7) we note that \(t_{\rm Th}\) does not coincide with the Ehrenfest time \(t_{\rm E}\). The latter is the time it takes for an initially localized wave packet to spread over the system and hence indicates the time for which quantum follows classical dynamics. It is determined by the classical system's Lyapunov exponents and for the coupled kicked rotors approximately reads \(t_{\rm E}\approx\ln(N)/(2\ln(k_{1}k_{2}/4))\) [50]. For chaotic subsystems \(t_{\rm E}\) is necessarily smaller than the subsystem's Heisenberg time \(t_{\rm SH}=N\) and is also much smaller than \(t_{\rm Th}\) even though both times scale logarithmically with \(N\). _Perturbative regime._--For very small scaling parameter, extrapolating the exact result from \(t<N\) to larger times gives a less accurate description of the SFF. This is visible already for \(\Gamma=2.27\) in Fig. 1 around Heisenberg time \(\tau\approx\tau_{\rm H}\). A natural approach for \(\Gamma\ll 1\) is to extend the regularized Rayleigh-Schrödinger perturbation theory introduced in Ref. [38] from the bipartite to the extended case of arbitrary \(L\). Viewing \(\mathcal{U}_{\rm c}(\epsilon)\) as a perturbation to the non-interacting system, the eigenphases \(\phi_{i}=\phi_{i}(\epsilon)\) can be expanded in a perturbative series in \(\epsilon\), which allows for computing \(K(t)=\langle\sum_{i,j}\exp\left(\mathrm{i}t(\phi_{i}-\phi_{j})\right)\rangle\). While Eq. (9) still holds for \(\tau<\tau_{\rm SH}\), the perturbative approach yields [47] \[\kappa(\tau)=1-\Gamma^{2}\tau\mathrm{e}^{-(\Gamma\tau)^{2}} \tag{14}\] for \(\tau>\tau_{\rm SH}\) up to arbitrarily large times. Again, this universally depends on the scaling parameter \(\Gamma\) only. The validity of the perturbative approach for very small \(\Gamma\) is depicted in Fig. 4. 
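Returning to the coupled kicked rotors, a minimal sketch of the quantized Floquet operator of Eqs. (12) and (13) in the discrete position basis is given below. The discrete-Fourier-transform representation of the kinetic term is a standard construction; the time-reversal-breaking boundary phases used in the text are omitted here for brevity, so the sketch is illustrative rather than a faithful reproduction of the numerics.

```python
import numpy as np

def kicked_rotor(N, k):
    """Single-rotor Floquet operator of Eq. (12) in the position basis,
    with q_n = n/N and p_m = m/N on the unit torus (effective h = 1/N).
    Boundary-condition phases are omitted for simplicity."""
    q = np.arange(N) / N
    p = np.arange(N) / N
    F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    kinetic = F.conj().T @ np.diag(np.exp(-1j * np.pi * N * p ** 2)) @ F
    kick = np.diag(np.exp(-1j * k * N / (2 * np.pi) * np.cos(2 * np.pi * q)))
    return kinetic @ kick

def coupled_kicked_rotors(N, k1, k2, gamma):
    """Two-rotor Floquet operator with the coupling of Eq. (13)."""
    q = np.arange(N) / N
    phase = np.cos(2 * np.pi * (q[:, None] + q[None, :])).ravel()
    coupling = np.exp(-1j * gamma * N / (2 * np.pi) * phase)
    return coupling[:, None] * np.kron(kicked_rotor(N, k1), kicked_rotor(N, k2))

U = coupled_kicked_rotors(N=16, k1=9.7, k2=10.5, gamma=0.01)
phases = np.angle(np.linalg.eigvals(U))               # quasi-energies phi_i
print(np.allclose(U @ U.conj().T, np.eye(16 ** 2)), phases.shape)
```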
_Summary and Outlook._--We have given a simple description of the SFF (and its moments for the bipartite case) for interacting chaotic subsystems as a convex combination of the results for the non-interacting and the strongly interacting case. We confirm this numerically for few-body systems and expect it to hold also for many-body systems at large \(N\). Interestingly, relatively small subsystem sizes, \(N\approx 10\), seem to be large enough for our description to apply. Our description additionally implies the universal dependence of the SFF on a single scaling parameter \(\Gamma\) and is insensitive to the detailed statistics of the phases \(\xi_{i_{1}\cdots i_{L}}\). However, using i.i.d. phases we ignore all correlations in the phases as they would be present, for instance, due to the spatial locality of typical many-body systems. It therefore is an interesting open question whether such a simple picture also applies in these situations. Moreover, our results for the RMTE are exact only for small times \(t<N\), whereas a derivation for larger times might be possible using field theoretical methods [60, 61]. For systems originating from the quantization of classically chaotic systems, e.g., the coupled kicked rotors, semiclassical periodic-orbit based techniques might shed further light on spectral correlations. The latter approaches, however, are left for future research. _Acknowledgements_.-- We thank A. Bäcker for insightful discussions. FF further acknowledges fruitful discussions with P. Kos, F. G. Montoya and T. Prosen. The work has been supported by Deutsche Forschungsgemeinschaft (DFG), Project No. 453812159 (FF) and Project No. 497038782 (MK).
2306.04496
Electromagnetic high-frequency gravitational wave detection
Ultra-high frequency gravitational waves in the MHz to THz regime promise a unique possibility to probe the very early universe, particle physics at very high energies and exotic astrophysical objects - but achieving the sensitivity required for detection is an immense challenge. This is a brief summary of recent progress in electromagnetic high-frequency gravitational wave searches, which are based on classical electromagnetism in a space-time perturbed by gravitational waves. A particular focus is given to synergies with axion searches and atomic precision measurements. This article was prepared as proceedings for Moriond EW 2023.
Valerie Domcke
2023-06-07T15:05:15Z
http://arxiv.org/abs/2306.04496v1
###### Abstract ###### Abstract Ultra-high frequency gravitational waves in the MHz to THz regime promise a unique possibility to probe the very early universe, particle physics at very high energies and exotic astrophysical objects - but achieving the sensitivity required for detection is an immense challenge. This is a brief summary of recent progress in electromagnetic high-frequency gravitational wave searches, which are based on classical electromagnetism in a space-time perturbed by gravitational waves. A particular focus is given to synergies with axion searches and atomic precision measurements. This article was prepared as proceedings for Moriond EW 2023. CERN-TH-2023-099 **Electromagnetic high-frequency gravitational wave detection** Valerie Domcke _CERN, Department of Theoretical Physics, 1211 Geneva, Switzerland_ ## 1 Introduction Due to their extremely weak coupling to matter, gravitational waves (GWs) can traverse the universe nearly unperturbed, providing a unique window to probe the epoch before the decoupling of the cosmic microwave background (CMB), when the universe was opaque to electromagnetic (EM) waves. Taking into account cosmic expansion, earlier times and hence higher energies correspond to smaller characteristic physical scales, and cosmological processes at earlier times thus source GWs at higher frequencies, reaching around 100 GHz for thermal processes around the scale of grand unification. At these energy scales, a plethora of different possible extensions of the Standard Model of particle physics are viable and well motivated, some of which entail processes which yield a significant amount of relic GWs. While probing a stochastic background or GWs from the early universe is a driving motivation for searching for these elusive signals, a more achievable target are exotic astrophysical objects such as mergers of light primordial black holes [1] or GWs emitted from superradiance of axion clouds around primordial black holes [2]. These can lead to locally relatively high energy densities in GWs, facilitating detection if such an object happens to be sufficiently close to the observer. Motivated by this and for simplicity, we will here focus on a toy model comprising of a coherent GW, i.e. a monochromatic plane wave. Identifying the most promising detection strategy for GWs above a kHz remains very much an open challenge, with several concepts summarized in the ultra-high frequency GW living review [3]. Here, we will focus on recent progress on EM GW detectors, many of which have been inspired or directly make use of axion searches. ## 2 Axion inspired searches Starting from electromagnetism with a general metric \[S=S_{G}+S_{EM}=\int d^{4}x\sqrt{-g}\left(\frac{1}{2}M_{P}^{2}R-\frac{1}{4}F_{\mu \nu}g^{\alpha\mu}F_{\alpha\beta}g^{\beta\nu}\right)\,, \tag{2.1}\] with the field strength tensor \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\), we expand around a flat background perturbed by a gravitational wave, \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\) with \(|h_{\mu\nu}|\ll 1\), to obtain the equations of motion of electromagnetism in linearized gravity. Expanding the first term gives [4] \[S_{G}=-\frac{1}{8}\int d^{4}x\left(\partial_{\mu}h_{\alpha\beta}\partial^{\mu} h^{\alpha\beta}-(\partial_{\mu}h)(\partial^{\mu}h)+2\partial_{\mu}h^{\mu\nu} \partial_{\nu}h-2\partial_{\mu}h^{\mu\nu}\partial_{\rho}h^{\rho}_{\nu}\right)+ \mathcal{O}(h^{3})\,, \tag{2.2}\] which describes the propagation of GWs with \(h\equiv h_{\mu}^{\mu}\). 
Here and in the following, we have made powers of \(h\) explicit, so all indices are understood to be lowered and raised using the flat metric. The second term yields [5, 6] \[S_{EM}=\int d^{4}x\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}j_{\rm eff}^{\mu}A_{\mu}\right)+\mathcal{O}(h^{2})\,, \tag{2.3}\] where the effective current \(j_{\rm eff}^{\mu}\) can be expressed through polarization and magnetization vectors [7], \[j_{\rm eff}^{\mu}=(-\nabla\cdot\mathbf{P},\nabla\times\mathbf{M}+\partial_{t}\mathbf{P})\,, \tag{2.4}\] with \[P_{i}=-h_{ij}E_{j}+\frac{1}{2}hE_{i}+h_{00}E_{i}-\epsilon_{ijk}h_{0j}B_{k}\,, \quad M_{i}=-h_{ij}B_{j}-\frac{1}{2}hB_{i}+h_{jj}B_{i}+\epsilon_{ijk}h_{0j}E_{k}\,. \tag{2.5}\] This effective current describes the generation of an induced EM field in the presence of a gravitational wave and a background EM field, and thus formally resembles the effective flux generated by axions in the presence of external EM fields. **LC circuit resonators.** Existing and planned low-mass axion haloscopes such as ABRACADABRA [8, 9, 10], ADMX SLIC [11], BASE [12], DMRadio [13, 14, 15], SHAFT [16] and WISPLC [17] use a resonant LC circuit to search for tiny oscillating induced magnetic fields generated by a dark matter axion background in the presence of a strong external static magnetic field \(B_{0}\). From Eqs. (2.3) - (2.5) we see that the same setup is in principle also suited for detecting GWs. The resulting magnetic flux induced by a coherent GW can be computed using standard methods of electromagnetism in dielectric media and parametrically scales as [7, 18] \[\Phi_{h}\sim e^{-i\omega t}B_{0}\,h(\omega L)^{n}L^{2}\,, \tag{2.6}\] where \(\omega\) denotes the GW frequency, \(L\) the characteristic size of the detector and the integer \(n\geq 2\) depends on the detector geometry. Since all these experiments operate in the quasi-static limit, \(\omega L\ll 1\), smaller values of \(n\) imply a better GW sensitivity. However, the cylindrical symmetry often used in axion experiments to maximize the sensitivity to the axion cancels the leading order contribution to the GW induced flux [18]. This can, luckily, be relatively easily remedied by measuring the induced magnetic field with a pick-up loop configuration that breaks the cylindrical symmetry. Bounds on the GW strain from existing and planned axion haloscopes can be obtained by bootstrapping the bounds and predicted sensitivities for the axion searches. The magnetic flux induced by an axion \(a\) of mass \(m_{a}\) constituting all of dark matter \(\rho_{\rm DM}\) is given by \[\Phi_{a}\sim e^{-im_{a}t}B_{0}\,g_{a\gamma\gamma}\,\rho_{\rm DM}^{1/2}L^{3}\,. \tag{2.7}\] Taking into account the differences in duration and coherence, comparing Eqs. (2.6) and (2.7) allows a recasting of bounds on \(g_{a\gamma\gamma}\) in terms of GW strain \(h\) [18]. Fig. 1 shows (in purple) the recast limit from ADMX SLIC as well as the expected sensitivity (dashed) of WISPLC and the DMRadio program (\(m^{3}\) and GUT) to a coherent GW [18]. **Microwave cavities.** In axion experiments based on microwave cavities, such as ADMX [19], CAPP [20], HAYSTAC [21] or ORGAN [22], the induced EM field is resonantly enhanced in a cavity with high quality factor \(Q_{\rm em}\). 
The coupling of a GW to an EM resonance mode yields an induced EM field \(E_{h}^{\rm(em)}\sim\eta_{n}\,Q_{\rm em}\,(\omega L)\,h\,B_{0}\,e^{i\omega t}\), where \(B_{0}\) indicates the static externally applied EM field and \(\eta_{n}\) the coupling coefficient between the effective source term (2.4) and the \(n\)-th cavity mode. Comparing the power spectral density of this signal with the noise power spectral density of the respective instrument gives an estimate of the achievable GW strain sensitivity [6]. Alternatively, the static external field can be replaced by loading the cavity with a pump mode, as in the MAGO prototype which was designed for GW searches [23, 24]. In this case also the mechanical coupling between the GW and the cavity walls is relevant and the induced EM fields are obtained as [25] \[E_{h}^{\rm(em)}\sim Q_{\rm em}\,(\omega L)^{2}\,h\,E_{0}\,e^{i(\omega_{0}\pm\omega)t}\quad\text{and}\quad E_{h}^{\rm(mech)}\sim Q_{\rm em}\,\text{min}[1,(\omega L)^{2}/c_{s}^{2}]\,h\,E_{0}\,e^{i(\omega_{0}\pm\omega)t}\,, \tag{2.8}\] respectively, where \(E_{0}\) and \(\omega_{0}\) indicate the amplitude and frequency of the externally applied EM field and \(c_{s}\) is the speed of sound of the cavity material. In blue, Fig. 1 shows the projected sensitivity of the existing axion experiments ADMX, CAPP, HAYSTAC and ORGAN as well as an estimate of the achievable sensitivity of MAGO 2.0 including the mechanical cavity resonances and assuming that the vibrational noise has been attenuated down to the irreducible thermal component (dashed). A significantly increased sensitivity has been suggested based on reading out the power of the interference term proportional to \(E_{0}E_{h}\) instead of \(E_{h}^{2}\) [5]; it remains however disputed how this could be concretely achieved [6]. **Photon regeneration experiments.** Photon regeneration experiments such as light-shining-through-the-wall experiments and axion helioscopes rely on the interconversion of axions and EM waves in the presence of an external transverse magnetic field. Similarly, EM waves and GWs undergo oscillations according to the (inverse) Gertsenshtein effect, as can be seen by deriving the equations of motion from Eqs. (2.2) and (2.3). In particular, the second term in Eq. (2.3) acts as a source term for the EM wave (GW) which is proportional to the GW (EM wave) amplitude. Working for simplicity in the transverse traceless (TT) gauge [26], \[\left(\partial_{t}^{2}-\partial_{\ell}^{2}+m_{\gamma}^{2}\right)A_{\lambda}=-B_{0}\partial_{\ell}h_{\lambda}^{TT}\,,\quad(\partial_{t}^{2}-\partial_{\ell}^{2})h_{\lambda}^{TT}=\kappa^{2}B_{0}\partial_{\ell}A_{\lambda}\,, \tag{2.9}\] with \(\ell\) indicating the propagation direction, \(\kappa^{2}=16\pi G_{N}\), \(\lambda=\{+,\times\}\) and \(m_{\gamma}\) is the effective photon mass. From this, the conversion probability of GWs to photons after a propagation distance \(\Delta\ell\) in the limit \(m_{\gamma}\ll\omega\) is obtained as \[P(\Delta\ell)\simeq\frac{1}{4}\kappa^{2}\,B_{0}^{2}\,\ell_{\rm osc}^{2}\,\sin^{2}\left(\frac{\Delta\ell}{\ell_{\rm osc}}\right)\simeq\frac{1}{4}\kappa^{2}\,B_{0}^{2}\,\Delta\ell^{2}\,, \tag{2.10}\] with the oscillation length \(\ell_{\rm osc}=2/\sqrt{m_{\gamma}^{4}/(4\omega^{2})+\kappa^{2}B_{0}^{2}}\). 
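As a rough numerical illustration of Eq. (2.10), the short Python sketch below (NumPy assumed) evaluates the conversion probability in natural units; the unit-conversion constants are standard values quoted here for convenience, and the magnet parameters are illustrative stand-ins rather than the specifications of any particular experiment.

```python
import numpy as np

# Conversions to natural (GeV-based) units -- standard values, rounded
GEV2_PER_TESLA = 1.95e-16          # 1 T   ~ 1.95e-16 GeV^2
GEVINV_PER_M   = 5.07e15           # 1 m   ~ 5.07e15 GeV^-1
G_N            = 6.71e-39          # Newton's constant in GeV^-2
KAPPA2         = 16.0 * np.pi * G_N    # kappa^2 as defined below Eq. (2.9)

def conversion_probability(B0_tesla, delta_l_m, m_gamma_GeV=0.0, omega_GeV=1e-6):
    """Graviton-photon conversion probability of Eq. (2.10)."""
    B0 = B0_tesla * GEV2_PER_TESLA
    dl = delta_l_m * GEVINV_PER_M
    l_osc = 2.0 / np.sqrt(m_gamma_GeV**4 / (4.0 * omega_GeV**2) + KAPPA2 * B0**2)
    return 0.25 * KAPPA2 * B0**2 * l_osc**2 * np.sin(dl / l_osc)**2

# Illustrative numbers only: a ~9 T field over ~14 m gives P ~ 10^-33,
# which is why enlarging the conversion region is so attractive.
print(conversion_probability(9.0, 14.0))
```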
Based on this, limits and projected sensitivities of ALPS [27, 28], CAST [29], IAXO [30], JURA [31] and OSQAR [32, 33] have been used to set limits on the energy density of a high-frequency stochastic gravitational wave background [34, 35], \(\rho_{\rm SGWB}=4\pi^{2}\int d\ln f\,f^{2}h_{c}^{2}\). Comparing this to the energy of a coherent GW, \(\rho\sim 2h^{2}\omega^{2}/\kappa^{2}\), we see that for a broadband detector, \(\Delta f\sim f\), the bounds on \(h_{c}\) can equivalently be taken as bounds on the amplitude \(h\) of a coherent signal. Existing axion experiments target optical or x-ray frequencies; however, proposals have been made to lower this to the GHz range [35], depicted as dashed orange lines in Fig. 1. ## 3 Frequency modulation An alternative way to use EM waves as a probe of GWs is the impact of GWs on the propagation of the EM wave as well as on the position of the EM emitter and absorber. Contrary to the methods described in the previous section, this calculation is linear in the EM field, i.e. no background EM field appears. The most commonly known effect is the one exploited in interferometers such as LIGO/VIRGO/KAGRA and LISA, as well as in the Holometer [36] targeting the high-frequency regime. The GW changes the proper distance in the interferometer arms, resulting in a measurable phase shift (at constant photon frequency). Here, we focus on a different effect, which is the modulation of the photon frequency, \(\omega_{\gamma}=-g_{\mu\nu}p^{\mu}u^{\nu}\), with \(p^{\mu}\) the photon momentum and \(u^{\nu}\) the four-velocity of the observer. All three quantities on the right hand side can obtain corrections due to a passing GW, with the change in \(p^{\mu}\) described by the geodesic equation and the change in \(u^{\nu}\) depending on the boundary conditions of the sender (\(S\)) and detector (\(D\)), i.e. if these are freely falling or rigidly mounted.

Figure 1: Sensitivities of high-frequency EM GW detectors to a coherent GW with amplitude \(h\) for existing experiments (solid) and proposed experiments (dashed): microwave cavities [6, 25], LC circuits [18], frequency modulation [39], cosmological detectors [41] and photon regeneration experiments [35]. For reference, the dotted black line shows the cosmological bound on the radiation energy density for an isotropic and stationary GW background.

For example, for sender and detector in free fall the relative frequency shift is [37, 38, 39] \[\frac{\omega_{\gamma}^{D}-\omega_{\gamma}^{S}}{\omega_{\gamma}^{D}}=h_{+}\cos^{2}(\theta/2)\left\{\cos\varphi^{S}(t)-\cos\left[\omega L(1-\cos\theta)+\varphi^{S}(t)\right]\right\}, \tag{3.1}\] where \(\theta\) is the angle between the photon and GW propagation direction and \(\varphi^{S}(t)\sim\cos(\omega t)\) is the GW phase at the location of the sender at time \(t\). The GW thus leads to a frequency modulation with amplitude \(h\) of the photon frequency as measured by the detector. Atomic clock techniques allow extremely precise optical frequency measurements with remarkable progress over the last years, currently reaching resolutions of the order \(\Delta f/f\sim 10^{-18}\) [40]. However, detecting a MHz - GHz frequency modulation is much more challenging. Liberally using Heisenberg's uncertainty relation, precise frequency measurements require long integration times, \(\Delta t\sim\Delta f^{-1}\), which will average away the GW signal since \(\Delta t\gg f^{-1}\). A possible solution is an 'optical rectifier', i.e. 
a shutter which only allows photons to pass for a fraction of the GW period [39]. An estimate for the achievable sensitivity is shown as a dashed gray line in Fig. 1, for a \(1\) m instrument, one day of measurement time, using a mW optical laser and assuming a frequency resolution to static frequency shifts of \(\Delta f/f\sim 10^{-21}\). ## 4 Cosmological detectors Cosmological high-frequency GW detectors combat the tiny conversion probability of GWs to photons (see Eq. (2.10)) by enhancing the 'detector volume' to astrophysical or cosmological scales, at the price of reduced knowledge of the environmental conditions compared to laboratory experiments. For example, exploiting the inverse Gertsenshtein effect in large scale cosmic magnetic fields in the dark ages, gravitational waves in the GHz range can be converted into photons detectable by radio telescopes [41], such as EDGES [42] and ARCADE 2 [43]. The resulting bound reflects our poor knowledge of the strength of these cosmic magnetic fields: the green solid line in Fig. 1 is based on the lower limit on the magnetic fields from blazar observations, while the dashed line assumes the upper limit based on CMB observations. Other astrophysical environments which have recently been considered in a similar spirit are the depletion of CMB radiation into GWs [44], as well as GW to photon conversion in neutron stars and the magnetosphere of planets [45, 46]. ## 5 Discussion Fig. 1 summarizes sensitivities (obtained by recasting or achievable by re-analyzing existing experiments) as well as future prospects. Given the range of different experiments, different search strategies and possible GW signals, comparing apples with apples is a challenge and requires knowledge of the detector bandwidth, scanning strategy (when applicable), data analysis procedure, duration and coherence of the GW signal. For simplicity, we have chosen here to quote sensitivities to coherent signals since, for many of the proposed experiments, this is the first toy model to consider. However, we note that sourcing a GW signal which stays in-band for hours or days at these high frequencies is extremely non-trivial - though not impossible as the example of superradiance demonstrates. More realistic approaches include the introduction of a coherence ratio factor taking into account the limited coherence and duration of the GW [18], presenting results in terms of the noise power spectral density [25] and using simplified benchmark signals [18, 39]. Moving forward, it will be crucial to address this question in a coherent manner. Another pertinent question is the coordinate system, or gauge, employed. While of course physical observables are invariant under the choice of gauge, once the gauge is fixed it needs to be applied throughout the computation, describing both the GW and the experimental setup. A rigid experimental setup is more easily described in the proper detector frame, where the coordinates are fixed by an idealized rigid ruler. A free-falling experimental setup (as e.g. the mirrors in LIGO but also any system where the GW frequency lies above the lowest resonance mode of the system) is, on the other hand, more easily described in the TT frame. The list of proposals presented here is of course incomplete. We have focused on proposals deriving directly from the interaction of electromagnetism and gravity. 
It should however be noted that other mechanical proposals such as bulk acoustic wave devices [47] and levitated sensors [48] have already advanced to dedicated science runs for GW searches in the MHz regime. Moreover, including charged particles and considering the impact of a passing GW on the Dirac equation has led to the proposal of magnon detectors [49, 50]. A more controversial suggestion is the Gaussian beam proposal [51], which, if open questions around the current noise estimates can be addressed, promises a very competitive sensitivity. In summary, Fig. 1 demonstrates the increase in interest, activity and achievable sensitivity in recent years. It also shows that a lot of the most competitive proposals are small scale experiments (by the nature of the targeted frequency range), using synergies with other precision measurements or particle physics searches. Expected advances in this domain may thus provide chances for spin-offs leading to significant improvements over current proposals. However, the path to detecting, or setting bounds on, realistic GW signals is still far and very challenging. At the moment, even the most optimistic proposals are a few orders of magnitude away from possible astrophysical coherent signals. Reaching the sensitivity needed to probe cosmological stochastic backgrounds is even more challenging, however rewarded by a very strong theory motivation: not only are there a range of models which predict such signals but high-frequency GWs are moreover the only conceivable way of probing many of these ideas. These proceedings are largely based on Refs. [7, 18, 39] and [41], and it is a pleasure to thank my fantastic collaborators T. Bringmann, C. Garcia-Cely, E. Fuchs, J. Kopp, S.M. Lee and N. Rodd. Special thanks also to S. Ellis, S.M. Lee, A. Ringwald and N. Rodd for valuable comments on the draft of these proceedings.
2308.12980
Phenomenologies in hypersphere soliton and stringy photon models
We consider the Dirac quantization in the first class formalism to investigate the hypersphere soliton model (HSM) defined on the $S^{3}$ hypersphere. To do this, we construct the first class Hamiltonian possessing the Weyl ordering correction. In the HSM, we evaluate the baryon physical quantities such as the baryon masses, magnetic moments, axial coupling constant and charge radii, most predicted values of which are in good agreement with the corresponding experimental data. Moreover, shuffling the baryon and transition magnetic moments, we find the model independent sum rules. In the HSM we also evaluate the baryon intrinsic frequencies such as $\omega_{N}=0.87\times 10^{23}~{\rm sec}^{-1}$ and $\omega_{\Delta}=1.74\times 10^{23}~{\rm sec}^{-1}$ of the nucleon and delta baryon, respectively, to yield the identity $\omega_{\Delta}=2\omega_{N}$. Next, making use of the Nambu-Goto string action and its extended rotating bosonic string theory, we formulate the stringy photon model to obtain the energy of the string configuration, which consists of the rotational and vibrational energies of the open string. Exploiting this total string energy we evaluate the photon intrinsic frequency $\omega_{\gamma}=9.00\times 10^{23}~{\rm sec}^{-1}$ which is comparable to the corresponding baryon intrinsic frequencies. We also predict the photon size $\langle r^{2}\rangle^{1/2}(\rm photon)=0.17~{\rm fm}$ which is approximately 21\% of the proton magnetic charge radius.
Soon-Tae Hong
2023-08-23T13:35:55Z
http://arxiv.org/abs/2308.12980v1
# Phenomenologies in hypersphere soliton and stringy photon models ###### Abstract We consider the Dirac quantization in the first class formalism to investigate the hypersphere soliton model (HSM) defined on the \(S^{3}\) hypersphere. To do this, we construct the first class Hamiltonian possessing the Weyl ordering correction. In the HSM, we evaluate the baryon physical quantities such as the baryon masses, magnetic moments, axial coupling constant and charge radii, most predicted values of which are in good agreement with the corresponding experimental data. Moreover, shuffling the baryon and transition magnetic moments, we find the model independent sum rules. In the HSM we also evaluate the baryon intrinsic frequencies such as \(\omega_{N}=0.87\times 10^{23}\) sec\({}^{-1}\) and \(\omega_{\Delta}=1.74\times 10^{23}\) sec\({}^{-1}\) of the nucleon and delta baryon, respectively, to yield the identity \(\omega_{\Delta}=2\omega_{N}\). Next, making use of the Nambu-Goto string action and its extended rotating bosonic string theory, we formulate the stringy photon model to obtain the energy of the string configuration, which consists of the rotational and vibrational energies of the open string. Exploiting this total string energy we evaluate the photon intrinsic frequency \(\omega_{\gamma}=9.00\times 10^{23}\) sec\({}^{-1}\) which is comparable to the corresponding baryon intrinsic frequencies. We also predict the photon size \(\langle r^{2}\rangle^{1/2}(\text{photon})=0.17\) fm which is approximately 21% of the proton magnetic charge radius. hypersphere soliton model; baryon physical quantities; intrinsic frequencies; stringy photon model; photon size pacs: 11.10.Ef; 12.39.Dc; 13.40.Em; 14.20.-c; 14.70.Bh ## I Introduction In this work we will consider the extended objects instead of the point particles. It is well known that, as the extended objects, we have the solitons [1; 2; 3; 4; 5; 6; 7; 8; 9] and strings [10; 11; 12; 13]. In particular, in this review we will mainly use the hypersphere soliton model (HSM) [4; 6; 9] and stringy photon model (SPM) [13]. To be more specific, in the soliton models, we have the standard Skyrmion which describes baryon static properties in \(R^{3}\) manifold [1; 3; 5; 8]. This model was proposed by Skyrme in 1961 [1]. In this paper we will consider the paper by Adkins, Nappi and Witten (ANW) [3; 5], to compare with the HSM. Next we will investigate the HSM which is formulated on the hypersphere \(S^{3}\) instead of \(R^{3}\)[4; 6; 9]. Exploiting the HSM, we will evaluate the baryon physical quantities most of which are in good agreement with the corresponding experimental data. In 1962 the electron was proposed as a charged conducting surface by Dirac [14]. According to his proposal, the electron shape and size should pulsate. Here the surface tension of the electron was supposed to prevent the electron from flying apart under the repulsive forces of the charge. Motivated by his idea, we will investigate pulsating baryons in the first class formalism in the HSM [4; 6], to evaluate the intrinsic frequencies of the baryons, baryon masses with the Weyl ordering correction (WOC) and axial coupling constant [9]. On the other hand, as regards the string theories, we have the critical higher dimensional string theory [10; 11; 12], and the recently proposed SPM defined in the four dimensional spacetime which predicts the photon radius, and the photon intrinsic frequency comparable to the corresponding baryon intrinsic frequencies [13]. 
In the SPM we have exploited the open string which performs both rotational and vibrational motions [13]. Note that the rotational degrees of freedom of the photon have been investigated in the early universe [15; 16]. In this review, we will exploit the HSM in the first class Dirac Hamiltonian formalism, to evaluate the physical quantities such as the baryon masses, magnetic moments, axial coupling constant, charge radii and baryon intrinsic frequencies. Next in the SPM we will predict the photon intrinsic frequency which is shown to be comparable to the baryon intrinsic ones. To do this, we will exploit the Nambu-Goto string theory [17; 18]. In the SPM we will next introduce an open string action associated with the photon [19]. Making use of the rotational and vibrational energies of the string, we will evaluate explicitly the photon intrinsic frequency with which, assuming that the photon size is given by the string radius in the SPM, we will predict the photon size. In Sec. 2, we will predict the baryon properties in the HSM. In Sec. 3, we will evaluate the intrinsic frequencies of the baryons in the HSM. In Sec. 4, we will exploit the SPM to predict the photon intrinsic frequency and photon size. Sec. 5 includes conclusions. ## II Baryon predictions in HSM Now we consider the baryon predictions in the first class Hamiltonian formalism in the HSM. To do this, we introduce the Skyrmion Lagrangian density given by \[{\cal L}=\frac{f_{\pi}^{2}}{4}{\rm tr}(\partial_{\mu}U^{\dagger}\partial^{\mu}U)+\frac{1}{32e^{2}}{\rm tr}[U^{\dagger}\partial_{\mu}U,U^{\dagger}\partial_{\nu}U]^{2} \tag{1}\] where \(U\) is an SU(2) chiral field, and \(f_{\pi}\) and \(e\) are a pion decay constant and a dimensionless Skyrme parameter, respectively. In this work, we will treat \(f_{\pi}\) and \(e\) as the model parameters. Here the quartic term is necessary to stabilize the soliton in the baryon sector. Next we introduce the hyperspherical three metric on \(S^{3}\) of the form \[ds^{2}=\lambda^{2}d\mu^{2}+\lambda^{2}\sin^{2}\mu\ (d\theta^{2}+\sin^{2}\theta\ d\phi^{2}), \tag{2}\] where the ranges of the three angles are defined as \(0\leq\mu\leq\pi\), \(0\leq\theta\leq\pi\) and \(0\leq\phi\leq 2\pi\), and \(\lambda\) (\(0\leq\lambda<\infty\)) is a radius parameter of \(S^{3}\). In the HSM, using the Skyrmion Lagrangian density in (1) we obtain the soliton energy \(E\) of the form \[E=\frac{f_{\pi}}{e}\left[2\pi L\int_{0}^{\pi}d\mu\sin^{2}\mu\left(\left(\frac{df}{d\mu}+\frac{1}{L}\frac{\sin^{2}f}{\sin^{2}\mu}\right)^{2}+2\left(\frac{1}{L}\frac{df}{d\mu}+1\right)^{2}\frac{\sin^{2}f}{\sin^{2}\mu}\right)+6\pi^{2}B \right], \tag{3}\] where \(L=ef_{\pi}\lambda\) (\(0\leq L<\infty\)) is a dimensionless radius parameter and \(B\) is the topological baryon number, which is unity for a single soliton. Here \(f(\mu)\) is a profile function for the hypersphere soliton, and satisfies \(f(0)=\pi\) and \(f(\pi)=0\) for unit topological baryon number. Note that the profile function \(f\) in the soliton energy \(E\) in (3) satisfies the first order differential equations \[\frac{df}{d\mu}+\frac{1}{L}\frac{\sin^{2}f}{\sin^{2}\mu}=0,\quad\frac{1}{L}\frac{df}{d\mu}+1=0, \tag{4}\] to attain the BPS topological lower bound in the soliton energy [2; 4; 5; 6; 7] given by \[E_{B}=\frac{6\pi^{2}f_{\pi}}{e}. 
\tag{5}\] Moreover, in this case we find the equation of motion for the hypersphere soliton [4; 6] \[\left(L\sin^{2}\mu+\frac{2}{L}\sin^{2}f\right)\frac{d^{2}f}{d\mu^{2}}+\left(L \sin 2\mu+\frac{1}{L}\frac{df}{d\mu}\sin 2f\right)\frac{df}{d\mu}-\left(L+ \frac{1}{L}\frac{\sin^{2}f}{\sin^{2}\mu}\right)\sin 2f=0. \tag{6}\] One of the simplest solution of (6) is the identity map \[f(\mu)=\pi-\mu, \tag{7}\] in which case the soliton energy in (3) can be rewritten as [4; 6] \[E=\frac{3\pi^{2}f_{\pi}}{e}\left(L+\frac{1}{L}\right). \tag{8}\] Note that, in order to obtain the BPS topological _lower bound_\(E_{B}\) in (5) by exploiting the soliton energy \(E\) in (8) associated with the identity map \(f(\mu)=\pi-\mu\) in (7), we use the fixed value \(L=L_{B}\) where \[L_{B}\equiv ef_{\pi}\lambda_{B}=1. \tag{9}\] Note also that the identity map in (7) is a minimum energy solution for \(L<\sqrt{2}\), while for \(L>\sqrt{2}\) it is a saddle point [20; 21]. Now we briefly discuss the Dirac quantization of constrained system [1; 3; 4; 5; 6; 7; 8; 9; 22]. In the HSM, we have the second class constraints for the collective coordinates \(a^{\mu}\) (\(\mu=0,1,2,3\)) and the corresponding canonical momenta \(\pi^{\mu}\) conjugate to \(a^{\mu}\) of the form \[\Omega_{1}=a^{\mu}a^{\mu}-1\approx 0,\quad\Omega_{2}=a^{\mu}\pi^{\mu}\approx 0. \tag{10}\] Exploiting the Poisson bracket for \(a^{\mu}\) and \(\pi^{\mu}\), \[\{a^{\mu},\pi^{\nu}\}=\delta^{\mu\nu}, \tag{11}\] we obtain the Poisson algebra for the commutator of \(\Omega_{1}\) and \(\Omega_{2}\)[8; 22] \[\{\Omega_{1},\Omega_{2}\}=2a^{\mu}a^{\mu}\neq 0. \tag{12}\] Since this Poisson algebra does not vanish we call the constraints \(\Omega_{1}\) and \(\Omega_{2}\), the second class. In the HSM, spin and isospin states can be treated by collective coordinates \(a^{\mu}=(a^{0},\vec{a})\) (\(\mu=0,1,2,3\)), corresponding to the spin and isospin collective rotation \(A(t)\in\) SU(2) given by \(A(t)=a^{0}+i\vec{a}\cdot\vec{\tau}\). Exploiting the coordinates \(a^{\mu}\), we obtain the Hamiltonian of the form \[H=E_{B}+\frac{1}{8\mathcal{I}}\pi^{\mu}\pi^{\mu}, \tag{13}\] where \(\pi^{\mu}\) are canonical momenta conjugate to the collective coordinates \(a^{\mu}\). Here the soliton energy lower bound \(E_{B}\) is given by (5) and moment of inertia \(\mathcal{I}\) is given by \[\mathcal{I}=\frac{3\pi^{2}}{e^{3}f_{\pi}}. \tag{14}\] Note that the identity map \(f(\mu)=\pi-\mu\) in (7) with condition \(L=L_{B}\), where \(L_{B}\) is given by (9), is used to predict the physical quantities such as the moment of inertia \(\mathcal{I}\) in (14), baryon masses, charge radii, magnetic moments, axial coupling constant \(g_{A}\) and intrinsic pulsation frequencies \(\omega_{I}\) in the HSM. Note also that the hypersphere coordinates \((\mu,\theta,\phi)\) are integrated out in (3), and \(E_{B}\) in (5) is a function of \(\lambda_{B}=\frac{1}{ef_{\pi}}\) or equivalently \(f_{\pi}\) and \(e\) only. Similarly, after integrating out the hypersphere coordinates \((\mu,\theta,\phi)\), the physical quantities in (14), (16) and (26)-(30) and (44) are formulated in terms of \(f_{\pi}\) and \(e\) only. After performing the canonical quantization in the second class formalism in the HSM, we now construct the Hamiltonian spectrum \[\langle H\rangle=E_{B}+\frac{1}{2\mathcal{I}}I(I+1), \tag{15}\] where \(I\) (\(=1/2,\ 3/2,...\)) are baryon isospin quantum numbers. 
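As an aside, the classical input \(E_{B}\) entering (15) can be checked numerically: inserting the identity map (7) into the energy functional (3) and integrating reproduces the closed form (8), which is minimized at \(L=L_{B}=1\) where it reduces to the BPS bound (5). The sketch below (Python with SciPy assumed, units \(f_{\pi}=e=1\)) is only an illustration of this check, not part of the derivation above.

```python
import numpy as np
from scipy.integrate import quad

def soliton_energy(L, fpi=1.0, e=1.0, B=1):
    """Soliton energy of Eq. (3), evaluated on the identity map f(mu) = pi - mu."""
    def integrand(mu):
        f = np.pi - mu
        dfdmu = -1.0
        ratio = np.sin(f)**2 / np.sin(mu)**2          # equals 1 for the identity map
        term1 = (dfdmu + ratio / L)**2
        term2 = 2.0 * (dfdmu / L + 1.0)**2 * ratio
        return np.sin(mu)**2 * (term1 + term2)
    # small offsets avoid the removable 0/0 of sin^2 f / sin^2 mu at the endpoints
    integral, _ = quad(integrand, 1e-9, np.pi - 1e-9)
    return (fpi / e) * (2.0 * np.pi * L * integral + 6.0 * np.pi**2 * B)

for L in (0.5, 1.0, 2.0):
    closed_form = 3.0 * np.pi**2 * (L + 1.0 / L)      # Eq. (8) with f_pi = e = 1
    print(L, soliton_energy(L), closed_form)
# At L = L_B = 1 both expressions reduce to the BPS bound 6*pi^2*f_pi/e of Eq. (5).
```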
Exploiting (15) we find the nucleon mass \(M_{N}\) for \(I=1/2\) and delta baryon mass \(M_{\Delta}\) for \(I=3/2\), respectively [6; 9] \[M_{N}=ef_{\pi}\left(\frac{6\pi^{2}}{e^{2}}+\frac{e^{2}}{8\pi^{2}}\right),\quad M_{\Delta}=ef_{\pi}\left(\frac{6\pi^{2}}{e^{2}}+\frac{5e^{2}}{8\pi^{2}}\right). \tag{16}\] Next we formulate the first class constraints \(\tilde{\Omega}_{1}\) and \(\tilde{\Omega}_{2}\) by adding the terms related with the Stuckelberg fields \(\theta\) and \(\pi_{\theta}\) \[\tilde{\Omega}_{1}=a^{\mu}a^{\mu}-1+2\theta=0,\quad\tilde{\Omega}_{2}=a^{\mu}\pi^{\mu}-a^{\mu}a^{\mu}\pi_{\theta}=0. \tag{17}\] Here \(\theta\) and \(\pi_{\theta}\) satisfy the following Poisson bracket \[\{\theta,\pi_{\theta}\}=1, \tag{18}\] to produce the first class Poisson algebra for the first class constraints \(\tilde{\Omega}_{1}\) and \(\tilde{\Omega}_{2}\) \[\{\tilde{\Omega}_{1},\tilde{\Omega}_{2}\}=0. \tag{19}\] Now we investigate the operator ordering problem in the first class Hamiltonian formalism. To do this, we construct the first class Hamiltonian [9] \[\tilde{H}=E_{B}+\frac{1}{8\mathcal{I}}(\pi^{\mu}-a^{\mu}\pi_{\theta})(\pi^{\mu}-a^{\mu}\pi_{\theta})\frac{a^{\nu}a^{\nu}}{a^{\nu}a^{\nu}+2\theta}. \tag{20}\] Applying the first class constraints in (17) to (20), we find \[\tilde{H}=E_{B}+\frac{1}{8\mathcal{I}}(a^{\mu}a^{\mu}\pi^{\nu}\pi^{\nu}-\pi^{\mu}a^{\mu}a^{\nu}\pi^{\nu}). \tag{21}\] Next we introduce the Weyl ordering procedure [23] to obtain the Weyl ordered operators \[(a^{\mu}a^{\mu}\pi^{\nu}\pi^{\nu})^{op}_{W} = \frac{1}{4}[a^{\mu}(a^{\mu}\pi^{\nu}+\pi^{\nu}a^{\mu})\pi^{\nu}+\pi^{\nu}(a^{\mu}\pi^{\nu}+\pi^{\nu}a^{\mu})a^{\mu}]=-\frac{1}{4}(4a^{\mu}a^{\mu}\partial_{\nu}^{2}+8a^{\mu}\partial_{\mu}+3\delta_{\mu\mu}),\] \[(\pi^{\mu}a^{\mu}a^{\nu}\pi^{\nu})^{op}_{W} = \frac{1}{4}(\pi^{\mu}a^{\mu}+a^{\mu}\pi^{\mu})(a^{\nu}\pi^{\nu}+\pi^{\nu}a^{\nu})=-\frac{1}{4}(4a^{\mu}a^{\nu}\partial_{\mu}\partial_{\nu}+20a^{\mu}\partial_{\mu}+\delta_{\mu\mu}\delta_{\nu\nu}), \tag{22}\] where we have used the quantum operator \(\pi^{\mu}=-i\frac{\partial}{\partial a^{\mu}}\equiv-i\partial_{\mu}\). Inserting (22) into (21), we arrive at the Weyl ordered first class Hamiltonian operator \[\tilde{H}^{op}_{W}=H^{op}+\frac{1}{32\mathcal{I}}\delta_{\mu\mu}(\delta_{\nu\nu}-3), \tag{23}\] where \(H^{op}\) is the second class Hamiltonian operator given by \[H^{op}=E_{B}+\frac{1}{8\mathcal{I}}(-a^{\mu}a^{\mu}\partial_{\nu}^{2}+3a^{\mu}\partial_{\mu}+a^{\mu}a^{\nu}\partial_{\mu}\partial_{\nu}). \tag{24}\] Here the last three terms are the three-sphere Laplacian given in terms of the collective coordinates and their derivatives to yield the eigenvalues \(4I(I+1)\) [24]. Inserting the relation \(\langle H^{op}\rangle=E_{B}+\frac{1}{2\mathcal{I}}I(I+1)\), which is also given in (15), and the identity \(\delta_{\mu\mu}=4\) into (23) we construct the Hamiltonian spectrum with the WOC in the first class formalism [9] \[\langle\tilde{H}\rangle=E_{B}+\frac{1}{2\mathcal{I}}\left[I(I+1)+\frac{1}{4}\right], \tag{25}\] where \(E_{B}\) is the soliton energy BPS lower bound in (5) and \(\mathcal{I}\) is the moment of inertia in (14). Comparing the canonical quantization spectrum result \(\langle H\rangle\) in (15) with \(\langle\tilde{H}\rangle\) obtained via the Dirac quantization with the WOC, the latter has the additional term \(\frac{1}{8\mathcal{I}}\) in (25). This additional contribution originates from the first class constraints in (17). 
The nucleon mass \(M_{N}\) (\(I=1/2\)) and delta baryon mass \(M_{\Delta}\) (\(I=3/2\)) are then given as follows [9] \[M_{N}=ef_{\pi}\left(\frac{6\pi^{2}}{e^{2}}+\frac{e^{2}}{6\pi^{2}}\right),\quad M _{\Delta}=ef_{\pi}\left(\frac{6\pi^{2}}{e^{2}}+\frac{2e^{2}}{3\pi^{2}}\right). \tag{26}\] Next we formulate the magnetic moments of the form [9] \[\mu_{p} = \frac{2M_{N}}{ef_{\pi}}\left(\frac{e^{2}}{48\pi^{2}}+\frac{\pi^{ 2}}{2e^{2}}\right),\quad\quad\mu_{n}=\frac{2M_{N}}{ef_{\pi}}\left(\frac{e^{2} }{48\pi^{2}}-\frac{\pi^{2}}{2e^{2}}\right),\] \[\mu_{\Delta^{++}} = \frac{2M_{N}}{ef_{\pi}}\left(\frac{e^{2}}{16\pi^{2}}+\frac{9\pi^{ 2}}{10e^{2}}\right),\quad\mu_{N\Delta}=\frac{2M_{N}}{ef_{\pi}}\cdot\frac{ \sqrt{2}\pi^{2}}{2e^{2}}, \tag{27}\] where \(M_{N}\) is now given by the nucleon mass with the WOC in (26) given in the first class formalism. Next we similarly obtain the axial coupling constant [9] \[g_{A}=\frac{4\pi}{e^{2}}\int_{0}^{\pi}d\mu\sin^{2}\mu\left(1+\cos\mu\right)= \frac{2\pi^{2}}{e^{2}}. \tag{28}\] Now we consider the charge radii. The electric and magnetic isovector mean square charge radii are given in the HSM, respectively [6; 9] \[\langle r^{2}\rangle_{E,I=1}=\langle r^{2}\rangle_{M,I=1}=\frac{2}{3e^{2} \mathcal{I}}\int_{S^{3}}dV_{B}\sin^{2}\mu\sin^{2}f\left(1+\left(\frac{df}{d \mu}\right)^{2}+\frac{\sin^{2}f}{\sin^{2}\mu}\right)=\frac{5}{6e^{2}f_{\pi}^{2}}, \tag{29}\] where the subscripts \(E\) and \(M\) denote electric and magnetic charge radii, respectively, and \(dV_{B}=\lambda_{B}^{3}\sin^{2}\mu\sin\theta\ d\mu\ d\ \theta\ d\phi\) on the hypersphere \(S^{3}\). Note that \(dV_{B}\) is given by product of three arc lengths: \(\lambda_{B}d\mu\), \(\lambda_{B}\sin\mu d\theta\) and \(\lambda_{B}\sin\mu\sin\theta d\phi\) and \(\lambda_{B}\) is radius of hypersphere soliton. Moreover, we find the charge radii in terms of \(ef_{\pi}\)[6; 9] \[\langle r^{2}\rangle_{M,I=0}^{1/2} = \langle r^{2}\rangle_{M,I=1}^{1/2}=\langle r^{2}\rangle_{M,p}^{1/ 2}=\langle r^{2}\rangle_{M,n}^{1/2}=\langle r^{2}\rangle_{E,I=1}^{1/2}=\sqrt{ \frac{5}{6}}\frac{1}{ef_{\pi}},\] \[\langle r^{2}\rangle_{E,I=0}^{1/2} = \frac{\sqrt{3}}{2}\frac{1}{ef_{\pi}},\quad\langle r^{2}\rangle_{ p}=\frac{19}{24}\frac{1}{(ef_{\pi})^{2}},\quad\langle r^{2}\rangle_{n}=-\frac{1}{24} \frac{1}{(ef_{\pi})^{2}}. \tag{30}\] Shuffling the above baryon and transition magnetic moments, we obtain the model independent sum rules in the HSM [6] \[\mu_{\Delta^{++}} = \frac{3}{5}(4\mu_{p}+\mu_{n})\] \[\mu_{N\Delta} = \frac{\sqrt{2}}{2}(\mu_{p}-\mu_{n})\] \[\mu_{p}+\mu_{n} = \frac{2}{9}M_{N}(M_{\Delta}-M_{N})\langle r^{2}\rangle_{E,I=0}\] \[\mu_{p}-\mu_{n} = \frac{M_{N}}{M_{\Delta}-M_{N}}. \tag{31}\] Next, we choose \(\langle r^{2}\rangle_{M,p}^{1/2}=0.80\) fm as an input parameter. We then have \[ef_{\pi}=225.23\ {\rm MeV}=(0.876\ {\rm fm})^{-1}, \tag{32}\] and exploiting this fixed value of \(ef_{\pi}\) and the phenomenological formulas in (26) - (30) we can proceed to calculate the physical quantities as shown in Table 1. Now we discuss the predictions in the soliton models. In Table 1, Prediction I and II are given by Hong [9] using the HSM, while Prediction III is given by ANW [3] exploiting the standard Skyrmion model defined in \(R^{3}\) manifold. Here the input parameters are indicated by \(*\). In Prediction I, the two experimental values for \(\langle r^{2}\rangle_{M,p}^{1/2}\) and \(M_{N}\) are used as input parameters. 
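Given these two inputs, the Prediction I entries of Table 1 can be reproduced from (26)-(32) with a few lines of Python; the sketch below (NumPy assumed) is only an illustration, and the choice of the smaller root for \(e^{2}/(6\pi^{2})\) is an assumption made here because it reproduces the quoted numbers.

```python
import numpy as np

hbar_c = 197.327                                   # MeV fm
# Prediction I inputs: <r^2>^{1/2}_{M,p} = 0.80 fm and M_N = 939 MeV
efpi = np.sqrt(5.0 / 6.0) / 0.80 * hbar_c          # e*f_pi from Eq. (30), ~ 225.2 MeV
M_N = 939.0

# Eq. (26): M_N = e*f_pi*(6 pi^2/e^2 + e^2/(6 pi^2)); solve for x = e^2/(6 pi^2),
# taking the smaller root (assumed here, since it reproduces Table 1).
ratio = M_N / efpi
x = (ratio - np.sqrt(ratio**2 - 4.0)) / 2.0
e = np.sqrt(6.0 * np.pi**2 * x)
fpi = efpi / e

M_Delta = efpi * (6.0 * np.pi**2 / e**2 + 2.0 * e**2 / (3.0 * np.pi**2))   # Eq. (26)
pref = 2.0 * M_N / efpi
mu_p   = pref * (e**2 / (48 * np.pi**2) + np.pi**2 / (2 * e**2))           # Eq. (27)
mu_n   = pref * (e**2 / (48 * np.pi**2) - np.pi**2 / (2 * e**2))
mu_Dpp = pref * (e**2 / (16 * np.pi**2) + 9 * np.pi**2 / (10 * e**2))
g_A    = 2.0 * np.pi**2 / e**2                                             # Eq. (28)

print(e, fpi)                                  # ~ 3.89 and ~ 57.9 MeV
print(M_Delta, mu_p, mu_n, mu_Dpp, g_A)        # ~ 1112 MeV, 2.99, -2.45, 5.70, 1.30
print(np.isclose(mu_Dpp, 0.6 * (4 * mu_p + mu_n)))   # first sum rule of Eq. (31)
```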
In Predictions II and III, we have exploited the same input parameters associated with \(M_{N}\) and \(M_{\Delta}\) to compare their predictions effectively. Note that in Prediction II we have finite charge radii, while in Prediction III we have infinite charge radii. Next we discuss the evaluations of Prediction I. First, the six predicted values for \(\mu_{\Delta^{++}}\), \(\langle r^{2}\rangle_{M,I=0}^{1/2}\), \(\langle r^{2}\rangle_{M,I=1}^{1/2}\), \(\langle r^{2}\rangle_{M,I=0}^{1/2}\) (in addition to the input parameters \(M_{N}\) and \(\langle r^{2}\rangle_{M,p}^{1/2}\)) are within about 1 % of the corresponding experimental data. \begin{table} \begin{tabular}{c r r r r} \hline Quantity & Prediction I (Hong) & Prediction II (Hong) & Prediction III (ANW) & Experiment \\ \hline \(\langle r^{2}\rangle_{M,I=0}^{1/2}\) & 0.80 fm & 0.63 fm & 0.92 fm & 0.81 fm \\ \(\langle r^{2}\rangle_{M,I=1}^{1/2}\) & 0.80 fm & 0.63 fm & \(\infty\) & 0.80 fm \\ \(\langle r^{2}\rangle_{M,p}^{1/2}\) & 0.80 fm\({}^{*}\) & 0.63 fm & \(\infty\) & 0.80 fm \\ \(\langle r^{2}\rangle_{M,n}^{1/2}\) & 0.80 fm & 0.63 fm & \(\infty\) & 0.79 fm \\ \(\langle r^{2}\rangle_{E,I=0}^{1/2}\) & 0.76 fm & 0.60 fm & 0.59 fm & 0.72 fm \\ \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) & 0.80 fm & 0.63 fm & \(\infty\) & 0.88 fm \\ \(\langle r^{2}\rangle_{p}\) & \((0.780\ {\rm fm})^{2}\) & \((0.61\ {\rm fm})^{2}\) & \(\infty\) & \((0.805\ {\rm fm})^{2}\) \\ \(\langle r^{2}\rangle_{n}\) & \(-(0.179\ {\rm fm})^{2}\) & \(-(0.14\ {\rm fm})^{2}\) & \(-\infty\) & \(-(0.341\ {\rm fm})^{2}\) \\ \(\mu_{p}\) & 2.98 & 1.88 & 1.87 & 2.79 \\ \(\mu_{n}\) & \(-2.45\) & \(-1.32\) & \(-1.31\) & \(-1.91\) \\ \(\mu_{\Delta^{++}}\) & 5.69 & 3.72 & 3.72 & \(4.7-6.7\) \\ \(\mu_{N\Delta}\) & 3.84 & 2.27 & 2.27 & 3.29 \\ \(M_{N}\) & 939 MeV\({}^{*}\) & 939 MeV\({}^{*}\) & 939 MeV\({}^{*}\) & 939 MeV \\ \(M_{\Delta}\) & 1112 MeV & 1232 MeV\({}^{*}\) & 1232 MeV\({}^{*}\) & 1232 MeV \\ \(g_{A}\) & 1.30 & 0.98 & 0.61 & 1.23 \\ \hline \end{tabular} \end{table} Table 1: In Predictions I and II (Hong) [9], we use the hypersphere soliton model. In Prediction III (ANW), we exploit the standard Skyrmion model. The input parameters are indicated by \(*\). Second, the three predictions for \(g_{A}\), \(\langle r^{2}\rangle_{E,I=0}^{1/2}\) and \(\langle r^{2}\rangle_{p}\) are within about 6 % of the experimental values. Third, the three predictions for \(M_{\Delta}\), \(\mu_{p}\) and \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) are within about 10 % of the experimental values. Now we comment on the hypersurface \(A_{3}\) of the hypersphere \(S^{3}\) of radius parameter \(\lambda_{B}\), and the charge radius \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) in (29). Exploiting the hyperspherical three metric in (2), we find that \(A_{3}\) can be analyzed in terms of three arc length elements \(\lambda_{B}d\mu\), \(\lambda_{B}\sin\mu d\theta\) and \(\lambda_{B}\sin\mu\sin\theta d\phi\), from which we find the three dimensional hypersurface manifold with \(A_{3}=2\pi^{2}\lambda_{B}^{3}\). Note that \(\lambda_{B}\) is the radial distance from the center of \(S^{3}\) to the hypersphere manifold \(S^{3}\) in \(R^{4}\). In fact, inserting the value \(ef_{\pi}=(0.876\ {\rm fm})^{-1}\) in (32) into the condition \(L_{B}=1\) in (9), in the HSM we obtain the fixed radius parameter given by \(\lambda_{B}=\frac{1}{ef_{\pi}}=0.876\ {\rm fm}\). On the other hand, the charge radius \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) is the physical quantity expressed in (29). 
Integrating over a relevant density on \(S^{3}\) corresponding to the integrand in (29), we evaluate \(\langle r^{2}\rangle_{E,I=1}\) which is now independent of \(\mu\), to yield a specific value of the electric isovector root mean square charge radius. The calculated charge radius then can be defined as the fixed radial distance to the point on a hypersurface manifold which does not need to be located only on the compact manifold \(S^{3}\) of radius parameter \(\lambda_{B}\). This hypersurface manifold is now a submanifold in \(R^{4}\) which is located at \(r=0.80\ {\rm fm}\) far from the center of \(S^{3}\). Note that \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) denotes the radial distance which is a geometrical invariant giving the same value both in \(R^{3}\) (for instance in volume \(R^{3}\) which contains the center of \(S^{3}\) and is described in terms of \((r,\theta,\phi)\) at \(\mu=\frac{\pi}{2}\)) and in \(R^{4}\). Next, the physical quantity \(\langle r^{2}\rangle_{E,I=1}^{1/2}\) calculated in \(R^{3}\) (and in \(R^{4}\)) then can be compared with the corresponding experimental value, similar to the other physical quantities such as \(M_{N}\) and \(M_{\Delta}\). Note that, as a toy model of soliton embedded in \(R^{3}\), we consider a uniformly charged manifold \(S^{2}\) described in terms of \((\theta,\phi)\) and a fixed radius parameter \(\lambda_{B}\) where we have \(A_{2}=4\pi\lambda_{B}^{2}\). By integrating over a surface charge density residing on \(S^{2}\), one can calculate the physical quantity such as the electric potential, at an arbitrary observation point which does not need to be located only on the compact manifold \(S^{2}\) of radius parameter \(\lambda_{B}\). Next, since the \(S^{2}\) soliton of fixed radius parameter \(\lambda_{B}\) is embedded in \(R^{3}\), we manifestly define an arbitrary radial distance from the center of the compact manifold to an observation point which is located in \(R^{3}=S^{2}\times R\). Here \(S^{2}\) denotes foliation leaves [25] of spherical shell of radius parameter \(\lambda\) (\(0\leq\lambda<\infty\)) and \(R\) is a manifold associated with radial distance. Note that the radial distance itself is a fixed geometrical invariant producing the same value both in \(R^{2}\) (for instance on equatorial plane \(R^{2}\) which contains the center of \(S^{2}\) and is delineated by \((r,\phi)\) at \(\theta=\frac{\pi}{2}\)) and in \(R^{3}\). The same mathematical logic can be applied to \(S^{3}\) soliton of fixed radius parameter \(\lambda_{B}\) embedded in \(R^{4}=S^{3}\times R\) where \(S^{3}\) stands for foliation leaves of hyperspherical shell of radius parameter \(\lambda\) (\(0\leq\lambda<\infty\)) and \(R\) is a manifold related with radial distance. Finally we have some comments on the Betti numbers associated with the manifold \(S^{3}\) in the HSM. First of all, the \(p\)-th Betti number \(b_{p}(M)\) is defined as the maximal number of \(p\)-cycles on \(M\): \[b_{p}(M)={\rm dim}\ H_{p}(M), \tag{33}\] where \(H_{p}(M)\) is the homology group of the manifold \(M\)[26; 27; 28]. For the case of \(S^{3}\), we obtain \[H_{0}(S^{3}) = H_{3}(S^{3})={\bf Z},\] \[H_{p}(S^{3}) = 0,\ {\rm otherwise}. \tag{34}\] The non-vanishing Betti numbers related with \(S^{3}\) are thus given by \(b_{0}(S^{3})=b_{3}(S^{3})=1\). ## III Intrinsic frequencies of baryons Now we investigate the intrinsic frequencies of baryons in the first class Hamiltonian formalism in the HSM. 
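The integration just described can also be carried out explicitly; the following sketch (Python with SciPy assumed, working in units \(e=f_{\pi}=1\) purely for illustration) integrates the density corresponding to the integrand in (29) for the identity map and reproduces \(\langle r^{2}\rangle_{E,I=1}=5/(6e^{2}f_{\pi}^{2})\).

```python
import numpy as np
from scipy.integrate import quad

e, fpi = 1.0, 1.0                            # illustrative units e = f_pi = 1
lam_B = 1.0 / (e * fpi)                      # radius parameter lambda_B, Eqs. (9), (32)
I_inertia = 3.0 * np.pi**2 / (e**3 * fpi)    # moment of inertia, Eq. (14)

# mu-integrand of Eq. (29) for the identity map f(mu) = pi - mu; the (theta, phi)
# part of dV_B has already been integrated, giving the factor 4*pi below.
def integrand(mu):
    f = np.pi - mu
    bracket = 1.0 + 1.0 + np.sin(f)**2 / np.sin(mu)**2     # = 3 for the identity map
    return np.sin(mu)**2 * np.sin(mu)**2 * np.sin(f)**2 * bracket

integral, _ = quad(integrand, 1e-9, np.pi - 1e-9)
r2 = 2.0 / (3.0 * e**2 * I_inertia) * 4.0 * np.pi * lam_B**3 * integral
print(r2, 5.0 / (6.0 * e**2 * fpi**2))       # both ~ 0.8333 = 5/(6 e^2 f_pi^2)
```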
To do this, in the HSM we construct the equivalent first class Hamiltonian \(\tilde{H}^{\prime}\) as follows \[\tilde{H}^{\prime}=\tilde{H}+\frac{1}{4\mathcal{I}}\pi_{\theta}\tilde{\Omega}_ {2}, \tag{35}\] to yield the corresponding Gauss law constraint algebra \[\{\tilde{\Omega}_{1},\tilde{H}^{\prime}\}=\frac{1}{2\mathcal{I}}\tilde{\Omega} _{2},\quad\{\tilde{\Omega}_{2},\tilde{H}^{\prime}\}=0. \tag{36}\] Note that \(\{\tilde{\Omega}_{1},\tilde{H}\}=0\) and \(\{\tilde{\Omega}_{2},\tilde{H}\}=0\). We then find the Hamiltonian spectrum for \(\tilde{H}^{\prime}\) \[\langle\tilde{H}^{\prime}\rangle=E_{B}+\frac{1}{2\mathcal{I}}\left[I(I+1)+ \frac{1}{4}\right], \tag{37}\] which is equal to that for \(\tilde{H}\) in (25), as expected. Next we consider the equation of motion in Poisson bracket form \[\dot{\tilde{W}}=\{\tilde{W},\tilde{H}^{\prime}\},\quad\text{for the first class variable }\tilde{\text{W}}, \tag{38}\] where the over-dot denotes time derivative. Making use of the equation of motion in (38), we obtain these two equations \[\dot{\tilde{a}}^{\mu}=\{\tilde{a}^{\mu},\tilde{H}^{\prime}\}=\frac{1}{4 \mathcal{I}}\tilde{\pi}^{\mu},\quad\dot{\tilde{\pi}}^{\mu}=\{\tilde{\pi}^{\mu},\tilde{H}^{\prime}\}=-\frac{1}{4\mathcal{I}}\tilde{\pi}^{\nu}\tilde{\pi}^{ \nu}\tilde{a}^{\mu}, \tag{39}\] where the first class fields \(\tilde{a}^{\mu}\) and \(\tilde{\pi}^{\mu}\) are given as follows \[\tilde{a}^{\mu}=a^{\mu}\left(\frac{a^{\nu}a^{\nu}+2\theta}{a^{\nu}a^{\nu}} \right)^{1/2},\quad\tilde{\pi}^{\mu}=(\pi^{\mu}-a^{\mu}\pi_{\theta})\left( \frac{a^{\nu}a^{\nu}}{a^{\nu}a^{\nu}+2\theta}\right)^{1/2}. \tag{40}\] In order to formulate the equations in (39), we have used the following identities among the physical fields \[\{\tilde{a}^{\mu},\tilde{\pi}^{\nu}\} = \delta^{\mu\nu}-\tilde{a}^{\mu}\tilde{a}^{\nu},\qquad\qquad\{ \tilde{\pi}^{\mu},\tilde{\pi}^{\nu}\}=\tilde{\pi}^{\mu}\tilde{a}^{\nu}- \tilde{a}^{\mu}\tilde{\pi}^{\nu},\] \[\{\tilde{a}^{\mu},\tilde{H}\} = \frac{1}{4\mathcal{I}}(\tilde{\pi}^{\mu}-\tilde{a}^{\mu}\tilde{a} ^{\nu}\tilde{\pi}^{\nu}),\quad\{\tilde{\pi}^{\mu},\tilde{H}\}=\frac{1}{4 \mathcal{I}}(\tilde{\pi}^{\mu}\tilde{a}^{\nu}\tilde{\pi}^{\nu}-\tilde{a}^{\mu }\tilde{\pi}^{\nu}\tilde{\pi}^{\nu}),\] \[\{\tilde{a}^{\mu},\pi_{\theta}\} = \tilde{a}^{\mu},\qquad\qquad\qquad\qquad\{\tilde{\pi}^{\mu},\pi_{ \theta}\}=-\tilde{\pi}^{\mu}. \tag{41}\] Applying the equation of motion algorithm in (38) to \(\dot{\tilde{a}}^{\mu}\), we find \[\ddot{\tilde{a}}^{\mu}=\{\dot{\tilde{a}}^{\mu},\tilde{H}^{\prime}\}=\frac{1} {4\mathcal{I}}\dot{\tilde{\pi}}^{\mu}=-\frac{1}{4\mathcal{I}^{2}}\left[I(I+1) +\frac{1}{4}\right]\tilde{a}^{\mu}, \tag{42}\] to yield the equation of motion for a simple harmonic oscillator \[\ddot{\tilde{a}}^{\mu}=-\omega_{I}^{2}\tilde{a}^{\mu}, \tag{43}\] where \(\omega_{I}\) is the intrinsic frequency of pulsating baryon with isospin quantum number \(I\) given by \[\omega_{I}=\frac{1}{2\mathcal{I}}\left[I(I+1)+\frac{1}{4}\right]^{1/2}. \tag{44}\] Making use of the formula for \(\omega_{I}\) in (44) for the nucleon \(N\) (\(I=\frac{1}{2}\)) and the delta baryon \(\Delta\) (\(I=\frac{3}{2}\)), we obtain predictions of intrinsic frequencies \(\omega_{N}\) and \(\omega_{\Delta}\) of the baryons given in Table 2. Note that we find the identity \(\omega_{\Delta}=2\omega_{N}\). Finally it seems appropriate to comment on the gauge fixing problem within the first class constraints of the Dirac Hamiltonian formalism. 
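Before doing so, we note that the two baryon entries of Table 2 follow directly from (14) and (44) once \(e\) and \(f_{\pi}\) are fixed; the minimal sketch below (Python, with \(e\simeq 3.89\) carried over from the Prediction I fit as an assumption) reproduces \(\omega_{N}\) and \(\omega_{\Delta}\).

```python
import numpy as np

hbar = 6.582e-22                 # MeV s
efpi = 225.23                    # MeV, Eq. (32)
e = 3.89                         # Skyrme parameter of the Prediction I fit (assumed here)

# Moment of inertia, Eq. (14): I = 3 pi^2/(e^3 f_pi) = 3 pi^2/(e^2 * (e f_pi)), in MeV^-1
I_inertia = 3.0 * np.pi**2 / (e**2 * efpi)

def omega(I):
    """Intrinsic pulsation frequency of Eq. (44), converted to sec^-1."""
    return np.sqrt(I * (I + 1.0) + 0.25) / (2.0 * I_inertia) / hbar

print(f"omega_N     = {omega(0.5):.2e} 1/s")      # ~ 0.87e23 sec^-1
print(f"omega_Delta = {omega(1.5):.2e} 1/s")      # ~ 1.74e23 sec^-1 = 2 omega_N
```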
In order to investigate the gauge fixing of the first class Hamiltonian \(\tilde{H}^{\prime}\) in (35), we introduce two canonical sets of ghost and anti-ghost fields together with auxiliary fields \((\mathcal{C}^{i},\bar{\mathcal{P}}_{i})\), \((\mathcal{P}^{i},\bar{\mathcal{C}}_{i})\), \((\mathcal{N}^{i},\mathcal{B}_{i})\), \((i=1,2)\) which satisfy the super-Poisson algebra, \(\{\mathcal{C}^{i},\bar{\mathcal{P}}_{j}\}=\{\mathcal{P}^{i},\bar{\mathcal{C}}_{j}\}=\{\mathcal{N}^{i},\mathcal{B}_{j}\}=\delta_{j}^{i}\). Here the super-Poisson bracket is defined as \(\{A,B\}=\frac{\delta A}{\delta q}|_{r}\frac{\delta B}{\delta p}|_{l}-(-1)^{\eta_{A}\eta_{B}}\frac{\delta B}{\delta q}|_{r}\frac{\delta A}{\delta p}|_{l}\), where \(\eta_{A}\) denotes the number of fermions, called the ghost number, in \(A\) and the subscripts \(r\) and \(l\) denote right and left derivatives, respectively. The BRST charge for the first class constraint algebra related with \(\tilde{H}^{\prime}\) is then given by \[Q=\mathcal{C}^{i}\tilde{\Omega}_{i}+\mathcal{P}^{i}\mathcal{B}_{i}. \tag{45}\] We choose the unitary gauge \[\chi^{1}=\Omega_{1},\quad\chi^{2}=\Omega_{2}, \tag{46}\] by selecting the fermionic gauge fixing function \(\Psi\): \(\Psi=\bar{\cal C}_{i}\chi^{i}+\bar{\cal P}_{i}{\cal N}^{i}\). \begin{table} \begin{tabular}{l c c} \hline Particle type & Notation & Intrinsic frequency \\ \hline Nucleon & \(\omega_{N}\) & \(0.87\times 10^{23}\) sec\({}^{-1}\) \\ Delta baryon & \(\omega_{\Delta}\) & \(1.74\times 10^{23}\) sec\({}^{-1}\) \\ Photon & \(\omega_{\gamma}\) & \(9.00\times 10^{23}\) sec\({}^{-1}\) \\ \hline \end{tabular} \end{table} Table 2: The intrinsic frequencies of particles [9; 13]. Exploiting the BRST charge \(Q\) in (45), we find the BRST transformation rule defined as \(\delta_{Q}F=\{Q,F\}\) for a physical field \(F\) \[\begin{array}{ll}\delta_{Q}a^{\mu}=-{\cal C}^{2}a^{\mu},&\delta_{Q}\pi^{\mu}=2{\cal C}^{1}a^{\mu}+{\cal C}^{2}(\pi^{\mu}-2a^{\mu}\pi_{\theta}),&\delta_{Q}\theta={\cal C}^{2}a^{\mu}a^{\mu},&\delta_{Q}\pi_{\theta}=2{\cal C}^{1},\quad\delta_{Q}{\cal C}^{i}=0,\\ \delta_{Q}\bar{\cal P}_{i}=\tilde{\Omega}_{i},&\delta_{Q}{\cal P}^{i}=0,&\delta_{Q}\bar{\cal C}_{i}={\cal B}_{i},&\delta_{Q}{\cal N}^{i}=-{\cal P}^{i},&\delta_{Q}{\cal B}_{i}=0.\end{array} \tag{47}\] Note that \(\tilde{H}^{\prime}\) is not BRST invariant, which implies that \(\delta_{Q}\tilde{H}^{\prime}\neq 0\). Next, we obtain the gauge fixed Hamiltonian \[\tilde{H}^{\prime\prime}=\tilde{H}^{\prime}-\frac{1}{2{\cal I}}{\cal C}^{1}\bar{\cal P}_{2}, \tag{48}\] which is now invariant under the BRST transformation rule in (47), namely \(\delta_{Q}\tilde{H}^{\prime\prime}=0\). Note that the BRST charge \(Q\) in (45) is nilpotent so that we can have \(\delta_{Q^{2}}F=\{Q,\{Q,F\}\}=0\) for a physical field \(F\). Note also that \(H_{eff}\equiv\tilde{H}^{\prime\prime}-\{Q,\Psi\}\) is the BRST invariant Hamiltonian including the fermionic gauge fixing function \(\Psi\). ## IV SPM predictions In this section we will predict the physical quantities such as the photon intrinsic frequency and photon size in the SPM [13]. To do this, we will exploit the Nambu-Goto string action [17; 18] and its extended rotating bosonic string model in \(D=3+1\) dimension spacetime [29]. 
Note that in the \(D=26\) dimension open string theory which will be briefly discussed below, it is well known that there exists the vector boson with 24 independent polarizations [10; 11], corresponding to the photon in the stringy photon model defined in the \(D=3+1\) dimension spacetime [13] considered in this paper. Before we construct the SPM, we pedagogically summarize a mathematical formalism for the Nambu-Goto open string which is related with a photon. In order to define the action on curved manifold, we introduce \((M,g_{ab})\) which is a \(D\) dimensional spacetime manifold \(M\) associated with the metric \(g_{ab}\). Given \(g_{ab}\), we can have a unique covariant derivative \(\nabla_{a}\) satisfying [30] \[\nabla_{a}g_{bc} = 0,\] \[\nabla_{a}\omega^{b} = \partial_{a}\omega^{b}+\Gamma^{b}_{\ ac}\ \omega^{c},\] \[(\nabla_{a}\nabla_{b}-\nabla_{b}\nabla_{a})\omega_{c} = R_{abc}^{\ \ \ d}\ \omega_{d}. \tag{49}\] We parameterize an open string by two world sheet coordinates \(\tau\) and \(\sigma\), and then we have the corresponding vector fields \(\xi^{a}=(\partial/\partial\tau)^{a}\) and \(\zeta^{a}=(\partial/\partial\sigma)^{a}\). The Nambu-Goto string action is now given by [17; 18] \[S=-\kappa\int\int\ d\tau d\sigma f(\tau,\sigma), \tag{50}\] where the coordinates \(\tau\) and \(\sigma\) have ranges \(\tau_{1}\leq\tau\leq\tau_{2}\) and \(0\leq\sigma\leq\pi\) respectively and \[f(\tau,\sigma)=[(\xi\cdot\zeta)^{2}-(\xi\cdot\xi)(\zeta\cdot\zeta)]^{1/2}. \tag{51}\] Here the string tension \(\kappa\) is defined by \(\kappa=\frac{1}{2\pi\alpha^{\prime}}\), with \(\alpha^{\prime}\) being the universal slope of the linear Regge trajectories [31]. We perform an infinitesimal variation of the world sheets \(\gamma_{\alpha}(\tau,\sigma)\) traced by the open string during its evolution in order to find the string geodesic equation from least action principle. Here we impose the restriction that the length of the string is \(\tau\) independent. We introduce the deviation vector \(\eta^{a}=(\partial/\partial\alpha)^{a}\) which represents the displacement to an infinitesimally nearby world sheet, and we consider \(\Sigma\) which denotes the three dimensional submanifold spanned by the world sheets \(\gamma_{\alpha}(\tau,\sigma)\). We then may choose \(\tau\), \(\sigma\) and \(\alpha\) as coordinates of \(\Sigma\) to yield the commutator relations \[\pounds_{\xi}\eta^{a} = \xi^{b}\nabla_{b}\eta^{a}-\eta^{b}\nabla_{b}\xi^{a}=0,\] \[\pounds_{\zeta}\eta^{a} = \zeta^{b}\nabla_{b}\eta^{a}-\eta^{b}\nabla_{b}\zeta^{a}=0,\] \[\pounds_{\xi}\zeta^{a} = \xi^{b}\nabla_{b}\zeta^{a}-\zeta^{b}\nabla_{b}\xi^{a}=0. \tag{52}\] Now we find the first variation as follows \[\frac{dS}{d\alpha}=\int\int d\tau d\sigma\ \eta_{b}(\xi^{a}\nabla_{a}p^{b}+ \zeta^{a}\nabla_{a}\pi^{b})-\int d\sigma\ p^{b}\eta_{b}|_{\tau=\tau_{1}}^{\tau= \tau_{2}}-\int d\tau\ \pi^{b}\eta_{b}|_{\sigma=0}^{\sigma=\pi}, \tag{53}\] where the world sheet currents associated with \(\tau\) and \(\sigma\) directions are respectively given by [31], \[p^{a} = \frac{\kappa}{f}[(\xi\cdot\zeta)\zeta^{a}-(\zeta\cdot\zeta)\xi^{a}],\] \[\pi^{a} = \frac{\kappa}{f}[(\xi\cdot\zeta)\xi^{a}-(\xi\cdot\xi)\zeta^{a}]. 
\tag{54}\] Using the endpoint conditions \[\eta^{a}(\tau=\tau_{1};\sigma)=\eta^{a}(\tau=\tau_{2};\sigma)=0, \tag{55}\] and \[\pi^{a}(\tau;\sigma=0)=\pi^{a}(\tau;\sigma=\pi)=0, \tag{56}\] we have string geodesic equation \[\xi^{a}\nabla_{a}p^{b}+\zeta^{a}\nabla_{a}\pi^{b}=0, \tag{57}\] and constraint identities [31] \[p\cdot\zeta = 0,\quad p\cdot p+\kappa^{2}\zeta\cdot\zeta=0,\] \[\pi\cdot\xi = 0,\quad\pi\cdot\pi+\kappa^{2}\xi\cdot\xi=0. \tag{58}\] For more details of the string theory and deviation vector on the curved manifold, see the references [15; 16; 30]. Next we consider the open rotating string in the (3+1) dimensional flat spacetime and delineate the string in terms of the coordinates \[x_{\mu}=(x_{0},x_{i})=(\tau,x_{i}(\tau;\sigma)),\quad(i=1,2,3). \tag{59}\] The Nambu-Goto string action in (50) is then described in terms of \(f(\tau,\sigma)\) given by \[f(\tau,\sigma)=[(\dot{x}_{\mu}x_{\mu}^{\prime})^{2}-(\dot{x}_{\mu}\dot{x}_{ \mu})(x_{\mu}^{\prime}x_{\mu}^{\prime})]^{1/2}, \tag{60}\] where the overdot and prime denote derivatives with respect to \(\tau\) and \(\sigma\), respectively. In this paper, we use the metric signature \((+,-,-,-)\). Inserting (59) into (60), we find \[f(\tau,\sigma)=[(\dot{x}_{i}x_{i}^{\prime})^{2}+(1-\dot{x}_{i}\dot{x}_{i})x_{ j}^{\prime}x_{j}^{\prime}]^{1/2}. \tag{61}\] Moreover we proceed to construct the world sheet currents \[p_{0} = \frac{\kappa}{f}x_{i}^{\prime}x_{i}^{\prime},\ \ \ \ \ p_{i}=-\frac{\kappa}{f}[(\dot{x}_{j}x_{j}^{\prime})x_{i}^{ \prime}-(x_{j}^{\prime}x_{j}^{\prime})\dot{x}_{i}],\] \[\pi_{0} = -\frac{\kappa}{f}\dot{x}_{i}x_{i}^{\prime},\ \ \ \pi_{i}=-\frac{\kappa}{f}[(\dot{x}_{j}x_{j}^{\prime})\dot{x}_{i}+(1-\dot{x}_{j} \dot{x}_{j})x_{i}^{\prime}]. \tag{62}\] Now, exploiting (57) we obtain the string equation of motion \[\frac{\partial p_{\mu}}{\partial\tau}+\frac{\partial\pi_{\mu}}{\partial\sigma }=0, \tag{63}\] and the string boundary condition \[\pi_{\mu}(\tau;\sigma=0)=\pi_{\mu}(\tau;\sigma=\pi)=0. \tag{64}\] Inserting \(p_{\mu}\) and \(\pi_{\mu}\) in (62) into the string equation of motion in (63), we find \[\frac{\partial}{\partial\tau}\left(\frac{x_{i}^{\prime}x_{i}^{ \prime}}{f}\right)-\frac{\partial}{\partial\sigma}\left(\frac{\dot{x}_{i}x_{i} ^{\prime}}{f}\right) = 0,\] \[\frac{\partial}{\partial\tau}\left[\frac{(\dot{x}_{j}x_{j}^{ \prime})x_{i}^{\prime}-(x_{j}^{\prime}x_{j}^{\prime})\dot{x}_{i}}{f}\right] +\frac{\partial}{\partial\sigma}\left[\frac{(\dot{x}_{j}x_{j}^{ \prime})\dot{x}_{i}+(1-\dot{x}_{j}\dot{x}_{j})x_{i}^{\prime}}{f}\right] = 0. \tag{65}\] Exploiting the boundary conditions in (64), we also obtain at \(\sigma=0\) and \(\sigma=\pi\) \[\dot{x}_{i}x^{\prime}_{i}=0,\quad(1-\dot{x}_{j}\dot{x}_{j})x^{ \prime}_{i}=0. \tag{66}\] Next, in order to describe an open string, which is rotating in \((x_{1},x_{2})\) plane and residing on the string center of mass, we take an ansatz [29] \[x^{rot}_{i}=(r(\sigma)\cos\omega\tau,r(\sigma)\sin\omega\tau,0). \tag{67}\] Here we propose that \(r(\sigma)\) and \(\omega\) represent respectively the diameter and angular velocity of the photon with solid spherical shape which is delineated by the open string. Note that \(r(\sigma=\pi/2)\) denotes the center of the diameter of string. More specifically, \(r(\sigma=\pi/2)\) is located in the center of the solid sphere which describes the photon. The first boundary condition in (66) is trivially satisfied and the second one yields \[r^{\prime}(\sigma=0,\pi)=0. 
\tag{68}\] We then obtain \(r(\sigma)\) which fulfills the above condition in (68) \[r(\sigma)=\frac{1}{\omega}\cos\sigma. \tag{69}\] Note that the photon has a finite size which is filled with mass. Using the photon configuration in (67) and (69) together with (62), we find the rotational energy of the photon \[E^{rot}=\int_{0}^{\pi}d\sigma\ p^{rot}_{0}=\frac{1}{2\alpha^{ \prime}\hbar\omega}, \tag{70}\] where we have included \(\hbar\) factor explicitly, and the value of \(\alpha^{\prime}\) is given by \(\alpha^{\prime}=0.95\ {\rm GeV}^{-2}\)[31]. Next we evaluate the photon intrinsic frequency and size in the SPM. To do this, we calculate the vibrational energy of photon by introducing the string coordinate configurations \[x_{i}=x^{rot}_{i}+y_{i},\quad i=1,2,3. \tag{71}\] Exploiting the coordinates in (71), we expand the string Lagrangian density \[{\cal L}={\cal L}_{0}+\frac{1}{2}\frac{\partial^{2}{\cal L}}{ \partial\dot{x}_{i}\partial\dot{x}_{j}}|_{0}\dot{y}_{i}\dot{y}_{j}+\frac{ \partial^{2}{\cal L}}{\partial\dot{x}_{i}\partial x^{\prime}_{j}}|_{0}\dot{y} _{i}y^{\prime}_{j}+\frac{1}{2}\frac{\partial^{2}{\cal L}}{\partial x^{\prime}_ {i}\partial x^{\prime}_{j}}|_{0}y^{\prime}_{i}y^{\prime}_{j}+\cdots, \tag{72}\] where the subscript \(0\) denotes that the terms in (72) are evaluated by using the coordinates in (67). The ellipsis stands for the higher derivative terms. Here the first term is a constant given by \({\cal L}_{0}={\cal L}(x^{rot}_{i})\). The first derivative terms vanish after exploiting the string equation of motion in (63). Next in order to obtain the vibration energy of photon, we define coordinates \(z_{i}\) which co-rotates with the string itself \[z_{1} = y_{1}\cos\omega\tau+y_{2}\sin\omega\tau,\] \[z_{2} = -y_{1}\sin\omega\tau+y_{2}\cos\omega\tau,\] \[z_{3} = y_{3}. \tag{73}\] After some algebra, we obtain the Lagrangian density associated with the coordinates \(z_{i}\) \[{\cal L}(z_{i}) = \frac{\kappa}{2\sin^{2}\sigma}\left[\frac{1}{\omega}(\dot{z}_{2} +\omega z_{1})^{2}+2\sin\sigma\cos\sigma((\dot{z}_{1}-\omega z_{2})z^{\prime}_ {2}\right. \tag{74}\] \[\left.-(\dot{z}_{2}+\omega z_{1})z^{\prime}_{1})-\omega z^{\prime 2 }_{2}\right]+\frac{\kappa}{2\omega}(\dot{z}^{2}_{3}-\omega^{2}z^{\prime 2}_{3}).\] The equations of motion for the directions \(z_{2}\) and \(z_{3}\) are then given by \[\ddot{z}_{2}+\omega^{2}z_{2}+2\omega^{2}\cot\sigma z^{\prime}_{2} -\omega^{2}z^{\prime\prime}_{2} = 0,\] \[\ddot{z}_{3}-\omega^{2}z^{\prime\prime}_{3} = 0. \tag{75}\] Now the photon is assumed to be in the ground state of the string energy spectrum. From (75) we find the eigenfunctions for the ground states \[z_{2} = c_{2}\sin(\omega\tau+\phi_{2}), \tag{76}\] \[z_{3} = c_{3}\cos\sigma\sin(\omega\tau+\phi_{3}). \tag{77}\] Here \(\phi_{2}\) and \(\phi_{3}\) are arbitrary phase constants which are irrelevant to the physics arguments of interest. It seems appropriate to address comments on the photon vibration modes. The transverse mode \(z_{2}\) in (76) is independent of the string coordinate \(\sigma\), so that the photon can tremble back and forth with a constant amplitude, while the longitudinal mode \(z_{3}\) in (77) possesses sinusoidal dependence on \(\sigma\). Here note that \(z_{3}\) does not move at the center of the string, namely at \(\sigma=\pi/2\), independent of \(\tau\) and the other parts of the string oscillate with the frequency \(\omega\). 
As for the transverse mode \(z_{1}\), we can find that any value for \(z_{1}\) satisfies the Euler-Lagrange equation for \(z_{1}\) obtained from the Lagrangian density in (74). Up to now we have considered a single massive photon with the solid sphere shape, whose diameter is delineated in terms of the length of the open string. The photon thus has a disk-like cross section on which the coordinates \(z_{1}\) and \(z_{2}\) reside. Note that, similar to the phonon associated with lattice vibrations of massive particles, the photon is massive so that we can have three polarization directions: two transverse directions as in the massless photon case, and an additional longitudinal one. Keeping this argument in mind, we find that there exist two transverse modes \(z_{1}\) and \(z_{2}\) associated with the photon vibrations on the \(z_{1}\)-\(z_{2}\) cross-sectional disk, in addition to one longitudinal mode \(z_{3}\). The transverse mode in the \(z_{1}\) direction thus yields the ground-state eigenfunction, with an arbitrary phase constant \(\phi_{1}\) similar to \(\phi_{2}\) and \(\phi_{3}\) discussed above, \[z_{1}=c_{1}\sin(\omega\tau+\phi_{1}). \tag{78}\] Note that, as in the case of the massless photon, the \(z_{1}\) mode oscillates with the same frequency \(\omega\) that the \(z_{2}\) mode does. Note also that the above solutions \(z_{i}\) satisfy their endpoint conditions at \(\sigma=0\) and \(\sigma=\pi\) \[z_{i}^{\prime}=0,\quad i=1,2,3. \tag{79}\] The energy eigenvalues of the ground states in (76)-(78) are then given by \[E_{i}^{vib}=\frac{1}{2}\hbar\omega,\quad i=1,2,3. \tag{80}\] Exploiting the energies in (80), we arrive at the vibrational energy of the open string ground state \[E^{vib}=\sum_{i=1}^{3}E_{i}^{vib}=\frac{3}{2}\hbar\omega. \tag{81}\] In the SPM the classical energy of the string is given by \(E^{rot}\) in (70), while the corresponding quantum mechanical energy is given by \(E^{vib}\) in (81). Now we define the total energy of the string configuration as a function of \(\omega\) \[E=E^{rot}+E^{vib}=\frac{1}{2\alpha^{\prime}\hbar\omega}+\frac{3}{2}\hbar\omega. \tag{82}\] Note that we have already removed the translational degree of freedom by considering the string observer residing on the photon center of mass in the SPM. Varying the energy \(E\) with respect to \(\omega\), we find the minimum value condition for \(E\) at \(\omega=\omega_{\gamma}\) \[\frac{dE}{d\omega}(\omega=\omega_{\gamma})=0, \tag{83}\] which yields the intrinsic frequency \(\omega_{\gamma}\) of the photon [13] \[\omega_{\gamma}=\frac{1}{\hbar(3\alpha^{\prime})^{1/2}}=9.00\times 10^{23} \ \mathrm{sec}^{-1}, \tag{84}\] which is greater than the baryon intrinsic frequencies as shown in Table 2. Note that we have related the HSM and SPM in terms of the intrinsic frequencies of the baryons and photon, both of which are extended objects. Next we consider the photon radius as half of the open string length. Exploiting the photon intrinsic frequency \(\omega_{\gamma}\) in (84) we obtain the photon radius [13] \[\langle r^{2}\rangle^{1/2}(\mathrm{photon})=\frac{\mathrm{c}}{2\omega_{\gamma} }=0.17\ \mathrm{fm}, \tag{85}\] which is \(21\%\) of the proton magnetic charge radius \(\langle r^{2}\rangle_{M}^{1/2}(\mathrm{proton})=0.80\ \mathrm{fm}\) shown in Table 1. Next, even though up to now we have investigated the stringy photon model in \(D=3+1\) dimensional spacetime [13], we parenthetically discuss the bosonic string theory in the critical dimension \(D=26\). 
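Before turning to that aside, we note that the numbers quoted in (84) and (85) are easy to reproduce. A minimal numerical sketch (assuming only the quoted value \(\alpha^{\prime}=0.95~\mathrm{GeV}^{-2}\) and the standard values of \(\hbar\) and \(c\)) gives:

```python
import math

alpha_prime = 0.95            # Regge slope in GeV^-2, as quoted above
hbar = 6.582119569e-25        # GeV * s
c = 2.99792458e23             # speed of light in fm / s

# Eq. (84): omega_gamma = 1 / (hbar * sqrt(3 * alpha')), with sqrt(3 alpha') in GeV^-1
omega_gamma = 1.0 / (hbar * math.sqrt(3.0 * alpha_prime))
print(f"omega_gamma = {omega_gamma:.2e} / s")     # ~ 9.0e23 / s

# Eq. (85): photon radius as half of the open string length, r = c / (2 omega_gamma)
radius = c / (2.0 * omega_gamma)
print(f"radius = {radius:.2f} fm")                # ~ 0.17 fm
```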
In the light-cone gauge quantization of bosonic string theory, the so-called anomaly associated with the commutator of Lorentz generators is canceled in the critical dimension \(D=26\) together with the condition \(a=-\frac{D-2}{2}R=1\)[10; 11], where \(R\) is the Ramanujan evaluation [32] of the infinite sum \[R\equiv 1+2+3+\cdots=-\frac{1}{12}. \tag{86}\] Now we investigate the Ramanujan evaluation procedure for \(R\). To accomplish this, we manipulate the difference \(R-4R\) to yield \(R-4R=-3R=1+(2-4)+3+(4-8)+5+\cdots=1-2+3-4+5+\cdots=\frac{1}{4}\). Here we have used the identity \(1+2x+3x^{2}+4x^{3}+\cdots=\frac{1}{(1-x)^{2}}\), which yields \(1-2+3-4+\cdots=\frac{1}{4}\) at \(x=-1\). From the above relation \(R-4R=-3R=\frac{1}{4}\), we finally obtain the Ramanujan evaluation \(R=-\frac{1}{12}\) in (86). Next, exploiting the Ramanujan evaluation in (86) we can obtain the relation \[\frac{1}{2}+\frac{3}{2}+\frac{5}{2}+\cdots=\frac{1}{2}R-R=\frac{1}{24}, \tag{87}\] which has been used in the \(D=10\) superstring theory [11; 13]. Note that the stringy photon model has been described in \(D=3+1\) dimensional spacetime, without resorting to (86) and (87). ## V Conclusions In summary, we have formulated the HSM to discuss the Dirac quantization in the first class formalism and the predictions of baryon physical quantities in the HSM. To be specific, we have evaluated the intrinsic pulsating frequencies of the baryons. To accomplish this, we have exploited the first class Hamiltonian possessing the WOC and quantized the hypersphere soliton in the HSM. Here we have noticed that the profile function in the soliton energy of the hypersphere soliton satisfies the two first order differential equations that attain the BPS topological lower bound of the soliton energy. Next, we have evaluated baryon physical quantities such as the baryon masses, magnetic moments, charge radii and axial coupling constant. Shuffling the baryon and transition magnetic moments, we have constructed model independent sum rules. The intrinsic frequency of a more massive particle has also been shown to be greater than that of a less massive one. Explicitly, we have evaluated the intrinsic frequencies \(\omega_{N}=0.87\times 10^{23}\) sec\({}^{-1}\) and \(\omega_{\Delta}=1.74\times 10^{23}\) sec\({}^{-1}\) of the nucleon and delta baryon, respectively, which yield the identity \(\omega_{\Delta}=2\omega_{N}\). Next, making use of the Nambu-Goto string action and its extended rotating bosonic string theory, we have formulated the SPM to find the energy of the string configuration, which consists of the rotational and vibrational energies of the open string. We have then investigated the photon intrinsic frequency by exploiting the string which performs both rotational and pulsating motions. Here we have interpreted the string as the diameter of a photon of solid spherical shape. Explicitly, we have found that the intrinsic frequency of the photon is comparable to those of baryons such as the nucleon and the delta baryon. In the SPM we have also evaluated the photon size, given by the string radius, which is approximately \(21\%\) of the proton magnetic charge radius. It will be interesting to search for strong experimental evidence of the photon size, which could be associated with manifest photon phenomenology such as the photoelectric effect, Compton scattering and Raman scattering. 
Assuming that the SPM exploited in this paper could be a precise description for the photon, the photon intrinsic frequency \(\omega_{\gamma}=9.00\times 10^{23}\) sec\({}^{-1}\) and photon size \(\langle r^{2}\rangle^{1/2}({\rm photon})=0.17\) fm could be fundamental predictions in the extended object phenomenology, similar to \(\omega_{N}\), \(\omega_{\Delta}\) and the charge radii in the hypersphere soliton model. ## Acknowledgments The author would like to thank the anonymous editor and referees for helpful comments. ## Funding S.T. Hong was supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, NRF-2019R1I1A1A01058449. **Data Availability Statement** No data has been used in the work. **Conflicts of Interest** The author declares no conflict of interest.
2310.18981
Outsourcing policies for the Facility Location Problem with Bernoulli Demand
This paper focuses on the Facility Location Problem with Bernoulli Demand, a discrete facility location problem with uncertainty where the joint distribution of the customers' demands is expressed by means of a set of possible scenarios. A two-stage stochastic program with recourse is used to select the facility locations and the a priori assignments of customers to open plants, together with the a posteriori strategy to apply in those realizations where the a priori solution is not feasible. Four alternative outsourcing policies are studied for the recourse action, and a mathematical programming formulation is presented for each of them. Extensive computational experiments have been carried-out to analyze the performance of each of the formulations and to compare the quality of the solutions produced by each of them relative to the other outsourcing policies.
Maria Albareda-Sambola, Elena Fernández, Francisco Saldanha-da-Gama
2023-10-29T11:19:20Z
http://arxiv.org/abs/2310.18981v1
# Outsourcing policies for the Facility Location Problem with Bernoulli Demand ###### Abstract This paper focuses on the Facility Location Problem with Bernoulli Demand, a discrete facility location problem with uncertainty where the joint distribution of the customers' demands is expressed by means of a set of possible scenarios. A two-stage stochastic program with recourse is used to select the facility locations and the _a priori_ assignments of customers to open plants, together with the _a posteriori_ strategy to apply in those realizations where the _a priori_ solution is not feasible. Four alternative outsourcing policies are studied for the recourse action, and a mathematical programming formulation is presented for each of them. Extensive computational experiments have been carried out to analyze the performance of each of the formulations and to compare the quality of the solutions produced by each of them relative to the other outsourcing policies. **Keywords:** Location problems, Uncertainty modelling, Experimental results, Combinatorial optimization ## 1 Introduction Broadly speaking, facility location problems look for the best locations for a set of facilities that must satisfy service requests of a given set of customers [see, e.g., 1]. It is often assumed that customers' demand is part of the input data and, thus, is known in advance. Nonetheless, in practice, customers' demand is subject to a high level of uncertainty, so the above assumption very seldom holds. Examples of location problems with non-deterministic demands include any logistics-related location problem where demand levels might change over different time periods (postal services, supermarkets, warehouses to distribute goods with season-dependent demand, airports, etc.). Brandeau and Chiu [2], Louveaux [3], and Snyder [4] have surveyed different aspects of Stochastic Location Problems. It is easy to find situations, for instance, in logistics applications, where requests of service are unitary in the sense that each service request consumes one resource unit (e.g. one worker) from the service center. This is the case, for example, of delivery services, door-to-door mail services or home assistance services. Facility location problems with unit-demand customers have been studied in the literature motivated by different types of applications, namely in Telecommunications [e.g., 5] and Healthcare [e.g., 6]. Other work not focusing on a specific application can also be found in [7] and [8]. When demand is stochastic, each possible scenario is characterized by the set of customers with demand, and requests can be modeled by means of binary vectors. In that case, the components of such a vector are Bernoulli random variables. Two-stage stochastic models with a recourse function [see, e.g., 9, 10] have been proposed for addressing several stochastic combinatorial optimization problems with Bernoulli demand. This is the case of the probabilistic traveling salesman problem [11, 12, 13], location-routing problems with Bernoulli demands [14, 15], or the stochastic generalized assignment problem [16]. An example of a stochastic facility location problem with unitary demands is the Facility Location Problem with Bernoulli Demand (FLPBD) that we address in this paper. The FLPBD is a discrete facility location problem where the uncertain service demand of each customer follows a Bernoulli distribution, and the joint distribution of customers' demands is expressed by means of a set of possible scenarios. 
This problem aims at modeling situations where a facility provides a service and demand refers to whether a customer requires to be served. Companies providing repair or maintenance services are potential agents in such a modeling framework, and customers may represent sets of users grouped according some criterion (e.g. their location), which are assigned to open facilities that should handle the existing demand. The term "facility" should be looked at in a very general way. For instance, we may be referring to a worker or a team. One potential example concerns mobile health clinics. This type of facility is usually set to assist some specific area or region previously assigned to it. In case the occurring demand is higher than the service capacity, extra personnel is necessary, which may lead to additional costs. Another potential application of the FLPBD can be found in elevator maintenance: each repair or maintenance team must assist a prespecified set of customers in case they call for service. If the actual demand turns out to be higher than the service capacity (which may indicate maximum time service), then the service still has to be provided; this may call for a temporary relocation of workers from other teams or simply for outsourcing the service to a third party. Other examples of settings fitting the FLPBD include target-oriented advertisement activities, door-to-door product demonstration, etc. In all these cases, potential customers are previously assigned to the facility and may or may not have actual demand, and the total actual demand may turn out to exceed the estimated service capacity. The FLPBD was motivated and introduced in Albareda-Sambola et al. [17] for the particular case when the distributions of the customers' demands are independent, all with the same demand probability. In Albareda-Sambola et al. [17] two different outsourcing policies were considered, and closed forms for the corresponding recourse functions were presented. The obtained numerical results showed that the proposed methodology was computationally highly demanding as the sizes of the instances increased. In Albareda-Sambola et al. [18] the same authors considered the heterogeneous version of the problem where the demand probabilities are not necessarily the same, for the same two outsourcing policies, again under the assumption of independent demands. For that case, evaluating exactly the cost of a solution becomes computationally unaffordable, so several heuristics, based on GRASP and Path Relinking, were proposed in which solution costs were computed by simulation. The problem and some of its variants has attracted the attention of other researchers such as Bieniek [19], who considered stochastic i.i.d. demands with arbitrary probability distributions. In this paper we focus on the analysis of several outsourcing policies for the FLPBD, which guide the two-stage model that we consider. The first-stage decision is to select a set of facilities (plants) to open together with a _tentative_ (_a priori_) allocation of customers within the set of selected facilities. The second-stage solution, which determines the recourse action, builds a specific assignment of customers to open plants for each possible realization of the customers' demand. 
Since each facility has a capacity in terms of the maximum number of customers that it can really serve, for some of the scenarios it may happen that the number of customers who actually have demand, with an _a priori_ allocation to a given open plant, exceeds the capacity of the plant. In such cases the outsourcing policy dictates how to re-allocate the exceeding demand, so the total existing demand is eventually served. In the above framework the choice of the outsourcing policy becomes a highly relevant issue, as it indicates how to _react_ in those scenarios when the existing demand exceeds the available capacity, and how to evaluate such a _reaction_. In other words, the outsourcing policy determines the criterion according to which the quality of solutions will be assessed. Hence, some outsourcing policies may be better suited than others for some specific classes of scenarios, so different outsourcing policies may lead to different solutions. Given that one of the main goals of stochastic models is to produce solutions that are _robust_ for the considered scenarios, determining a suitable outsourcing policy is on itself a strategic decision, which must be made in advance, and may have a notable impact on the specific location/allocation decisions. Still, to the best of our knowledge, there is no comparative analysis of alternative recourse functions for stochastic discrete location models, with the exception of [20]. In [20] two recourse functions are considered and compared, exclusively through the cost of their respective optimal solutions. In this paper we develop an extensive comparative analysis of the performance of several outsourcing policies. On the one hand, we consider four different recourse functions and, on the other hand, the comparison is carried out through the _a priori_ decisions produced by each outsourcing policy, which are also evaluated from the perspective of the other outsourcing policies. The two outsourcing policies considered in Albareda-Sambola et al. [17, 18] are _facility outsourcing_ (FO) and _customer outsourcing_ (CO). Broadly speaking, in FO each facility with insufficient capacity for serving all its allocated customers who have demand in a given scenario, outsources its deficit of capacity and serves from that plant all its allocated customers with demand. On the contrary, in CO when in a given scenario some open facility has not enough capacity to serve all the customers allocated to the plant in the _a priori_ solution, only a subset of them are in fact served from that facility, while service to the remaining customers is outsourced, so they are directly served from an external source. In Albareda-Sambola et al. [17, 18] the outsourced customers are selected with an order driven strategy (OD-CO), according to the FIFO policy relative to the arrival order of service requests. Given the widely recognized relevance of outsourcing in the context of production and distribution systems (see, e.g., Benaroch et al. 21 and Dolgui and Proth 22), in this paper we consider an additional alternative policy for the selection of outsourced customers within CO, which is based on a cost-minimization criterion, and will be referred to as _cost-driven_ (CD-CO). We also analyze an additional outsourcing policy, _reassignment outsourcing_ (RO), derived from the possibility of reassigning some customers allocated to an open facility with insufficient capacity to other open facilities with exceeding capacity. 
While FO serves all demand customers from the plant they are allocated to, both CO strategies serve outsourced customers from an external source (third party), whereas RO serves each outsourced customer from one open facility, different from the one the customer was allocated to, but with available capacity. For each of the considered policies, we present a mathematical programming formulation, which allows us to optimally solve the problem when uncertainty is expressed via a set of scenarios. As we empirically show, some outsourcing strategies lead to models much more difficult to tackle than others. We carry out extensive computational experiments, whose results we summarize and analyze. Furthermore, the formulations are used to analyze the actual performance of the underlying outsourcing policies. This is accomplished considering both correlated and independent demands, so that insights can be gathered with respect to possible data dependency. In particular, we carry out a comparative analysis of the four outsourcing policies, by evaluating the quality of the solutions produced by each specific strategy relative to the other outsourcing policies, i.e., we consider the possibility of using the optimal solution for some outsourcing strategy as an approximate solution for the others. The results show that for some outsourcing strategies, a good approximation can be obtained by adopting the solution induced by other strategies, and allow us to derive some managerial insight. In this paper we contribute to existing work in several ways. First, we extend the set of outsourcing policies considered in Albareda-Sambola et al. [17, 18] with two additional strategies, and propose a mixed-integer linear programming formulation for each of them. Second, we carry out an empirical comparison of the considered outsourcing policies in terms of both the computational performance of the proposed formulations and their capability of producing good quality solutions for the other policies. The comparison of the quality of the solutions obtained with the different policies allows us to derive some managerial insight, which could help the decision maker in determining the most suitable outsourcing policy in terms of robustness or other possible indicators. Finally, our analysis is developed in a general setting where uncertainty is expressed by means of a set of scenarios and it is no longer assumed that the probability distributions of the customers' demands are independent. Again, this goes beyond the existing literature. The assumption that customers' demands are independent holds when customers do not respond to some common interest and demand is not seasonal, but such an assumption is difficult to justify in other cases. It is even harder to justify when "customers" and "facilities" do not represent physical entities. This would be the case, for instance, of a situation where "customers" were students and "facilities" offered courses. Then it would be unlikely that demand were uncorrelated, as it could depend, for instance, on the students' background. The paper is organized as follows. In Section 2 we introduce some notation and formally define the FLPBD. Section 3 focuses on the four alternative outsourcing policies that we have considered. Specifically, we present a mixed-integer programming formulation for each of them, which will be used in the computational experiments. Section 4 is dedicated to the computational experiments. 
The sets of test instances that we have used and their characteristics are described in Section 4.1. The performance of each of the presented formulations for the different sets of benchmark instances at varying values of the input parameters is summarized and analyzed in Section 4.2, whereas Section 4.3 summarizes the results of an extensive comparison of the different policies and derives managerial insight. The paper ends in Section 5 with a summary of our main findings and some final comments. ## 2 Definition of the problem Let \(I\) and \(J\), with \(n=|J|\), denote the set of indices for the potential locations of facilities and for customers, respectively. We assume that the demands of service of customers follow Bernoulli probability distributions, not necessarily independent, with probabilities \(p_{j},j\in J\). We assume that such uncertainty is expressed by means of a set of possible realizations (scenarios), and denote by \(\Omega\) the set of all scenarios, by \(\pi^{\omega}\) the probability of scenario \(\omega\) (\(\sum_{\omega\in\Omega}\pi^{\omega}=1\)), and by \(d_{j}^{\omega}\in\{0,1\}\) the demand of customer \(j\in J\) in scenario \(\omega\in\Omega\). Following the terminology introduced in Albareda-Sambola et al. [17] the customers with demand in a scenario are referred to as _demand customers_ and \(D^{\omega}=\sum_{j\in J}d_{j}^{\omega}\) indicates the number of demand customers in scenario \(\omega\). We have the following additional data. For each potential location \(i\in I\), \(f_{i}\) is the fixed setup cost for opening facility \(i\); \(\ell_{i}\) is a lower bound on the number of customers that have to be _assigned_ to facility \(i\) if it is opened; and, \(K_{i}\) the maximum number of customers that can be _served_ from facility \(i\) if it is opened. For each pair \(i\in I,j\in J\), \(c_{ij}\) is the cost for serving customer \(j\) from facility \(i\). For a given scenario \(\omega\in\Omega\) not all the customers need to have demand. This is why we distinguish between the _assignment_ of customers to plants, which is done _a priori_ and is independent of the potential realizations, and the _service_ of customers from open plants, which is decided _a posteriori_, once the realization is known. An _a priori_ solution is given by a set of _operating_ (open) facilities together with an assignment of all the customers to these facilities, such that for any open plant the number of customers that are assigned to it is at least \(\ell_{i}\). Since \(K_{i}\) is an upper bound on the number of customers that can be served from an open plant, it does not affect the feasibility of _a priori_ solutions. Let \(i(j)\in I\) denote the facility to which customer \(j\in J\) is assigned in the _a priori_ solution and \(J_{i}=\{j\in J:i(j)=i\}\), the set of customers assigned to facility \(i\) in the _a priori_ solution. Given an _a priori_ solution, the _a posteriori_ solution indicates the decisions to make once demand customers are known, i.e., it describes the actual services to demand customers. Let \(J_{i}^{\omega}=J_{i}\cap\{j\in J:d_{j}^{\omega}=1\}\) denote the set of customers assigned to facility \(i\in I\) with demand in scenario \(\omega\), and \(\eta_{i}^{\omega}=|J_{i}^{\omega}|\) the number of such customers. If the number of demand customers assigned to an open facility \(i\in I\) does not exceed its upper bound, i.e. 
\(\eta_{i}^{\omega}\leq K_{i}\), then in the _a posteriori_ solution all customers indexed in \(J_{i}^{\omega}\) receive service from plant \(i\), each of them incurring a service cost \(c_{ij}\), \(j\in J\). Instead, when \(\eta_{i}^{\omega}>K_{i}\) the _a posteriori_ solution consists of serving \(K_{i}\) (out of \(\eta_{i}^{\omega}\)) demand customers from facility \(i\) and outsourcing the remaining \(\eta_{i}^{\omega}-K_{i}\). A penalty cost \(g_{i}\) is incurred for every outsourced demand customer. The way in which, for a realization, it is decided whether a demand customer assigned to a plant with \(\eta_{i}^{\omega}>K_{i}\), is actually served from \(i\) or outsourced, and its corresponding penalty, depends on the outsourcing policy that is applied (see Section 3 below). The recourse function is the expected cost of the _a posteriori_ solution, over all possible realizations of the demand vector. The FLPBD consists of finding a set of facilities to open and an allocation of the customers to the opened facilities, such that the lower bounds \(\ell_{i}\) are satisfied, and the sum of the fixed cost associated with the open facilities and the recourse function is minimized. To formulate the FLPBD we define the following sets of decision variables: \[y_{i}=\left\{\begin{array}{ll}1&\mbox{if a facility is established at $i$},\\ 0&\mbox{otherwise},\end{array}\right.\ (i\in I).\] \[x_{ij}=\left\{\begin{array}{ll}1&\mbox{if customer $j$ is allocated to $i$},\\ 0&\mbox{otherwise},\end{array}\right.\ (i\in I,j\in J).\] The generic formulation for the FLPBD proposed in Albareda-Sambola et al. [17] is: \[(P)\quad\min\quad\sum_{i\in I}f_{i}y_{i}+\mathcal{Q}(x), \tag{1}\] s. t. \[\sum_{i\in I}x_{ij}=1, j\in J,\] (2) \[x_{ij}\leq y_{i}, i\in I,\,j\in J,\] (3) \[\ell_{i}y_{i}\leq\sum_{j\in J}x_{ij}, i\in I,\] (4) \[y_{i}\in\{0,1\}, i\in I,\] (5) \[x_{ij}\in\{0,1\}, i\in I,\,j\in J.\] (6) The objective function (1) includes the fixed costs for opening the facilities and the recourse function. In particular, \(\mathcal{Q}(x)=\mathbb{E}\left[\text{Service cost}+\text{Penalty cost}\right]\). Constraints (2) assure that all customers will be assigned to (exactly) one facility while constraints (3) impose that these assignments are only done to operating facilities. Constraints (4) state the minimum number of customers that must be assigned to each operating facility. Finally, (5)-(6) define the domain of the variables. ## 3 Outsourcing Policies The expression of \(\mathcal{Q}(x)\) and the additional variables and constraints that may be needed to express the FLPBD through a mathematical programming formulation are directly related to how the recourse action is defined. That is, what specific outsourcing policy is applied. Below we describe the outsourcing policies that will be considered in this work and we discuss manageable Mixed-Integer Linear Programming (MILP) formulations in each case. 
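To make the above notation concrete, the following minimal sketch (purely illustrative data; none of it is taken from the benchmark instances used later) encodes a scenario-defined instance, checks the feasibility conditions (2)-(4) of a candidate _a priori_ solution, and computes the counts \(\eta_{i}^{\omega}\) on which all four recourse policies operate:

```python
# Illustrative instance: 2 candidate facilities, 4 customers, 3 scenarios
I = [0, 1]
J = [0, 1, 2, 3]
f = {0: 10.0, 1: 12.0}                                   # setup costs f_i
K = {0: 2, 1: 2}                                         # capacities K_i
ell = {0: 1, 1: 1}                                       # lower bounds l_i
c = {(i, j): 1.0 + i + 0.5 * j for i in I for j in J}    # service costs c_ij
pi = {0: 0.5, 1: 0.3, 2: 0.2}                            # scenario probabilities
d = {0: {0: 1, 1: 1, 2: 0, 3: 1},                        # demands d_j^omega
     1: {0: 1, 1: 0, 2: 1, 3: 0},
     2: {0: 0, 1: 1, 2: 1, 3: 1}}

# A candidate a priori solution: open both facilities and split the customers
y = {0: 1, 1: 1}
x = {(0, 0): 1, (0, 1): 1, (1, 2): 1, (1, 3): 1}
x = {(i, j): x.get((i, j), 0) for i in I for j in J}

# Feasibility of the a priori solution: constraints (2)-(4)
assert all(sum(x[i, j] for i in I) == 1 for j in J)
assert all(x[i, j] <= y[i] for i in I for j in J)
assert all(ell[i] * y[i] <= sum(x[i, j] for j in J) for i in I)

# eta_i^omega: demand customers assigned a priori to facility i in scenario omega
eta = {(i, w): sum(d[w][j] * x[i, j] for j in J) for i in I for w in pi}
for (i, w), count in sorted(eta.items()):
    print(f"facility {i}, scenario {w}: eta = {count} (capacity {K[i]})")
```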
Broadly speaking, the alternative policies that we study differ from each other in the recourse actions that are taken in the scenarios where some facility has a number of assigned demand customers that exceeds its capacity. ### Facility outsourcing With the facility outsourcing (FO) policy, under scenario \(\omega\), facility \(i\) takes delivery of the whole set \(J_{i}^{\omega}\). When \(\eta_{i}^{\omega}>K_{i}\), then \(\eta_{i}^{\omega}-K_{i}\) units of product are outsourced, at a unit cost \(g_{i}\). Then, facility \(i\) serves the full demand of its assigned customers, \(\eta_{i}^{\omega}\), at the same cost \(c_{ij}\) that would be incurred if it were not outsourced. This recourse action models an external purchase of the resources needed to fully satisfy the demand of an open facility and was applied in Albareda-Sambola et al. [17] for the case when the demand of customers are independent random variables, and all of them have the same probability of demand, i.e., \(p_{j}=p\). With the FO policy, each scenario \(\omega\in\Omega\) is characterized by its probability \(\pi^{\omega}\) and the demands \(d_{j}^{\omega}\in\{0,1\}\), \(j\in J\). To formulate the FO-FLPBD, in addition to the \(y\) and \(x\) decision variables introduced above, we use the following: \(\theta_{i}^{\omega}:\) number of demand customers outsourced at facility \(i\) under scenario \(\omega\). (\(i\in I,\omega\in\Omega\)). \(z^{\omega}:\) total penalty incurred under scenario \(\omega\in\Omega\). The formulation is: \[\mathrm{FO}\quad\min \sum_{i\in I}f_{i}y_{i}+\sum_{i\in I}\sum_{j\in J}p_{j}c_{ij}x_{ij}+ \sum_{\omega\in\Omega}\pi^{\omega}z^{\omega}, \tag{7}\] \[\mathrm{s.\ t.} (\ref{eq:1})-(\ref{eq:2}),\] \[\theta_{i}^{\omega}\geqslant\sum_{j\in J}d_{j}^{\omega}x_{ij}-K_ {i}y_{i}, i\in I,\omega\in\Omega,\] (8) \[z^{\omega}\geqslant\sum_{i\in I}g_{i}\theta_{i}^{\omega}, \omega\in\Omega,\] (9) \[z^{\omega}\geqslant 0,\theta_{i}^{\omega}\geqslant 0, i\in I,\omega\in\Omega,\] (10) \[y_{i}\in\{0,1\}, i\in I,\] (11) \[x_{ij}\in\{0,1\}, i\in I,j\in J. \tag{12}\] The objective function (7) includes the costs for opening facilities plus the expected value of the service plus outsourcing costs. As explained, Constraints (2)-(4) guarantee the feasibility of the _a priori solution_. Constraints (8) force \(\theta\) variables to take consistent values, and Constraints (9) compute the penalty cost of each scenario. Indeed (9) will hold as equality in any optimal solution. So, in fact, they are not strictly needed, as their right hand side could be substituted in the last term of the objective function instead of using the variables \(z^{\omega}\). Some preliminary experiments showed that the formulation with Constraints (9) could be solved in smaller times than that where its right was substituted in the objective function. Hence, we used this alternative, even if we have no theoretical argument that justifies this improvement. The domain of the variables is defined by (10)-(12). Note that, given the structure of the formulation, integrality constraints on the \(z\) and \(\theta\) variables can be relaxed to nonnegativity constraints. Formulation (7)-(12) uses \(|I|(1+|J|)+|\Omega|(|I|+1)\) variables and has \(|J|(1+|I|)+|I|+|\Omega|(|I|+1)\) constraints. Depending on the size of \(\Omega\), these numbers can be quite high, even for moderate numbers in terms of customers and facilities. 
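Whatever the size of the MILP, the FO recourse of a given feasible _a priori_ solution can be evaluated directly, scenario by scenario, because the overflow \(\theta_{i}^{\omega}=\max\{0,\sum_{j\in J}d_{j}^{\omega}x_{ij}-K_{i}y_{i}\}\) has a closed form. A minimal sketch (reusing the illustrative toy data above, with hypothetical outsourcing costs \(g_{i}\) and marginal probabilities \(p_{j}\) implied by the scenarios) is:

```python
def fo_objective(y, x, f, c, p, g, K, pi, d, I, J):
    """Value of (7): setup cost + expected service cost + expected FO penalty."""
    setup = sum(f[i] * y[i] for i in I)
    # Under FO every assigned customer with demand is served from its plant,
    # so the expected service cost only involves the marginals p_j = P(d_j = 1).
    service = sum(p[j] * c[i, j] * x[i, j] for i in I for j in J)
    # Expected penalty: theta_i^omega = max(0, sum_j d_j^omega x_ij - K_i y_i)
    penalty = 0.0
    for w, prob in pi.items():
        for i in I:
            overflow = max(0, sum(d[w][j] * x[i, j] for j in J) - K[i] * y[i])
            penalty += prob * g[i] * overflow
    return setup + service + penalty

g = {0: 4.0, 1: 4.0}                                   # hypothetical outsourcing costs
p = {j: sum(pi[w] * d[w][j] for w in pi) for j in J}   # marginals implied by the scenarios
print(fo_objective(y, x, f, c, p, g, K, pi, d, I, J))
```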
Hence, enhancing the formulation can be very useful to decrease the CPU time required to solve such model to proven optimality using an off-the-shelf solver. Inequalities (13) and (14) below have proven to give a good balance between the increase in the size of the formulation and the improvement obtained when solving the model. \[\sum_{\omega\in\Omega}\pi^{\omega}\theta_{i}^{\omega}\geqslant \sum_{j\in J}p_{j}x_{ij}-K_{i}y_{i},\qquad i\in I, \tag{13}\] \[\sum_{i\in I}K_{i}y_{i}+\sum_{i\in I}\theta_{i}^{\tilde{\omega}} \geqslant D^{\tilde{\omega}}. \tag{14}\] Inequality (13) states that the expected number of demand customers outsourced at facility \(i\) (\(i\in I\)) is at least the expected number of demand customers assigned to that facility minus the capacity of the facility. Note that these inequalities can be derived as a weighted sum over all scenarios of Constraints (8), using as weights \(\pi^{\omega}\), \(\omega\in\Omega\), after imposing that \(\sum\limits_{\omega\in\Omega}\pi^{\omega}=1\). Since this probability equality affects the input data only but is not explicit in the formulation, Constraints (13) are valid inequalities, which are not implied by (8), and may help cutting fractional solutions. Note that, for binary solutions, these constraints are activated only if \(y_{i}=1\). In (14), \(\tilde{\omega}\in\Omega\) is the scenario with the largest number of demand customers (\(D^{\tilde{\omega}}\)). This constraint ensures that the maximum number of customers that can be served by the open facilities plus the outsourced demand customers, is never below the total demand. This constraint holds for every scenario, but adding such a constraint for all scenarios would increase considerably the size of the formulation, which explains why we consider solely (14). Even being one single constraint, it has proven to decrease the computation time in some instances. ### Customer outsourcing With the customer outsourcing (CO) strategy, in the scenarios where the number of demand customers assigned to facility \(i\) exceeds its capacity \(K_{i}\), i.e., \(\eta_{i}^{\omega}>K_{i}\), exactly \(K_{i}\) customers are served from facility \(i\), whereas the remaining \(\eta_{i}^{\omega}-K_{i}\) customers of \(J_{i}^{\omega}\) are outsourced and receive service from an external third party. Service costs \(c_{ij}\) are incurred for the customers served from facility \(i\), whereas a penalty \(g_{i}\) is incurred for each outsourced customer, which depends on the facility \(i\) the customer is assigned to. Hence, to formulate the FLPBD with a CO policy new decision variables are needed, in addition to the ones defined above, to identify the outsourced customers. In particular, we define: \[s_{ij}^{\omega}=\left\{\begin{array}{ll}1&\mbox{if customer $j$ is served from facility $i$ under scenario $\omega$},\\ 0&\mbox{otherwise,}\end{array}\right.\] \(i\in I,j\in J,\omega\in\Omega\). We consider two different versions of the CO policy. In the first one, that we call cost-driven CO policy (CD-CO), the decision of whether a demand customer \(j\in J_{i}^{\omega}\) is served from facility \(i\) or outsourced is based solely on a cost-minimization criterion. Thus, similarly to the FO policy, in the CD-CO policy a scenario \(\omega\in\Omega\) is characterized by its probability \(\pi^{\omega}\) and the demands \(d_{j}^{\omega}\in\{0,1\}\), \(j\in J\). 
The formulation for the FLPBD with CD-CO is: \[\mbox{CD-CO}\quad\min \sum\limits_{i\in I}f_{i}y_{i}+\sum\limits_{\omega\in\Omega}\sum \limits_{i\in I}\sum\limits_{j\in J}c_{ij}s_{ij}^{\omega}+\sum\limits_{\omega \in\Omega}\pi^{\omega}z^{\omega}, \tag{15}\] \[s.\,t. (2)-(4),\] (16) \[\sum\limits_{\omega\in\Omega}s_{ij}^{\omega}\leqslant\left(\sum \limits_{\omega\in\Omega}d_{j}^{\omega}\right)x_{ij}, i\in I,j\in J,\] \[\sum\limits_{j\in J}d_{j}^{\omega}s_{ij}^{\omega}\leqslant K_{i}, i\in I,\omega\in\Omega, \tag{17}\] \[\sum_{j\in J}d_{j}^{\omega}s_{ij}^{\omega}+\theta_{i}^{\omega} \geqslant\sum_{j\in J}d_{j}^{\omega}x_{ij}, i\in I,\omega\in\Omega, \tag{18}\] \[z^{\omega}\geqslant\sum_{i\in I}g_{i}\theta_{i}^{\omega}, \omega\in\Omega,\] (19) \[z^{\omega}\geqslant 0,\theta_{i}^{\omega}\geqslant 0, i\in I,\omega\in\Omega,\] (20) \[y_{i}\in\{0,1\}, i\in I,\] (21) \[x_{ij}\in\{0,1\}, i\in I,j\in J,\] (22) \[s_{ij}^{\omega}\in\{0,1\}, i\in I,j\in J,\omega\in\Omega. \tag{23}\] Again, Constraints (2)-(4) guarantee the feasibility of the _a priori_ solution. The second stage variables \(s_{ij}^{\omega}\) are now used to compute the expected service cost. Constraints (16) ensure that service from open facilities is only provided according to the _a priori_ assignments dictated by the \(x\) variables. Constraints (17) and (18) state the service capacities of the facilities and set the right value of the number of outsourced units at each facility, respectively. Again, the structure of the problem allows to relax integrality constraints on \(\theta\) variables and restrict them to be just nonnegative. The number of variables of formulation CD-CO has increased in \(|I|\times|J|\times|\Omega|\) with respect to the number of variables of the FO formulation. Its number of constraints is also larger, as it has raised to \(|J|(1+2|I|)+|I|+|\Omega|(2|I|+1)\). In the second CO strategy that we consider, in the scenarios where \(\eta_{i}^{\omega}>K_{i}\), the demand customers to serve from facility \(i\) are selected following a FIFO policy, relative to the order in which requests of service have arrived. This order driven recourse action will be referred to as OD-CO and was applied in Albareda-Sambola et al. [17] for the particular case when \((i)\) the demand of customers are independent random variables, \((ii)\) all of them have the same probability of demand, i.e., \(p_{j}=p\), and \((iii)\) requests of service arrive in a random order. Note that for the FLPBD with OD-CO policy a scenario \(\omega\in\Omega\) is no longer fully characterized by its probability and the demands of the customers. Now, the order in which calls for service from the demand customers of \(J_{i}^{\omega}\) were received must also be known. The formulation for the OD-CO is (15)-(23) plus the following set of constraints, which ensure that the FIFO policy is followed for selecting the customers that will be served from a given facility when \(\eta_{i}^{\omega}>K_{i}\). The notation \(j^{\prime}\prec^{\omega}j\) indicates that customers \(j,j^{\prime}\in J\) have demand under scenario \(\omega\) and \(j^{\prime}\) requested service before \(j\). \[K_{i}d_{j}^{\omega}(x_{ij}-s_{ij}^{\omega})\leqslant\sum_{j^{\prime}\prec^{ \omega}j}d_{j^{\prime}}^{\omega}s_{ij^{\prime}}^{\omega},\qquad i\in I,\omega \in\Omega,j\in J. 
\tag{24}\] ### Reassignment outsourcing In the previous policies, when facility \(i\) has a number of assigned demand customers that exceeds its capacity \(K_{i}\), then the excess of demand at \(i\) is served from external sources. Instead, in the reassignment outsourcing (RO) policy the unused capacity of other open plants can be used to satisfy the deficit of capacity at \(i\) prior to resorting to outsourcing. Similarly to the CO policy, in the RO recourse, when \(\eta_{i}^{\omega}>K_{i}\) exactly \(K_{i}\) customers of \(J_{i}^{\omega}\) are served from \(i\). Now, up to \(\sum_{i^{\prime}\in I:i^{\prime}\neq i}\left(K_{i^{\prime}}-\eta_{i^{\prime}}^{ \omega}\right)^{+}\) customers of \(J_{i}^{\omega}\) can be served from different open facilities (with \((a)^{+}\) standing for \(\max\{0,a\}\)). The service cost for serving a demand customer \(j\in J_{i}^{\omega}\) from an open facility \(i^{\prime}\in I\) is \(c_{i^{\prime}j}\), independently of whether or not \(i(j)=i^{\prime}\). When \(i(j)\neq i^{\prime}\), i.e. \(j\) is not assigned to \(i^{\prime}\) in the _a priori_ solution, an additional penalty \(h_{j}\) is paid. Finally, when \(\eta_{i}^{\omega}-K_{i}>\sum_{i^{\prime}\in I:i^{\prime}\neq i}\left(K_{i^{ \prime}}-\eta_{i^{\prime}}^{\omega}\right)^{+}\) we resort to external outsourcing to serve the remaining unserved demand assuming a unit cost \(g\) per outsourced customer. RO is a cost-driven policy, so the decision of whether a demand customer is served from its allocated facility, a different open facility, or outsourced is only based on the cost-minimization criterion. In the RO-FLPBD, each scenario \(\omega\in\Omega\) is characterized by its probability \(\pi^{\omega}\) and the customers demands \(d_{j}^{\omega}\in\{0,1\}\), \(j\in J\). In this case, to identify the demand customers served from facilities different from the ones they are assigned to and the demand customers who receive service from an external source, we use the following sets of binary decision variables in addition to the location \((y)\) and the allocation \((x)\) variables: * \(\lambda_{j}^{\omega}=1\Leftrightarrow\) demand customer \(j\in J^{\omega}\) is reassigned to an open facility \(i^{\prime}\neq i(j)\) in scenario \(\omega\). * \(\mu_{j}^{\omega}=1\Leftrightarrow\) demand customer \(j\in J^{\omega}\) is served from external sourcing in scenario \(\omega\). The obtained formulation is: \[\text{RO}\quad\min \sum_{i\in I}f_{i}y_{i}+\sum_{\omega\in\Omega}\pi^{\omega}\sum_{j \in J}\left(\left(\sum_{i\in I}c_{ij}s_{ij}^{\omega}\right)+h_{j}\lambda_{j} ^{\omega}+g\mu_{j}^{\omega}\right), \tag{25}\] \[\text{s.\,\,t.} (\ref{eq:1})-(\ref{eq:2}),\] \[\sum_{j\in J}s_{ij}^{\omega}\leqslant K_{i}y_{i}, i\in I,\omega\in\Omega,\] (26) \[d_{j}^{\omega}\lambda_{j}^{\omega}\geqslant s_{ij}^{\omega}-d_{j }^{\omega}x_{ij}^{\omega}, i\in I,j\in J,\omega\in\Omega,\] (27) \[\sum_{i\in I}s_{ij}^{\omega}+\mu_{j}^{\omega}\geqslant d_{j}^{ \omega}, j\in J,\omega\in\Omega,\] (28) \[y_{i},x_{ij},s_{ij}^{\omega},\lambda_{j}^{\omega},\mu_{j}^{ \omega}\in\{0,1\}, i\in I,j\in J,\omega\in\Omega. \tag{29}\] The role of Constraints (2)-(4) has been repeatedly explained. Constraints (26) impose that no facility serves more than \(K_{i}\) customers. Observe that as opposed to the previous policies, these constraints do not prevent that a facility serves a customer not assigned to it in the _a priori_ solution. 
Constraints (27) activate the \(\lambda\) variables associated with demand customers served from open facilities different from the ones they are assigned to in the _a priori_ solution. The \(\mu\) variables associated with customers that receive service from an external source are activated in constraints (28). ## 4 Computational Experiments ### Test instances All the instances used in this paper are generated from the homogeneous instances presented in Albareda-Sambola et al. [17]. In that work, homogeneous FLPBD instances were generated taking as a starting point 11 Traveling Salesman Problem (TSP) instances available at [http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/](http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/) namely, berlin52, eil51, eil76, kroA100, kroB100, kroC100, kroD100, kroE100, pr76, rat99, and st70. From those TSP instances, small and large FLPBD instances were generated in Albareda-Sambola et al. [17], with \(|I|=15\), \(|J|=30\) and \(|I|=20\), \(|J|=60\), respectively. Below we briefly recall the generation process. From each original TSP instance with \(N\) nodes, first the number of facilities and customers was set in such a way that \(|I|+|J|\leq N\). This means that the largest instances could be generated only from the TSP instances kroA100, kroB100, kroC100, kroD100, kroE100, and rat99. Then, for each fixed FLPBD dimensions, three different sets of customers and potential plants were randomly selected among the \(n\) original nodes. For each choice of plants and customers, the remaining data for the FLPBD was generated varying several parameters as follows: (i) three different values for the probability of demand (0.1, 0.5 and 0.9), (ii) three different levels of variability for the setup costs (0, \(\mu/10\), and \(\mu/3\), where \(\mu\) is the expected value set for the setup costs), (iii) low, medium and high capacity \(K_{i}\), and (iv) two different possibilities for \(\ell\)--the minimum number of customers to assign _a priori_ to the opened facilities (0 and a value greater than 0). In total, Albareda-Sambola et al. [17] generated 2754 instances divided into 17 groups (11 groups of small instances and 6 groups of large instances). A subset of those instances was then considered in Albareda-Sambola et al. [18] for generating a data set for the non-homogeneous case. Three types of customers were considered: low-, medium-, and high-probability demand customers, with demand probabilities drawn from U(0.10, 0.25), U(0.40, 0.60), and U(0.75, 0.90) distributions, respectively. In its turn, different patterns were defined for the overall demand. In this work we use the following two patterns: in pattern 1, there are 20% of low-probability demand customers, 60% medium, and 20% high and in pattern 2, these percentages are 20%, 20%, and 60%, respectively. The above procedures generated a whole set of probabilistically defined FLPBD instances (all the details can be found in Albareda-Sambola et al. [18]). For the current work, we have used a subset of the instances with patterns 1 and 2 referred to as _PT1_ and _PT2_, respectively. In particular, we considered the set of instances corresponding to: (i) the intermediate value for the variability for the setup costs (\(\mu/10\)), (ii) low and high capacity levels, and (iii) \(\ell>0\). Concerning the capacities we recall that the data for every instance in Albareda-Sambola et al. 
[17] was generated considering among other parameters, one multiplication factor--\(\gamma\in\{1,2,4\}\)--that leads to the so-called low, intermediate and high capacity levels for the facilities. In this work, we consider the instances generated using \(\gamma=1,4\) which means that we are retrieving the instances with the lower and the higher capacities generated in the above mentioned work. The above choices for the testbed instances to be used in the current work are motivated by the fact that we wanted to consider a rather limited set of instances. In fact, considering all the instances worked out by Albareda-Sambola et al. [18] was not relevant for a paper whose focus is not in algorithmic developments. Hence, we concentrated our experiments on instances corresponding to one possibility for the variability of the setup costs (thus we chose the intermediate one), and two possibilities for the capacities: low and high. This led to a base set of 204 instances. From each of them, we generated two scenario-defined FLPBD instances with 50 scenarios each, yielding two different groups of 204 instances each. The number of scenarios was fixed to get a good compromise between quality of randomness representation, and affordability in terms of computational effort. In the first group, for each scenario, demands were generated independently, and according to the previously defined probabilities. After that, the customers having demand are randomly sorted to simulate their calling sequence. In the second one, we used again the previous demand probabilities, but now forcing spatially correlated demands. To do so, for each pair of customers \(j,j^{\prime}\in J\) we first computed \(\delta_{jj^{\prime}}=\max\{0.1,\max_{i\in I}\{|c_{ij}-c_{ij^{\prime}}|\}\}\) which, although not being a proper distance, gives an idea of the proximity of two customers, taking the set of facilities as a reference. The idea is to force that the demands of customers that are at a small _distance_ have higher correlations. The decay of this correlation is governed by a preset decreasing function \(w(\delta)\). In particular, we used \(w(\delta)=1+\left(1-2\frac{\delta}{\Delta}\right)^{\frac{1}{3}}\), where \(\Delta\) stands for the maximum distance between two customers. Using the above _distances_ and decay function, we generated each scenario using the following observation: Let \(\{Z_{j}\}_{j\in J}\) be a family of independent random variables, each following a \(N(0;1)\) distribution. Then variables \(\{Y_{j}\}_{j\in J}\) defined as \[Y_{j}=\frac{\sum\limits_{k\in J}w(\delta_{jk})Z_{k}}{\sqrt{\sum\limits_{k\in J }w(\delta_{jk})^{2}}},\] also follow a \(N(0;1)\) distribution and have positive correlations which increase as the corresponding customers get closer. Then, from a sample of the \(\{Z_{j}\}_{j\in J}\) variables we computed the corresponding \(\{Y_{j}\}_{j\in J}\) values and set \[d_{j}=\begin{cases}1&\text{if }\Phi(Y_{j})\leqslant p_{j},\\ 0&\text{otherwise},\end{cases}\quad\text{ with }\Phi\text{ denoting the cdf of the }N(0;1)\text{ distribution}.\] By proceeding like this, we generated demands following correlated Bernoulli distributions with the same probabilities as in the instances with independent demands. Again, after generating these demands we simulate the calling sequence by randomly sorting the customers with demand. In total we have 408 instances (204 with uncorrelated demands and 204 with correlated demands). 
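The correlated-demand construction described above is easy to reproduce. The following sketch (an illustrative NumPy implementation, not the original instance generator; the cost matrix and marginal probabilities are random placeholders) draws scenarios of spatially correlated Bernoulli demands with the prescribed marginals \(p_{j}\):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Placeholder data: service costs c[i, j] and marginal demand probabilities p_j
n_fac, n_cust = 5, 12
c = rng.uniform(0.0, 100.0, size=(n_fac, n_cust))
p = rng.uniform(0.1, 0.9, size=n_cust)

# "Distances" delta_jj' = max(0.1, max_i |c_ij - c_ij'|) and decay weights w(delta)
delta = np.maximum(0.1, np.abs(c[:, :, None] - c[:, None, :]).max(axis=0))
Delta = delta.max()
w = 1.0 + np.cbrt(1.0 - 2.0 * delta / Delta)   # w(delta) = 1 + (1 - 2 delta/Delta)^(1/3)

def draw_scenario():
    """One vector of correlated Bernoulli demands with marginals p."""
    z = rng.standard_normal(n_cust)                     # independent N(0, 1) draws
    y = (w @ z) / np.sqrt((w ** 2).sum(axis=1))         # correlated N(0, 1) variables Y_j
    return (norm.cdf(y) <= p).astype(int)               # d_j = 1  iff  Phi(Y_j) <= p_j

scenarios = [draw_scenario() for _ in range(50)]        # |Omega| = 50, as in the test bed
print(scenarios[0])
```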
In the following, "U" stands for instances with uncorrelated demands and "C" for instances with correlated demands. Recall also that \(\gamma=1\) indicates low capacities in use whereas \(\gamma=4\) corresponds to high capacites. Table 1 summarizes the characteristics of the scenario-defined instances (\(|\Omega|=50\)). All the formulations presented in this paper were implemented using CPLEX 12.6 callable libraries within a C code. In all cases a CPU time limit was set. This limit was one hour for the small instances, and two hours for the large ones. All the tests were carried out on a Pentium(R) 4, 3.2GHz, 1.0GB of RAM. In what follows, we report the results obtained. First we analyze the performance of the formulations for the different outsourcing policies and then we carry out a comparative analysis among them. ### Performance of formulations In Tables 2-5 we summarize the results obtained when using the four models proposed in the previous section. In these tables, "S" and "L" stand for small and large instances, respectively. The columns under _CPU_ give average computing times in seconds. The columns under \(\%gap\) give average percent optimality gaps, computed, for each instance, as \(100\frac{z_{U}-z_{L}}{z_{L}}\), where \(z_{U}\) and \(z_{L}\) respectively denote the values of the best feasible solution and the best lower bound at termination. In each case, averages are computed over all the instances of the corresponding row. Columns under _# opt_ indicate the number of proven optimal solutions found. The last row in each table summarizes the results over the full set of benchmark instances, giving average computing times in columns CPU and \(\%gap\), respectively, and total number of optimal solutions found in _# opt_. The other entries have already been explained before. Table 2 contains the results for formulation FO. Looking into this table, we observe that the instances with uncorrelated demands are harder to handle than those with correlated demands. This is expected since a correlated behaviour somehow simplifies (reduces) the range of observable "futures" thus leading to easier-to-solve instances. The superiority of the correlated instances is clear both in terms of the number of instances solved to optimality and in terms of the computing time required by the solver to accomplish that. As expected, the larger instances are harder to tackle: they require on average a much higher computing time and also more space, and thus fewer are being solved to proven optimality within the time limit. It is worth noting that, as will be seen in the next section, most of the non-solved instances terminated because of memory problems. Actually, only two small instances terminated because of the time limit. When using formulation FO, the demand pattern considered does not seem to influence the results obtained. This indicates that having a higher or \begin{table} \begin{tabular}{c|c|c c|c} & & \multicolumn{2}{c}{PT1} & \multicolumn{2}{c}{PT2} & Total \\ & \(\gamma\) & (20\%, 60\%, 20\%) & (20\%, 20\%, 60\%) & \\ \hline Small & 1 & 33 (U) + 33 (C) & 33 (U) + 33 (C) & 132 \\ \(|I|=15,|J|=30\) & 4 & 33 (U) + 33 (C) & 33 (U) + 33 (C) & 132 \\ \hline Large & 1 & 18 (U) + 18 (C) & 18 (U) + 18 (C) & 72 \\ \(|I|=20,|J|=60\) & 4 & 18 (U) + 18 (C) & 18 (U) + 18 (C) & 72 \\ \hline Total & & 204 & 204 & 408 \\ \end{tabular} \end{table} Table 1: Summary of scenario-defined instances. 
lower expected service request is not as relevant as having correlated or uncorrelated demand or higher or lower capacities for the facilities. Concerning the capacity type for the instances with uncorrelated demands, it clearly has an effect: the instances with tighter capacities (\(\gamma=1\)) are more challenging. Overall, when using model FO there is empirical evidence that the harder instances are those with uncorrelated demands and low capacities. In Table 3 we can find a synthesis of the results obtained when considering formulation OD-CO. Clearly, the structure of the mathematical model derived for this outsourcing strategy makes it more challenging. The effect of the correlation in the demand does not impact as much as before in terms of the CPU time required to solve the instances to proven optimality. However, for correlated demands, more instances are solved successfully. In turn, the size of the instances and the type of capacity do influence the results. In fact, similarly to formulation FO, tighter capacities (\(\gamma=1\)) make the instances harder to tackle. This holds in terms of the number of instances solved to proven optimality as well as in terms of the computing times required to solve such instances and in terms of the final gap for the remaining instances. As observed in the case of facility outsourcing, the demand pattern does not seem to impact on the results. Overall, when considering model OD-CO, the harder instances to tackle are those with uncorrelated demands and tighter capacities. \begin{table} \begin{tabular}{|c|c|c|c c|c c|c c c|} \hline & & & \multicolumn{2}{c|}{CPU} & \multicolumn{2}{c|}{\%gap} & \multicolumn{3}{c|}{\# opt} \\ & & \(\gamma\) & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 & Total \\ \hline \multirow{4}{*}{U} & S & 1 & 520.63 & 434.19 & 0.05 & 0.03 & 31 & 31 & 62 \\ & & 4 & 5.89 & 3.70 & 0.00 & 0.00 & 33 & 33 & 66 \\ & & L & 1 & 2467.54 & 2328.09 & 2.76 & 1.11 & 1 & 0 & 1 \\ & & 4 & 81.78 & 62.23 & 0.00 & 0.00 & 18 & 18 & 36 \\ \hline \multirow{4}{*}{C} & S & 1 & 0.33 & 0.37 & 0.00 & 0.00 & 33 & 33 & 66 \\ & & 4 & 1.36 & 0.45 & 0.00 & 0.00 & 33 & 33 & 66 \\ \cline{1-1} & & L & 1 & 2.82 & 2.10 & 0.00 & 0.00 & 18 & 18 & 36 \\ \cline{1-1} & & 4 & 3.91 & 1.95 & 0.00 & 0.00 & 18 & 18 & 36 \\ \hline Summary & & & 310.98 & 282.23 & 0.25 & 0.10 & 185 & 184 & 369 \\ \hline \end{tabular} \end{table} Table 2: Summary of results for model FO—facility outsourcing policy. \begin{table} \begin{tabular}{|c|c|c|c c|c c|c c|} \hline & & & \multicolumn{2}{c|}{CPU} & \multicolumn{2}{c|}{\%gap} & \multicolumn{3}{c|}{\# opt} \\ & & \(\gamma\) & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 & Total \\ \hline \multirow{4}{*}{U} & S & 1 & 3600.12 & 3600.09 & 4.96 & 4.10 & 0 & 0 & 0 \\ & & 4 & 1704.22 & 1985.06 & 0.47 & 0.65 & 26 & 21 & 47 \\ \cline{1-1} & & L & 1 & 7200.21 & 7200.27 & 7.95 & 8.59 & 0 & 0 & 0 \\ & & 4 & 6786.20 & 6370.27 & 1.34 & 1.14 & 2 & 5 & 7 \\ \hline \multirow{4}{*}{C} & S & 1 & 2162.91 & 2584.14 & 1.66 & 3.70 & 10 & 3 & 13 \\ & & 4 & 902.61 & 230.51 & 0.32 & 0.00 & 26 & 33 & 59 \\ \cline{1-1} & & L & 1 & 7145.92 & 6544.58 & 1.23 & 0.63 & 0 & 3 & 3 \\ \cline{1-1} & & 4 & 4457.38 & 4573.73 & 1.08 & 0.74 & 10 & 9 & 19 \\ \hline \multicolumn{1}{|l|}{Summary} & & & 3611.86 & 3537.22 & 2.22 & 2.35 & 74 & 74 & 148 \\ \hline \end{tabular} \end{table} Table 3: Summary of results for model OD-CO—customer outsourcing with a order driven discipline. Table 4 contains the results for formulation CD-CO. 
As for the above models, the size of the instances and the capacity type (low vs high) of the facilities clearly influence the results and in a similar way. However, contrary to the previous cases, now, the demand pattern influences the final gap observed for the instances not solved to optimality. Although in some cases this is not significant, in the columns headed by "%gap" we observe that averages corresponding to PT2 tend to be smaller than those for PT1. The demand pattern 2 also seems to make the instances more tractable when it comes to solve them to optimality as we observe in the last columns of the table. The above aspects may be justified by the fact that in the case of pattern 2 we have more customers with high probability of requesting the service, i.e., we have a larger expected service request. This feature embedded in a cost-driven outsourcing strategy is apparently helping discarding sooner many solutions that are not as competitive and they could be when cost is not a driving feature for deciding about customer outsourcing. Observing this table we also realize that no instance combining uncorrelated demands with tight capacities (\(\gamma=1\)) was solved to proven optimality within the time limit. Overall, from the perspective of using model CD-CO, the hardest instances are those with uncorrelated demands, tighter capacities and the first demand pattern (60% medium-demand customers and 20% of low- and high-demand customers). \begin{table} \begin{tabular}{|c|c|c|c c|c c|c c c|} \hline & & & \multicolumn{3}{c|}{CPU} & \multicolumn{3}{c|}{\%gap} & \multicolumn{3}{c|}{\# opt} \\ & & \(\gamma\) & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 & Total \\ \hline \multirow{4}{*}{U} & S & 1 & 2783.10 & 3277.74 & 2.56 & 1.95 & 2 & 3 & 5 \\ & & 4 & 580.90 & 560.63 & 0.12 & 0.11 & 30 & 31 & 61 \\ \cline{2-10} & L & 1 & 6368.24 & 6930.04 & 6.14 & 3.40 & 0 & 0 & 0 \\ & & 4 & 3746.21 & 2394.78 & 0.09 & 0.39 & 14 & 16 & 30 \\ \hline \multirow{4}{*}{C} & S & 1 & 852.81 & 1422.87 & 0.20 & 0.47 & 29 & 26 & 55 \\ & & 4 & 358.68 & 22.55 & 0.31 & 0.00 & 27 & 33 & 60 \\ \cline{1-1} \cline{2-10} & L & 1 & 2951.89 & 1036.39 & 0.21 & 0.06 & 15 & 17 & 32 \\ \cline{1-1} & L & 4 & 1326.28 & 655.00 & 0.36 & 0.23 & 14 & 16 & 30 \\ \hline Summary & & & 2010.09 & 1826.75 & 1.12 & 0.77 & 131 & 142 & 273 \\ \hline \end{tabular} \end{table} Table 4: Summary of results for model CD-CO—customer outsourcing using a cost-driven discipline. \begin{table} \begin{tabular}{|c|c|c|c c|c c|c c|} \hline & & & \multicolumn{3}{c|}{CPU} & \multicolumn{3}{c|}{\%gap} & \multicolumn{3}{c|}{\# opt} \\ & & \(\gamma\) & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 & Total \\ \hline \multirow{4}{*}{U} & S & 1 & 3062.82 & 3006.35 & 1.18 & 0.56 & 0 & 0 & 0 \\ & & 4 & 1082.92 & 1173.42 & 0.07 & 0.07 & 23 & 24 & 47 \\ \cline{1-1} \cline{2-10} & L & 1 & 7200.12 & 7200.18 & 3.05 & 2.27 & 0 & 0 & 0 \\ & & 4 & 7099.35 & 7200.13 & 0.46 & 0.70 & 0 & 0 & 0 \\ \hline \multirow{4}{*}{C} & S & 1 & 558.38 & 84.18 & 0.02 & 0.01 & 30 & 32 & 62 \\ & & 4 & 226.63 & 126.29 & 0.01 & 0.01 & 32 & 32 & 64 \\ \cline{1-1} \cline{2-10} & L & 1 & 5401.62 & 3965.66 & 0.13 & 0.06 & 3 & 7 & 10 \\ \cline{1-1} & & 4 & 4740.20 & 3484.75 & 0.08 & 0.04 & 7 & 11 & 18 \\ \hline Summary & & & 2954.21 & 2638.19 & 0.53 & 0.38 & 95 & 106 & 201 \\ \hline \end{tabular} \end{table} Table 5: Summary of results for model RO—reassignment outsourcing. Finally, in Table 5 we present the results for the fourth outsourcing strategy proposed: reassignment outsourcing. 
The conclusions are very similar to those drawn for model CD-CO. Again, the demand pattern seems to have an influence on the results, and in a similar way as for the previous model discussed. Now, no large instance with uncorrelated demands could be solved to optimality within the time limit. In the case of loose capacities and for the first demand pattern we observe an average computing time of 7099 seconds, i.e., below the time limit. This is an indication that for some instances in that group the machine ran out of memory before the time limit was reached. Actually, if we only consider the instances that could be optimally solved, the average solution time for the uncorrelated instances was 516 seconds, and for the correlated ones, 380. ### Computational comparison among outsourcing policies The results discussed in the previous section do not allow a deep comparison between the different outsourcing policies being studied. In this section we focus on such a comparison. We start by analyzing the termination status when using the different formulations. Afterwards, we report percentage gaps at termination for instances that could not be solved to proven optimality and computing times for those that could. Finally, we analyze the capability of each outsourcing policy to provide a good approximation for the other policies. This is accomplished by considering the optimal location-allocation solution obtained using one policy and checking how good it is if some other policy is considered instead. Figure 1 depicts the termination status for the different outsourcing policies. We can observe that the policy for which we were able to obtain the largest number of optimal solutions (around 90%) is FO (Figure 1(a)). At the other extreme, we observe OD-CO, for which optimality was proven for nearly 37% of the instances (Figure 1(c)). When we compare this policy with the other CO policy, we conclude that the inclusion of the ordering constraints (24) in formulation (15)-(23) has a dramatic consequence in terms of model solvability. In the case of RO and OD-CO, the major cause for not solving an instance to proven optimality is the time limit, whereas for the other two policies available memory was the reason for a premature stop. Of course, these numbers apply to the computer we have used in our experiments. Still, while the memory limitation could be alleviated by using a computer with more RAM, the computing time limitation could also be alleviated by increasing the time limit. Thus, in our opinion, the above analysis reflects the main difficulties in optimally solving the instances for each of the considered outsourcing policies, and does not depend on the computer we have used. In Figure 2 we can observe the termination percentage gap (%) for the instances that could not be solved to optimality, either because the time limit was reached or because the computer memory was exhausted. Each sub-figure refers to one outsourcing policy, as labeled. The bars indicate the number of instances involved (to be read on the right-hand side axis); the three lines (whose values can be seen on the left-hand side axis) indicate the minimum, average and maximum values observed. In these figures, we are disaggregating the results according to three factors: type of demand (correlated or uncorrelated), dimension of the instances (small or large), and demand pattern (pattern 1 or pattern 2). 
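Charts of the kind just described, with bars counting the unsolved instances on a secondary axis and minimum, average and maximum gap lines on the primary axis, are straightforward to reproduce. The following matplotlib sketch uses randomly generated gap values purely to illustrate the layout; the group labels and numbers are placeholders, not results from this study.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic example: for each group (demand type x size x pattern), the %gaps
# of the instances that were NOT solved to optimality. Values are random and
# for illustration only; they are not the results reported in the paper.
groups = ["U-S-PT1", "U-S-PT2", "U-L-PT1", "U-L-PT2",
          "C-S-PT1", "C-S-PT2", "C-L-PT1", "C-L-PT2"]
gaps = [rng.uniform(0.1, 9.0, size=rng.integers(1, 12)) for _ in groups]

counts = [len(g) for g in gaps]
g_min = [g.min() for g in gaps]
g_avg = [g.mean() for g in gaps]
g_max = [g.max() for g in gaps]

fig, ax_gap = plt.subplots(figsize=(7, 3.5))
ax_cnt = ax_gap.twinx()                         # right-hand axis: instance counts
ax_cnt.bar(groups, counts, color="0.85")
ax_cnt.set_ylabel("# instances not solved to optimality")
ax_gap.plot(groups, g_min, marker="o", label="min %gap")
ax_gap.plot(groups, g_avg, marker="s", label="avg %gap")
ax_gap.plot(groups, g_max, marker="^", label="max %gap")
ax_gap.set_ylabel("%gap at termination")
ax_gap.set_zorder(ax_cnt.get_zorder() + 1)      # draw the lines above the bars
ax_gap.patch.set_visible(False)
ax_gap.legend(loc="upper right")
plt.tight_layout()
plt.show()
```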
In the sub-figures of Figure 2 we do not distinguish between the two capacity types, low and high (\(\gamma=1\) and \(\gamma=4\)), because, for the purpose of analyzing the percentage gaps, that distinction turned out to be less relevant than the other factors. Observing Figure 2 we realize that the instances with correlated demand seem easier to handle when compared to those with uncorrelated demand. In fact, for the former, not only are fewer instances involved in these figures (i.e., more instances were solved to optimality) but also, for those considered in the figure, the final gap at termination is globally smaller. As before, this is an indication that by working with correlated data one is considering an easier setting in terms of possibilities for the future observations, since the customers behave in a similar fashion. When we focus on the demand pattern or on the size of the instances, we do not observe a clear trend in Figure 2. This is an indication that the structure of the problem is not significantly influenced by these aspects. Overall, the final gap at termination for the instances not solved to optimality does not seem to be very sensitive either to the dimension of the instances or to the demand pattern. Figure 1: Termination status for the different outsourcing strategies. When comparing the different outsourcing policies, we see a clear trend in favor of FO (Figure 2(a)) and RO (Figure 2(b)). On the other hand, OD-CO again seems to yield the model that is hardest to solve to proven optimality using a general-purpose solver. As mentioned before, the need to include ordering constraints in the model seems to make a huge difference in its structure, with a dramatic consequence in terms of its efficient solvability. In Figure 3 we can observe the computing times (in seconds) for the instances that were successfully solved to proven optimality within the time limit. Each sub-figure refers to one outsourcing policy, as labeled. The vertical bars count the number of instances involved in the analysis, with the corresponding values marked on the right-hand side axis. The three lines depicted (whose values are associated with the left-hand side axis) represent the minimum, average and maximum values observed. Similarly to Figure 2, we are disaggregating the results according to type of demand (uncorrelated or correlated), size of the instances and demand pattern. As before, FO is the policy whose optimization model is easiest to tackle. This is even more evident when we focus on the instances with correlated data. Apart from FO, the type of demand does not seem to influence the "friendliness" of the optimization models. Analyzing the graphs Figure 2: Percentage gaps at termination for the different outsourcing strategies. we also see a tendency for fewer instances to be involved in the analysis for large instances with uncorrelated demand, which is in line with the conclusions already drawn. Finally, in this section we look into the cost structure of the optimal solutions found using each outsourcing strategy. In particular, for each outsourcing policy we present the contribution to the objective function value of the strategic decisions (facility setup costs), transportation costs (customer assignment costs) and outsourcing costs. This information is depicted in Figures 4-7. In these figures "opening" refers to the strategic decisions, "service" concerns the customer satisfaction cost, and "penalty" stands for the outsourcing cost. 
In the case of facility reassignment, we also present the corresponding costs--"reassign.". Observing Figures 4-7 we immediately notice a tendency toward higher opening costs when the data is correlated. This is an indication that under correlated data we need to pay more in terms of first-stage decisions (facility opening costs) to better hedge against the future uncertainty. On the other hand, apart from RO, we see that the penalty costs are usually small, even more so with correlated data. Regarding RO, we do not see much difference in terms of the opening costs when moving Figure 3: Computing times (in seconds) for the instances that were successfully solved to proven optimality. from uncorrelated to correlated data. However, in the latter case the reassignment costs are negligible, which is an indication that under correlated data reassigning customers is less frequent, which in turn is justified by having most of the facilities close to their service capacity. ### Managerial insight The results reported so far in this section show that the mathematical programming formulations adopted for the different outsourcing policies behave quite differently when tackled with a general-purpose solver. Overall, there seems to be a hierarchy between the four models tested. FO seems to provide the model that is easiest for the solver to solve optimally; then we observe RO; finally, we see the customer outsourcing policies, with OD-CO associated with the model that is hardest to solve optimally. Given the differences observed between the optimization models, a decision maker may think of using the ones that are easier to solve to proven optimality to provide approximate solutions to the harder ones. To investigate this possibility, for every instance we looked into how good the optimal (or best) solution found for one outsourcing policy is when looked at as an approximate solution for another policy. In Table 6 we can observe the values obtained. Looking at the values per row in this table, we conclude that the policy that provides the most robust solutions for all the other policies is OD-CO. In fact, apart from this one, all the other policies provide a rather poor approximation for at least one other outsourcing policy. If we focus on the values per column, we conclude that CD-CO is the policy whose optimal solution is "easiest" to approximate using the solution provided by the other policies. An interesting aspect to notice is that the "matrix" presented in Table 6 is not symmetric. The differences are often quite significant. This indicates that, on average, the fact that one policy provides a good approximation to another one does not imply the reverse. The most extreme case involves the two customer outsourcing policies: we see that OD-CO solutions are very good approximations for CD-CO, but the reverse approximation is quite bad. This indicates that the optimal (or best feasible) solutions are clearly more sensitive to order-driven outsourcing than to cost-driven outsourcing. We can draw similar conclusions for other pairs of policies. In order to deepen the analysis, we disaggregated the results. In Table 7 we consider explicitly the type of demand (uncorrelated or correlated) and the type of capacity adopted for the facilities--low or high (\(\gamma=1\), \(\gamma=4\)). 
In Table 8 we consider again \begin{table} \begin{tabular}{l r r r r} \hline \hline & FO & CD-CO & OD-CO & RO \\ \cline{2-5} FO & & 0.463 & 20.378 & -0.014 \\ CD-CO & 17.826 & & 48.583 & 13.169 \\ OD-CO & 2.896 & 1.646 & & 5.318 \\ RO & 1.139 & 3.044 & 23.039 & \\ \hline \hline \end{tabular} \end{table} Table 6: Average percentage gaps (%) of the optimal/best-found solution using one policy (rows) with respect to the others (columns). the type of demand but now jointly with the demand pattern as well. We note that in many cases, there were instances that could not be solved to proven optimality (either because the time limit was reached or because the memory of the machine was exhausted). In such situations we work with best-found solutions. This explains some negative averages that we observe in Tables 7 and 8. Observing Tables 7 and 8 we conclude that, in general, uncorrelated demand leads to worse results than correlated demand. In other words, the different outsourcing policies provide poorer approximations to each other in the former case. In these tables we also observe that changing the capacity of the facilities seems to have a greater impact than changing the demand pattern. Additionally, we realize that the results for \(\gamma=4\) (high capacities) seem systematically worse than for \(\gamma=1\) (low capacities). Moreover, when we focus on OD-CO we observe that the instances with uncorrelated demand seem invariably more difficult to approximate using the other outsourcing policies than those with correlated demand. The worst approximation occurs (on average) when we approximate OD-CO by CD-CO for uncorrelated demand and the larger capacity of the facilities. The best approximations (on average) occur when we approximate RO by FO. In search of an explanation for the robustness of model FO (which provides good feasible solutions for CD-CO and RO) and of OD-CO (which does so for all policies), we analyzed the number of facilities opened in each solution. Figure 8 depicts such results, already disaggregated according to the type of demand (uncorrelated or correlated) and the size of the instances. Additionally, in Figure 8(a) we can observe the results for the two types of capacity, whereas in Figure 8(b) we have the information according to the demand pattern. Observing this figure we conclude that the OD-CO policy calls (on average) for the largest number of open facilities. This may make the solutions of this policy more robust when considered as feasible solutions to the other policies, since more facilities mean, for instance, more possibilities for reassigning customers and lower customer outsourcing costs. 
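The cross-policy comparison behind Tables 6, 7 and 8 can be summarized as follows: for each instance, the first-stage location decisions obtained under one policy are re-evaluated under another policy, and the resulting value is compared with the best value known for that other policy. The sketch below (in Python) illustrates this bookkeeping; the helpers `evaluate_under_policy` and `best_value`, as well as the particular gap convention used, are hypothetical placeholders introduced for illustration and are not taken from the paper.

```python
from itertools import permutations

def average_gap_matrix(instances, policies, evaluate_under_policy, best_value):
    """Average %gap of policy A's solution when evaluated under policy B.

    `instances` is a list of dicts holding, for each policy, the first-stage
    location decisions of its optimal/best-found solution (key "locations").
    Both helper functions are assumed to be supplied by the caller:
      evaluate_under_policy(inst, policy, locations) -> objective value with
          the locations fixed and the second stage re-solved for `policy`;
      best_value(inst, policy) -> optimal/best-known value for that policy.
    """
    gaps = {(a, b): [] for a, b in permutations(policies, 2)}
    for inst in instances:
        for a, b in permutations(policies, 2):
            locations_a = inst["locations"][a]        # policy A's open facilities
            value_ab = evaluate_under_policy(inst, b, locations_a)
            best_b = best_value(inst, b)
            gaps[(a, b)].append(100.0 * (value_ab - best_b) / abs(best_b))
    return {pair: sum(vals) / len(vals) for pair, vals in gaps.items()}

# Hypothetical usage with the acronyms used in this section:
# matrix = average_gap_matrix(instances, ["FO", "CD-CO", "OD-CO", "RO"],
#                             evaluate_under_policy, best_value)
```

Note that when the reference value is only a best-found (rather than proven optimal) solution, the gap computed this way can be negative, which is consistent with the negative averages appearing in Tables 7 and 8.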
\begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & \multicolumn{2}{c}{FO} & \multicolumn{2}{c}{CD-CO} & \multicolumn{2}{c}{OD-CO} & \multicolumn{2}{c}{RO} \\ \cline{3-10} & & \(\gamma\)=1 & \(\gamma\)=4 & \(\gamma\)=1 & \(\gamma\)=4 & \(\gamma\)=1 & \(\gamma\)=4 & \(\gamma\)=1 & \(\gamma\)=4 \\ \cline{3-10} FO & U & & & & & & & & \\ & C & & & & & & & & \\ CD-CO & U & 4.708 & 52.415 & & & & & & \\ & C & 2.854 & 11.327 & & & & & & \\ OD-CO & U & 0.893 & 0.230 & 0.005 & 0.121 & & & & \\ & C & 9.854 & 0.608 & 6.045 & 0.411 & & & & \\ RO & U & 1.316 & 0.160 & 3.600 & 0.160 & 21.210 & 28.366 & & \\ & C & 1.242 & 1.837 & 4.720 & 3.700 & 13.527 & 29.051 & & \\ \hline \hline \end{tabular} \end{table} Table 7: Data of Table 6 disaggregated according to the type of data and the type of capacity for the facilities. ## 5 Conclusions In this work we studied different outsourcing policies in the context of the facility location problem with Bernoulli demands. We extended the previous work on this problem by considering both uncorrelated and correlated service demands. Furthermore, in addition to the two policies that had already been proposed in the literature, we introduced two other possibilities. Each of the four investigated outsourcing policies leads to a variant of the problem for which a mathematical programming formulation was derived assuming a finite set of scenarios for the demand. An extensive computational study was performed to evaluate the extent to which the proposed mixed-integer linear programming models can be handled by a general-purpose solver, as well as to evaluate the capability of each model to produce high-quality feasible solutions for the others. The results show that two outsourcing policies lead to models that are easier to solve to proven optimality using a solver, namely facility outsourcing and reassignment outsourcing. Moreover, the fact that the two customer outsourcing policies induce mathematical models that are more difficult to solve to proven optimality motivates the use of approximate solutions in those cases. The results show that, on average, the optimal solution obtained for facility outsourcing turns out to be a good approximate solution for cost-driven customer outsourcing and for reassignment outsourcing. Nevertheless, the most "robust" outsourcing policy is order-driven customer outsourcing, in the sense that the corresponding optimal solutions are invariably good feasible solutions to the other three policies. This means that if a decision maker adopts the a priori solution provided by order-driven customer outsourcing, it is likely to perform reasonably well regardless of the outsourcing policy that is finally adopted. This work opens new research directions in terms of the explicit inclusion of outsourcing in stochastic capacitated facility location problems. 
In particular, it would be interesting to investigate whether probability distributions other than the Bernoulli can benefit from the insights provided by this paper. \begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & \multicolumn{2}{c}{FO} & \multicolumn{2}{c}{CD-CO} & \multicolumn{2}{c}{OD-CO} & \multicolumn{2}{c}{RO} \\ \cline{3-10} & & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 & PT1 & PT2 \\ \cline{3-10} FO & U & & & -0.145 & 0.491 & 22.291 & 23.468 & -0.027 & -0.031 \\ & C & & & 0.740 & 0.766 & 16.714 & 19.039 & 0.000 & 0.000 \\ CD-CO & U & 28.150 & 28.973 & & & 73.127 & 65.711 & 21.653 & 21.516 \\ & C & 7.336 & 6.845 & & & 29.862 & 25.633 & 5.275 & 4.231 \\ OD-CO & U & 0.546 & 0.576 & 0.196 & -0.070 & & & 3.786 & 2.093 \\ & C & 3.797 & 6.664 & 2.310 & 4.147 & & & 7.565 & 8.833 \\ RO & U & 1.316 & 0.363 & 3.601 & 1.166 & 21.210 & 24.262 & & \\ & C & 1.137 & 1.942 & 3.887 & 4.525 & 20.372 & 22.205 & & \\ \hline \hline \end{tabular} \end{table} Table 8: Data of Table 6 disaggregated according to the type of data and the demand pattern. ## Acknowledgments This work was supported by the Spanish Ministry of Economy and Competitiveness through MINECO/FEDER grants MTM2015-63779-R and MTM2019-105824GB-I00, and by National Funding from FCT--Fundacao para a Ciencia e a Tecnologia, Portugal, under project UIDB/04561/2020. This support is gratefully acknowledged. The authors thank the two anonymous reviewers whose comments and insights helped improve the article.
2302.06776
Minimum-link $C$-Oriented Paths Visiting a Sequence of Regions in the Plane
Let $E=\{e_1,\ldots,e_n\}$ be a set of $C$-oriented disjoint segments in the plane, where $C$ is a given finite set of orientations that spans the plane, and let $s$ and $t$ be two points. We seek a minimum-link $C$-oriented tour of $E$, that is, a polygonal path $\pi$ from $s$ to $t$ that visits the segments of $E$ in order, such that the orientations of its edges are in $C$ and their number is minimum. We present an algorithm for computing such a tour in $O(|C|^2 \cdot n^2)$ time. This problem already captures most of the difficulties occurring in the study of the more general problem, in which $E$ is a set of not-necessarily-disjoint $C$-oriented polygons.
Kerem Geva, Matthew J. Katz, Joseph S. B. Mitchell, Eli Packer
2023-02-14T01:27:41Z
http://arxiv.org/abs/2302.06776v1
# Minimum-link \(C\)-Oriented Paths Visiting a Sequence of Regions in the Plane ###### Abstract Let \(E=\{e_{1},\ldots,e_{n}\}\) be a set of \(C\)-oriented disjoint segments in the plane, where \(C\) is a given finite set of orientations that spans the plane, and let \(s\) and \(t\) be two points. We seek a minimum-link \(C\)-oriented tour of \(E\), that is, a polygonal path \(\pi\) from \(s\) to \(t\) that visits the segments of \(E\) in order, such that, the orientations of its edges are in \(C\) and their number is minimum. We present an algorithm for computing such a tour in \(O(|C|^{2}\cdot n^{2})\) time. This problem already captures most of the difficulties occurring in the study of the more general problem, in which \(E\) is a set of not-necessarily-disjoint \(C\)-oriented polygons. ## 1 Introduction We consider the problem in which we are given a sequence of regions, \(\mathcal{R}=(R_{1},R_{2},\ldots,R_{n})\), where each \(R_{i}\) is a subset of an underlying geometric domain, and our goal is to compute a tour (a path or a cycle) within the domain that visits the regions \(\mathcal{R}\) in the given order and is optimal in some prescribed sense. Optimality might be based on the Euclidean length of the tour, the number of turns in a polygonal tour (or, equivalently, the number of _links_ (edges) in the tour), a weighted cost function, etc. There are also variants of the problem in which it is important to specify exactly what constraints there are on the ordered visitation of the regions, particularly if the regions are not disjoint. The problem arises naturally and is also motivated by applications in curve simplification (e.g., [5]), vehicle routing (e.g., the traveling salesperson problem (TSP); see [7]), search and exploration (e.g., [3]), computing structures on imprecise points [6], task sequencing in robotics (see [2, 1]), etc. In this paper we focus on the version of the problem in which the regions \(R_{i}\) are disjoint \(C\)-oriented line segments (with orientations/slopes from a finite set \(C\)) in the plane, the tour is required to be polygonal and \(C\)-oriented, and the optimality criterion is to minimize the number of links (equivalently, the number of turns, or vertices in the polygonal tour). We briefly mention generalizations (deferred to the full paper), including the case in which the regions \(R_{i}\) are more general than disjoint line segments. More formally, let \(C\) be a finite set of orientations, which can be thought of as points on a unit circle centered at the origin. We assume that (i) \(C\) spans the plane, i.e., for any two points \(p,q\) in the plane, there exists a two-link (directed) path from \(p\) to \(q\) (or a one-link path), such that the orientation of the edges in the path belong to \(C\), and (ii) for any orientation \(c_{i}\in C\), the orientation \(\overline{c_{i}}\) is also in \(C\), where \(\overline{c_{i}}\) is the opposite orientation of \(c_{i}\). The requirement for paths to be \(C\)-oriented arises in some settings (mechanical constraints) but also has advantages in lower/upper bounding of the turn angles, in comparison with polygonal paths having general links, which may form arbitrarily sharp turns. 
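As a concrete illustration of assumption (i), the following sketch (ours, not taken from the paper) computes a two-link \(C\)-oriented path between two points by intersecting the line through \(p\) with one orientation and the line through \(q\) with another; any non-parallel pair of orientations in \(C\) works, and since the opposite of every orientation is also in \(C\), the two links can always be traversed in the required directions.

```python
import math

def two_link_path(p, q, angles, eps=1e-12):
    """Return (bend, a1, a2): a bend point and two orientations (radians) such
    that p -> bend -> q uses only the orientations a1, a2 (or their opposites).

    `angles` lists the orientations of C; only one non-parallel pair is needed,
    and such a pair exists whenever C spans the plane.
    """
    for a1 in angles:
        d1 = (math.cos(a1), math.sin(a1))
        for a2 in angles:
            d2 = (math.cos(a2), math.sin(a2))
            det = d1[0] * d2[1] - d1[1] * d2[0]
            if abs(det) < eps:          # parallel (or identical) orientations
                continue
            # Solve p + t*d1 = q + s*d2 for t (Cramer's rule).
            rx, ry = q[0] - p[0], q[1] - p[1]
            t = (rx * d2[1] - ry * d2[0]) / det
            bend = (p[0] + t * d1[0], p[1] + t * d1[1])
            return bend, a1, a2
    return None  # C does not span the plane

# Example with the axis-parallel orientations (|C| = 4):
# two_link_path((0, 0), (3, 2), [0, math.pi / 2, math.pi, 3 * math.pi / 2])
# returns the bend point (3.0, 0.0) together with the two orientations used.
```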
We focus on the following problem: _Minimum-link \(C\)-oriented tour of a sequence of \(C\)-oriented segments_: Let \(E=\{e_{1},\ldots,e_{n}\}\) be a set of \(C\)-oriented disjoint segments, that is, if we think of \(e\in E\) as a directed segment, by arbitrarily picking one of the two possible directions, then \(e\)'s orientation belongs to \(C\). Let \(s\) and \(t\) be two points that do not belong to any of the segments in \(E\). A _tour_ of \(E\) is a polygonal path \(\pi\) that begins at \(s\) and ends at \(t\) with the following property: There exists a sequence of points \(p_{1},\ldots,p_{n}\) on \(\pi\), such that, \(p_{i}\) precedes \(p_{i+1}\), for \(1\leq i\leq n-1\), and \(p_{i}\in e_{i}\), for \(1\leq i\leq n\). A tour is \(C\)-oriented, if the orientation of each of its edges belongs to \(C\). We wish to compute a \(C\)-oriented minimum-link tour of \(E\), that is, a \(C\)-oriented tour consisting of a minimum number of links (i.e., edges). Our main contribution is an efficient algorithm to compute a minimum-link \(C\)-oriented tour of a sequence of \(n\) disjoint \(C\)-oriented line segments, in time \(O(|C|^{2}\cdot n^{2})\). (The algorithm becomes \(O(n)\) in the special case of \(|C|=4\), e.g., axis-oriented paths.) #### 1.0.1 Related Work In the _touring polygons problem_ (TPP), one seeks a tour that is shortest in Euclidean length that visits a sequence of polygons; such a tour is found in polynomial time if the polygons are convex and is NP-hard in general (and has an FPTAS) [3]. Minimization of the link length of a tour visiting a sequence of (possibly overlapping) disks is studied in [5], where the motivation for this "ordered stabbing" problem was curve and map simplification (see also [9]). In contrast with our problem specification, in [5] the path edges are allowed to be of arbitrary orientation, not required to be \(C\)-oriented. This assumption leads to particular efficiency, as one can use an extension of linear-time line stabbing methods (see Egyed and Wenger [4]) to execute a greedy algorithm efficiently. Computing a minimum-link \(C\)-oriented path from start to goal among obstacles has been studied as well, without requiring visitation of a sequence of regions; see [8, 10]. ## 2 Preliminaries **Notation.** For any \(1\leq i\leq n\), let \(l(e_{i})\) be the number of links in a minimum-link path that begins at \(s\) and ends at a point on \(e_{i}\). We only consider \(C\)-oriented paths to \(e_{i}\) that visit the segments \(e_{1},\ldots,e_{i}\), as defined above. We refer to the number of links in such a path as its _length_. We distinguish between paths to \(e_{i}\) both by their length and by the orientation of their last link. Let \(I(e_{i},c_{j})\) (\(I^{+}(e_{i},c_{j})\)) be the set of maximal intervals on \(e_{i}\) formed by all paths of length \(l(e_{i})\) (\(l(e_{i})+1\)) from \(s\) to \(e_{i}\), whose last link has orientation \(c_{j}\). We set \(I(e_{i})=\bigcup_{c\in C}I(e_{i},c)\) and \(I^{+}(e_{i})=\bigcup_{c\in C}I^{+}(e_{i},c)\). For an orientation \(c_{j}\in C\), let \(c_{j+1}\) and \(c_{j-1}\) be the orientations in \(C\) that immediately succeed \(c_{j}\) and precede \(c_{j}\) in clockwise order, respectively. We denote by \(\phi(c_{j},c_{k})\) the set of orientations in \(C\) between \(c_{j}\) and \(c_{k}\) (in clockwise order from \(c_{j}\)), not including \(c_{j}\) and \(c_{k}\). 
Finally, we denote the ray emanating from \(p\) in orientation \(c_{j}\) by \(Ray(p,c_{j})\) and the line through \(p\) parallel to a segment of orientation \(c_{j}\) by \(Line(p,c_{j})\). Let \(a\) be an interval on \(e_{i}\) that belongs to one of the sets \(I(e_{i})\) or \(I^{+}(e_{i})\). Then \(a\) has a length \(l_{a}\) (which is either \(l(e_{i})\) or \(l(e_{i})+1\)) and an orientation \(c_{a}\in C\) associated with it. We denote the endpoints of \(a\) by \(a_{1}\) and \(a_{2}\), where \(a_{1}\) is to the left of \(a_{2}\), when approaching \(a\) through a path corresponding to \(a\) (i.e., a path starting at \(s\) and ending at a point in \(a\), which is of length \(l_{a}\) and whose last link is of orientation \(c_{a}\)). Next, we use \(a\) to define two regions of the plane, namely, \(PT(a)\) and \(\psi(a,c_{j})\). Let \(PT(a)\) denote the semi-slab consisting of all points that can be reached by extending the last link of a path corresponding to \(a\). We refer to such a path as a path that _passes through \(a\)_ and continues in the same orientation at which it reached \(a\) (i.e., \(c_{a}\)). Thus, the region \(PT(a)\) is the semi-slab bounded by the rays \(Ray(a_{1},c_{a}),Ray(a_{2},c_{a})\) and the interval \(a\) (see, e.g., the red region in Figure 1). Similarly, let \(\psi(a,c_{j})\) be the region of all points that can be reached by a path that passes through \(a\) and then, not necessarily immediately, turns and continues in orientation \(c_{j}\). Thus, \(\psi(a,c_{j})=\bigcup_{q\in PT(a)}Ray(q,c_{j})\), for example if \(c_{j}=\overline{c_{a}}\), then \(\psi(a,c_{j})\) is the slab defined by the lines \(Line(a_{1},c_{a})\) and \(Line(a_{2},c_{a})\) (for additional examples see Figure 8). Finally, for an interval \(b\in I^{+}(e_{i})\), we set \(\delta(b)=\{a\in I(e_{i})|a\subseteq b\}\). We now show that the sets \(I(e_{i})\) and \(I^{+}(e_{i})\) are sufficient, in the sense that there exists a minimum-link tour of \(E\) whose portion from \(s\) to \(e_{i}\) corresponds to an interval in \(I(e_{i})\cup I^{+}(e_{i})\). Assume this is false, and let \(\pi\) be a minimum-link tour of \(E\), such that its portion \(\pi_{i}\) from \(s\) to \(e_{i}\) does not correspond to an interval in \(I(e_{i})\cup I^{+}(e_{i})\). Then, the length of \(\pi_{i}\) (denoted \(|\pi_{i}|\)) is at least \(l(e_{i})+2\). Let \(p\) be the point on \(e_{i}\) where \(\pi_{i}\) ends, and denote the portion of \(\pi\) from \(p\) to \(t\) by \(\pi^{i}\). Then \(|\pi|\geq l(e_{i})+2+|\pi^{i}|\), if \(\pi\) makes a turn at \(p\), or \(|\pi|=l(e_{i})+2+|\pi^{i}|-1\), otherwise. Consider any path \(\pi^{\prime}_{i}\) from \(s\) to \(e_{i}\) that corresponds to an interval in \(I(e_{i})\) and let \(p^{\prime}\) be the point on \(e_{i}\) where \(\pi^{\prime}_{i}\) ends. Then, the tour obtained by \(\pi^{\prime}_{i}\), the edge \(p^{\prime}p\) and \(\pi^{i}\) is a tour of \(E\) of length at most \(l(e_{i})+1+|\pi^{i}|\leq|\pi|\). 
We have thus shown that **Claim 1**: _There exists a minimum-link tour of \(E\) whose portion from \(s\) to \(e_{i}\) corresponds to an interval in \(I(e_{i})\cup I^{+}(e_{i})\), for \(1\leq i\leq n\)._ Finally, since our assumptions on the set of orientations \(C\) imply that there exists a two-link path from \(p\) to \(q\), for any pair of points \(p,q\) in the plane, we have **Claim 2**: \(l(e_{i-1})\leq l(e_{i})\leq l(e_{i-1})+2\)_, for \(1\leq i\leq n\) (where \(l(e_{0})=0\))._ ## 3 The Main Algorithm In this section, we present an algorithm for computing a minimum-link tour of \(E\). The algorithm consists of two stages. In the first stage, it considers the segments of \(E\), one at a time, beginning with \(e_{1}\), and, at the current segment \(e_{i}\), it computes the sets \(I(e_{i})\) and \(I^{+}(e_{i})\) from the sets \(I(e_{i-1})\) and \(I^{+}(e_{i-1})\), associated with the previous segment. In the second stage, it constructs a minimum-link tour of \(E\), beginning from its last link, by consulting the sets \(I(\cdot)\) and \(I^{+}(\cdot)\) computed in the first stage. We begin with several definitions that will assist us in the description of the algorithm. Given a set \(I\) of intervals on \(e_{i}\), where each interval \(a\in I\) is associated with some fixed length (link distance) \(l_{a}=l\) and an orientation \(c_{a}\), and \(c_{j}\in C\), we define the sets of intervals _+0-intervals_, _+1-intervals_, _+2-intervals_ on \(e_{i+1}\) with respect to \(I\) and \(c_{j}\) (the definition of the first set does not depend on \(c_{j}\)). The **+0-intervals** on \(e_{i+1}\) consist of the intervals on \(e_{i+1}\) formed by passing through the intervals of \(I\), without making any turns. It is constructed by computing the interval \(b=PT(a)\cap e_{i+1}\), for each \(a\in I\), and including it in the set, setting \(l_{b}=l\) and \(c_{b}=c_{a}\), if it is not empty. The **+1-intervals** on \(e_{i+1}\) associated with orientation \(c_{j}\) consist of the intervals on \(e_{i+1}\) formed by passing through the intervals of \(I\) and then making a turn in orientation \(c_{j}\). It is constructed by computing the interval \(b=\psi(a,c_{j})\cap e_{i+1}\), for each \(a\in I\), and including it in the set, setting \(l_{b}=l+1\) and \(c_{b}=c_{j}\). The **+2-intervals** on \(e_{i+1}\) associated with orientation \(c_{j}\) consist of the intervals on \(e_{i+1}\) formed by passing through the intervals of \(I\) and then making two turns, where the first is in any orientation \(c\neq\overline{c_{a}}\) and the second is in orientation \(c_{j}\); see Lemma 3. We construct it as follows. First, we check if there is an interval \(a\in I\) such that \(c_{a}\notin\{c_{j-1},c_{j},c_{j+1}\}\). If there is such an interval, we include the interval \(b=e_{i+1}\), setting \(l_{b}=l+2\) and \(c_{b}=c_{j}\), and stop; see Lemma 4. Otherwise, for each \(a\in I\), we include the intervals \(b^{+}=\psi(a,\overline{c_{a+1}})\cap e_{i+1}\) and \(b^{-}=\psi(a,\overline{c_{a-1}})\cap e_{i+1}\), provided that they are not empty, and set \(l_{b^{+}}=l_{b^{-}}=l+2\) and \(c_{b^{+}}=c_{b^{-}}=c_{j}\); see paragraph following Lemma 4. ### Stage I We are now ready to describe the first stage of the algorithm. It is convenient to treat the points \(s\) and \(t\) as segments \(e_{0}\) and \(e_{n+1}\), respectively. We set \(l(e_{0})=0\) and, for each \(c_{j}\in C\), we insert the interval \(a=e_{0}\), after setting \(l_{a}=0\) and \(c_{a}=c_{j}\), into \(I(e_{0},c_{j})\). 
Similarly, for each \(c_{j}\in C\), we insert the interval \(a=e_{0}\), after setting \(l_{a}=1\) and \(c_{a}=c_{j}\), into \(I^{+}(e_{0},c_{j})\). We iterate over the segments \(e_{1},\ldots,e_{n+1}\), where in the \(i\)'th iteration, \(1\leq i\leq n+1\), we compute \(l(e_{i})\) and the pair of sets \(I(e_{i})\) and \(I^{+}(e_{i})\). Assume we have already processed the segments \(e_{0},\ldots,e_{i}\), for some \(0\leq i\leq n\). We describe the next iteration, in which we compute \(l(e_{i+1})\) and the sets \(I(e_{i+1})\) and \(I^{+}(e_{i+1})\). For each \(c_{j}\in C\), we compute the \(+0\)-intervals on \(e_{i+1}\) with respect to \(I(e_{i},c_{j})\) and store them in \(I(e_{i+1},c_{j})\). If at least one of the sets \(I(e_{i+1},c_{j})\) is non-empty, we set \(l(e_{i+1})=l(e_{i})\) (otherwise \(l(e_{i+1})>l(e_{i})\)). Next, for each \(c_{j}\in C\), we compute the \(+0\)-intervals on \(e_{i+1}\) with respect to \(I^{+}(e_{i},c_{j})\) and the \(+1\)-intervals on \(e_{i+1}\) with respect to \(I(e_{i})\setminus I(e_{i},c_{j})\) (and \(c_{j}\)). We store these intervals (if exist) either in \(I^{+}(e_{i+1},c_{j})\), if \(l(e_{i+1})=l(e_{i})\), or in \(I(e_{i+1},c_{j})\), if \(l(e_{i+1})>l(e_{i})\). If we performed the latter option, then we set \(l(e_{i+1})=l(e_{i})+1\). Finally, if we performed one of the two options, then we repeatedly merge overlapping intervals in the set (either \(I^{+}(e_{i+1},c_{j})\) or \(I(e_{i+1},c_{j})\)), until there are no such intervals. If \(l(e_{i+1})>l(e_{i})\), then, for each \(c_{j}\in C\), we compute the \(+2\)-intervals on \(e_{i+1}\) with respect to \(I(e_{i})\) and the \(+1\)-intervals on \(e_{i+1}\) with respect \(I^{+}(e_{i})\setminus I^{+}(e_{i},c_{j})\). We store these intervals (if exist) either in \(I^{+}(e_{i+1},c_{j})\), if \(l(e_{i+1})=l(e_{i})+1\), or in \(I(e_{i+1},c_{j})\), otherwise (i.e., we still have not fixed \(l(e_{i+1})\)). If we performed the latter option, then we set \(l(e_{i+1})=l(e_{i})+2\), and, as above, if we performed one of the two options, then we repeatedly merge overlapping intervals in the set, until there are no such intervals. Finally, if \(l(e_{i+1})=l(e_{i})+2\), then, for each \(c_{j}\in C\), we set \(I^{+}(e_{i+1},c_{j})=e_{i+1}\); see Claim 5. ### Stage II In this stage we use the information collected in the first stage to construct a minimum-link tour \(\pi\) of \(E\). We construct \(\pi\) incrementally beginning at \(t\) and ending at \(s\). That is, in the first iteration we add the portion of \(\pi\) from \(t\) to \(e_{n}\), in the second iteration we add the portion from \(e_{n}\) to \(e_{n-1}\), etc. Assume that we have already constructed the portion of \(\pi\) from \(t\) to \(e_{i}\), where this portion ends at point \(p\) of interval \(a\) on \(e_{i}\). We describe in Algorithm 1 (see Appendix 0.A.1) how to compute the portion from \(e_{i}\) to \(e_{i-1}\), which begins at the point \(p\) of interval \(a\) and ends at a point \(p^{\prime}\) of interval \(b\) on \(e_{i-1}\) (where \(b\in I(e_{i-1})\cup I^{+}(e_{i-1})\)) and consists of \(l_{a}-l_{b}+1\) links. Before continuing to the next iteration, we set \(p=p^{\prime}\) and \(a=b\). After adding the last portion, which ends at \(s\), we remove all the redundant vertices from \(\pi\), i.e., vertices at which \(\pi\) does not make a turn. ## 4 Analysis In this section, we prove the correctness of our two-stage algorithm and bound its running time, via a sequence of lemmas and claims. 
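Before turning to the lemmas, the basic geometric step used repeatedly in Stage I, namely intersecting the semi-slab \(PT(a)\) of an interval with the next segment to obtain a +0-interval, can be sketched as follows. This is an illustrative helper of ours, not the paper's Algorithm 1: it clips the segment against the linear constraints defining \(PT(a)\) and returns the parameter range of the surviving sub-interval.

```python
def clip_to_PT(a1, a2, c, P, Q, eps=1e-12):
    """Parameter range [u_lo, u_hi] of the segment PQ lying inside PT(a),
    where a = [a1, a2] and PT(a) is a extended in the direction c (unit vector).

    Illustrative sketch only; it assumes c is not parallel to the interval a.
    Points are (x, y) pairs; returns None if the intersection is empty.
    """
    ex, ey = a2[0] - a1[0], a2[1] - a1[1]            # direction of the interval a
    det = ex * c[1] - ey * c[0]
    if abs(det) < eps:
        return None                                   # degenerate configuration

    def coords(x):
        # Write x - a1 = alpha*(a2 - a1) + t*c and return (alpha, t).
        rx, ry = x[0] - a1[0], x[1] - a1[1]
        return (rx * c[1] - ry * c[0]) / det, (ex * ry - ey * rx) / det

    (alpha_P, t_P), (alpha_Q, t_Q) = coords(P), coords(Q)
    u_lo, u_hi = 0.0, 1.0
    # PT(a) is described by 0 <= alpha <= 1 and t >= 0; both are affine in the
    # parameter u of the point P + u*(Q - P), so each constraint clips [0, 1].
    for f0, f1, lo, hi in ((alpha_P, alpha_Q - alpha_P, 0.0, 1.0),
                           (t_P, t_Q - t_P, 0.0, float("inf"))):
        if abs(f1) < eps:
            if not (lo - eps <= f0 <= hi + eps):
                return None
            continue
        lo_u, hi_u = sorted(((lo - f0) / f1, (hi - f0) / f1))
        u_lo, u_hi = max(u_lo, lo_u), min(u_hi, hi_u)
    return (u_lo, u_hi) if u_lo <= u_hi else None
```

The +1- and +2-intervals can be obtained in the same spirit by clipping against \(\psi(a,c_{j})\) instead, i.e. by additionally allowing a backward shift along \(c_{j}\) before testing membership in \(PT(a)\); overlapping results are then merged as described above.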
Lemma 1: _For any interval \(a\in I(e_{i})\) and for any \(c_{j}\in C\setminus\{c_{a}\}\), there exists an interval \(b\in I^{+}(e_{i},c_{j})\) such that \(a\subseteq b\), for \(1\leq i\leq n\)._ Proof: Let \(p\in a\), then there is a path \(\pi_{i}\) of length \(l(e_{i})\) that begins at \(s\), ends at \(p\), and whose last link is of orientation \(c_{a}\). By making a turn at \(p\) in orientation \(c_{j}\) (without extending \(\pi_{i}\)), we obtain a path \(\pi_{i}^{\prime}\) of length \(l(e_{i})+1\), whose last link is of orientation \(c_{j}\). Therefore, there is an interval \(b\in I^{+}(e_{i},c_{j})\) such that \(p\in b\), and since (by construction) there are no overlapping intervals in \(I^{+}(e_{i},c_{j})\), we conclude that \(a\subseteq b\). Lemma 2: _For any \(1\leq i\leq n-1\) and \(c_{j}\in C\), if there is an interval \(a\in I(e_{i},\overline{c_{j}})\cup I^{+}(e_{i},\overline{c_{j}})\) such that \(PT(a)\cap e_{i+1}\neq\emptyset\), then, for any interval \(b\in I(e_{i},c_{j})\cup I^{+}(e_{i},c_{j})\), we have that \(PT(b)\cap e_{i+1}=\emptyset\)._ Proof: If there exist intervals \(a\in I(e_{i},\overline{c_{j}})\cup I^{+}(e_{i},\overline{c_{j}})\) and \(b\in I(e_{i},c_{j})\cup I^{+}(e_{i},c_{j})\), such that \(e_{i+1}\) intersects both \(PT(a)\) and \(PT(b)\), then \(e_{i+1}\) must intersect \(e_{i}\) (see Figure 1) -- contradiction. The following claim bounds the number of intervals with associated length and orientation \(l(e_{i})+1\) and \(c_{j}\), respectively, that are 'created' on \(e_{i+1}\). **Claim 3**: _At most \(\max\{|I(e_{i},\overline{c_{j}})|,|I^{+}(e_{i},c_{j})|\}+2\) intervals with associated length and orientation \(l(e_{i})+1\) and \(c_{j}\), respectively, are 'created' on \(e_{i+1}\), during the execution of the algorithm._ Proof: There are two ways to reach a point on \(e_{i+1}\) with a path of length \(l(e_{i})+1\) whose last link is of orientation \(c_{j}\). The first is by passing through one of the intervals in \(I(e_{i})\setminus I(e_{i},c_{j})\) and then making a turn in orientation \(c_{j}\). The second is by passing through one of the intervals in \(I^{+}(e_{i},c_{j})\), without making any turn. That is, the intervals on \(e_{i+1}\) with associated length \(l(e_{i})+1\) and associated orientation \(c_{j}\) are determined by the intervals in \(I^{+}(e_{i},c_{j})\cup(I(e_{i})\setminus I(e_{i},c_{j}))\). Consider an interval \(b\in I^{+}(e_{i},c_{j})\) (e.g., the blue interval in Figure 2), and let \(c\in C\) be the orientation of \(e_{i}\) when directed from \(b_{1}\) to \(b_{2}\). We divide \(\delta(b)\setminus I(e_{i},\overline{c_{j}})\) into four subsets as follows: \(A=\{a\in\delta(b)\mid c_{a}\in\phi(c_{j},c)\cup\{c\}\}\), \(B=\{a\in\delta(b)\mid c_{a}\in\phi(c,\overline{c_{j}})\}\), \(C=\{a\in\delta(b)\mid c_{a}\in\phi(\overline{c_{j}},\overline{c})\}\), and \(D=\{a\in\delta(b)\mid c_{a}\in\phi(\overline{c},c_{j})\cup\{\overline{c}\}\}\). We denote by \(R_{b\cup A}\) the region of all points that can be reached by a path that passes through \(b\), or passes through \(a\in A\) and then makes a turn in orientation \(c_{j}\) (i.e., \(R_{b\cup A}=PT(b)\cup\bigcup_{a\in A}\psi(a,c_{j})\)). We compute the boundary of \(R_{b\cup A}\) from \(PT(b)\), by adding the regions \(\psi(a,c_{j})\), one at a time, for each interval \(a\in A\). Let \(\psi(a,c_{j})\), for some \(a\in A\), be the region that is added in the first step (see the red interval in Figure 2). 
Since \((a_{1},a_{2})\subseteq(b_{1},b_{2})\) and \(c_{a}\in\phi(c_{j},c)\cup\{c\}\), \(Ray(a_{2},c_{a})\) and \(Ray(b_{2},c_{j})\) intersect at a point \(p_{a}\) (see Figure 2). By passing through \(a\) and then turning before reaching \(Ray(b_{2},c_{j})\) (i.e., at one of the points belonging to \(PT(a)\cap PT(b)\)), we cannot reach any point that is not already in \(PT(b)\). However, by turning after crossing \(Ray(b_{2},c_{j})\), we can reach points that are in the area bounded by \(Ray(p_{a},c_{j})\) and \(Ray(p_{a},c_{a})\) (the shaded area in Figure 2). Thus, the region \(R_{b\cup A}\) at the end of the first step, is bounded by \(Ray(b_{1},c_{j})\), \((b_{1},b_{2})\), \((b_{2},p_{a})\) and \(Ray(p_{a},c_{a})\), as can be seen in Figure 2. Notice the semi-infinite convex 2-chain that we obtain at the end of the first step, namely, the chain consisting of \((b_{2},p_{a})\) followed by \(Ray(p_{a},c_{a})\). It is easy to see that the region \(R_{b\cup A}\) at the end of the last step, is bounded by \(Ray(b_{1},c_{j})\), \((b_{1},b_{2})\), and a semi-infinite convex chain, denoted \(l_{A}\), consisting of at most \(|A|+1\) edges (see red chain in Figure 2(c)). Finally, if \(A=\emptyset\), then \(R_{b\cup A}=PT(b)\) and we set \(l_{A}=Ray(b_{2},c_{j})\). Next, we set \(R_{b\cup D}=PT(b)\cup\bigcup_{a\in D}\psi(a,c_{j})\), and compute the convex chain \(l_{D}\), which, together with \(Ray(b_{2},c_{j})\) and \((b_{1},b_{2})\), defines the boundary of \(R_{b\cup D}\). Once again, if \(D=\emptyset\), we set \(l_{D}=ray(b_{1},c_{j})\). Finally, we compute in a similar manner the convex chains \(l_{B}\), which defines (together with \(Ray(b_{1},c_{j})\)) the boundary of \(R_{b\cup B}=PT(b)\cup\bigcup_{a\in B}\psi(a,c_{j})\) (see purple chain in Figure 2(b)), and \(l_{C}\), which defines (together with \(Ray(b_{2},c_{j})\)) the boundary of \(R_{b\cup C}=PT(b)\cup\bigcup_{a\in C}\psi(a,c_{j})\). We now set \(R=R_{b\cup A}\cup R_{b\cup B}\cup R_{b\cup C}\cup R_{b\cup D}\), then \(R\) is the region of all points that can be reached by a path that passes through \(b\), or passes through \(a\in\delta(b)\setminus I(e_{i},\overline{c_{j}})\) and then makes a turn in orientation \(c_{j}\). Therefore, \(R\cap e_{i+1}\) gives us the intervals on \(e_{i+1}\) with length \(l(e_{i})+1\) and orientation \(c_{j}\), which are created by passing through an interval in \(\{b\}\cup\delta(b)\setminus I(e_{i},\overline{c_{j}})\). In order to find these intervals, we identify the boundary of \(R\) in each of the following four cases: * **Case A:**\(B=\emptyset\) and \(C=\emptyset\) (as illustrated in Figure 2(a)) In this case, \(R=R_{b\cup A}\cup R_{b\cup D}\), since \(R_{b\cup B}=R_{b\cup C}=PT(b)\) and \(PT(b)\subseteq R_{b\cup A},R_{b\cup D}\), and \(R\)'s boundary is composed of \(l_{A}\), \(l_{D}\) and \(b\). * **Case B:**\(B\neq\emptyset\) and \(C=\emptyset\) (as illustrated in Figure 2(b)) In this case, the boundary of \(R\) is composed of \(l_{B}\) and \(l_{D}\), since \(R_{b\cup A}\subseteq R_{b\cup B}\). * **Case C:**\(B=\emptyset\) and \(C\neq\emptyset\) (as illustrated in Figure 2(c)) In this case, the boundary of \(R\) is composed of \(l_{A}\) and \(l_{C}\), since \(R_{b\cup D}\subseteq R_{b\cup C}\). * **Case D:**\(B\neq\emptyset\) and \(C\neq\emptyset\) (as illustrated in Figure 2(d)) in this case, \(R=R_{b\cup B}\cup R_{b\cup C}\), and its boundary is the convex chain \(l\) that is obtained from the chains \(l_{B}\) and \(l_{C}\), see Figure 2(d). 
We now examine how \(e_{i+1}\) can intersect \(R\), in each of these cases. First, if \(e_{i+1}\) does not intersect the boundary of \(R\), then either \(R\cap e_{i+1}=e_{i+1}\) or \(R\cap e_{i+1}=\emptyset\). In the former case, one interval is formed on \(e_{i+1}\), which contains both its endpoints, and in the latter case, no interval is formed on \(e_{i+1}\). Next, assume that \(e_{i+1}\) intersects the boundary of \(R\). We distinguish between the case where there is an interval \(h\in I(e_{i},\overline{c_{j}})\) such that \(PT(h)\cap e_{i+1}\neq\emptyset\), and the case where there is no such interval. **There is an interval \(h\in I(e_{i},\overline{c_{j}})\) such that \(PT(h)\cap e_{i+1}\neq\emptyset\).** Then, by Lemma 2, \(PT(b)\cap e_{i+1}=\emptyset\). **If Case A:** Clearly, \(e_{i+1}\) cannot intersect both \(l_{A}\) and \(l_{D}\), since this would imply \(PT(b)\cap e_{i+1}\neq\emptyset\) (see Figure 4(a)). Therefore, \(e_{i+1}\) intersects exactly one of these chains, either at a single point or at two points. If \(e_{i+1}\) intersects the chain at a single point \(q\), then a single interval is formed on \(e_{i+1}\), whose endpoints are \(q\) and the endpoint of \(e_{i+1}\) that lies in \(R\) (see the edge \(e_{i+1}^{1}\) in Figure 3(a)). If \(e_{i+1}\) intersects the chain at two points, \(p\) and \(p^{\prime}\), then two intervals are formed on \(e_{i+1}\). Figure 3: The boundary of \(R\). The endpoints of these intervals are \(p\) and \(p^{\prime}\) on one side and the corresponding endpoints of \(e_{i+1}\) on the other side (see the edge \(e_{i+1}^{2}\) in Figure 3(a)). **If Case B:** Unlike Case A, the fact that \(PT(b)\cap e_{i+1}=\emptyset\) does not prevent \(e_{i+1}\) from intersecting both \(l_{B}\) and \(l_{D}\). However, \(e_{i+1}\) can intersect these chains in at most two points (in total), and as in Case A at most two intervals are formed on \(e_{i+1}\), where each of them contains an endpoint of \(e_{i+1}\) (see Figure 4(b)). **If Case C:** Since Cases B and C are symmetric, at most two intervals are formed on \(e_{i+1}\), each of which contains an endpoint of \(e_{i+1}\). **If Case D:** If \(e_{i+1}\) intersects \(l\) at a single point \(q\), then a single interval is formed on \(e_{i+1}\), whose endpoints are \(q\) and the endpoint of \(e_{i+1}\) that lies in \(R\). If \(e_{i+1}\) intersects \(l\) at two points \(p\) and \(p^{\prime}\), then \(R\cap e_{i+1}\) consist of all the points on \(e_{i+1}\), except for those in the interior of \((p,p^{\prime})\). Therefore, two intervals are formed on \(e_{i+1}\), and their endpoints are \(p\) and \(p^{\prime}\) on one side and the corresponding endpoints of \(e_{i+1}\) on the other side (see Figure 3(b)) We have shown that by passing through an interval in \(\{b\}\cup\delta(b)\setminus I(e_{i},\overline{c_{j}})\), at most two intervals (with associated length \(l(e_{i})+1\) and orientation \(c_{j}\)) are formed on \(e_{i+1}\). Moreover, each of these intervals contains an endpoint of \(e_{i+1}\). Therefore, the total number of such intervals that are formed on \(e_{i+1}\), by passing through an interval in \(\bigcup_{b\in I^{+}(e_{i},c_{j})}\{b\}\cup\delta(b)\setminus I(e_{i},\overline {c_{j}})\) is at most two. (For each endpoint \(p\) of \(e_{i+1}\), we retain only the longest interval with \(p\) as one of its endpoints.) 
Finally, observe that by passing through an interval in \(I(e_{i},\overline{c_{j}})\) and turning backwards in orientation \(c_{j}\), at most one interval is formed on \(e_{i+1}\), which does not necessarily contain an endpoint of \(e_{i+1}\). We conclude that at most \(|I(e_{i},\overline{c_{j}})|+2\) intervals (with associated length \(l(e_{i})+1\) and orientation \(c_{j}\)) are formed on \(e_{i+1}\) during the execution of the algorithm (in the case that there is an interval \(h\in I(e_{i},\overline{c_{j}})\) such that \(PT(h)\cap h\) is a \(l(e_{i},\overline{c_{j}})\)-interval in \(h\).) Figure 4: \(e_{i+1}\) intersects \(R\)’s boundary either at a single point \(q\) or at two points \(p\) and \(p^{\prime}\). \(e_{i+1}\neq\emptyset\)). We have used the equality \(\bigcup_{b\in I^{+}(e_{i},c_{j})}\{b\}\cup\delta(b)=I^{+}(e_{i},c_{j})\cup I(e_{i}) \setminus I(e_{i},c_{j})\), which follows from Lemma 1. We now proceed to the complementary case. For any interval \(h\in I(e_{i},\overline{c_{j}})\), \(PT(h)\cap e_{i+1}=\emptyset\).We defer the details of this case (which are similar to those of the previous case) to Appendix 0.A.2. These details lead to the conclusion that at most \(|I^{+}(e_{i},c_{j})|+2\) intervals (with associated length \(l(e_{i})+1\) and orientation \(c_{j}\)) are formed on \(e_{i+1}\) during the execution of the algorithm in this case. Since only one of the two cases holds (i.e., either there is such an interval \(h\) or there is not), we conclude that at most \(max\left\{|I(e_{i},\overline{c_{j}})|+2,|I^{+}(e_{i},c_{j})|+2\right\}\)\(=max\left\{|I(e_{i},\overline{c_{j}})|,|I^{+}(e_{i},c_{j})|\right\}+2\) intervals with associated length \(l(e_{i})+1\) and orientation \(c_{j}\) are formed on \(e_{i+1}\) during the execution of the algorithm. This completes the proof of Claim 3. Lemma 3: _For any interval \(a\in I(e_{i})\) and orientation \(c_{j}\in C\), we do not need to compute the interval on \(e_{i+1}\) with associated length and orientation \(l(e_{i})+2\) and \(c_{j}\), respectively, which is formed by passing through \(a\) and then making two turns, where the first is in orientation \(\overline{c_{a}}\)._ Proof: By Claim 2, \(l(e_{i})\leq l(e_{i+1})\leq l(e_{i})+2\). So, the intervals on \(e_{i+1}\) of length \(l(e_{i})+2\) are only relevant if \(l(e_{i+1})>l(e_{i})\) (Claim 1). Assume therefore that \(l(e_{i+1})>l(e_{i})\), and let \(a\in I(e_{i})\) (e.g., the red interval in Figure 6). Let \(\pi\) be a tour of \(E\) that passes through \(a\) at a point \(p_{i}\), makes a turn in orientation \(\overline{c_{a}}\) at point \(p\), and makes another turn in orientation \(c_{j}\) at point \(p^{\prime}\), such that \(\pi_{i+1}\) (the portion of \(\pi\) from \(s\) to \(e_{i+1}\)) corresponds to an interval of length \(l(e_{i})+2\). Figure 5: Cases A and B, where there is no such interval \(h\). We distinguish between two cases. If \(pp^{\prime}\cap e_{i}=\emptyset\) (i.e., the second turn is before \(\pi\) crosses \(e_{i}\) again), as shown in Figure 5(a), then \(pp^{\prime}\) does not intersect \(e_{i+1}\), since this would imply \(l(e_{i})=l(e_{i+1})\). Therefore, \(\pi\) reaches \(e_{i+1}\) only after the turn at \(p^{\prime}\), and the tour \(\pi^{\prime}\) which is obtained from \(\pi\) by deleting the link \(pp^{\prime}\) (see Figure 5(b)), is a tour of \(E\) of length \(|\pi|-1\), hence \(\pi\) is not a minimum-link tour of \(E\). 
Since our goal is to find a minimum-link tour of \(E\), we do not need to compute the interval on \(e_{i+1}\) formed by paths such as \(\pi\) satisfying the condition above. If \(pp^{\prime}\cap e_{i}\neq\emptyset\) (i.e., the second turn is not before \(\pi\) crosses \(e_{i}\) again), let \(T\) denote the region of all points that can be reached by such paths, i.e., paths such as \(\pi\) satisfying the condition above (see the orange region in Figure 6(a)). Then \(T\cap e_{i+1}\) is the interval on \(e_{i+1}\) with associated length \(l(e_{i})+2\) and orientation \(c_{j}\), formed by these paths. But, by Lemma 1, there exists \(b\in I^{+}(e_{i},\overline{c_{a}})\) such that \(a\subseteq b\) (see the blue interval in Figure 6(b)), and clearly \(T\subseteq\psi(b,c_{j})\), implying \(T\cap e_{i+1}\subseteq\psi(b,c_{j})\cap e_{i+1}\). The latter interval, i.e., \(\psi(b,c_{j})\cap e_{i+1}\) is computed by our algorithm, so we do not need to compute \(T\cap e_{i+1}\). Lemma 4: _For any interval \(a\in I(e_{i})\), any point \(p\in\mathbb{R}^{2}\), and any orientation \(c_{j}\notin\{c_{a},c_{a+1},c_{a-1}\}\), \(p\) can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\) and another turn in orientation \(c_{j}\)._ Proof: Consider any interval \(a\in I(e_{i})\). Recall that \(\psi(a,\overline{c_{a+1}})\) (\(\psi(a,\overline{c_{a-1}})\)) denotes the region of all the points that can be reached by a path that passes through \(a\) and then makes a turn in orientation \(\overline{c_{a+1}}\) (\(\overline{c_{a-1}}\)) (see Figure 7). It is easy to see that \(\psi(a,c)\subseteq\psi(a,\overline{c_{a+1}})\) for any \(c\in\phi(\overline{c_{a}},c_{a})\) and \(\psi(a,c)\subseteq\psi(a,\overline{c_{a-1}})\) for any \(c\in\phi(c_{a},\overline{c_{a}})\). Therefore, \(\Delta_{a}=\psi(a,\overline{c_{a+1}})\cup\psi(a,\overline{c_{a-1}})\) is the region of all the points that can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\) (see Figure 7(c)). Figure 6: Proof of Lemma 3. The case where the second turn is before \(\pi\) crosses \(e_{i}\) again. Consider any point \(p\in\mathbb{R}^{2}\) and any orientation \(c_{j}\notin\{c_{a},c_{a+1},c_{a-1}\}\). If \(p\in\Delta_{a}\), then \(p\) can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\). By making an additional turn at \(p\) in orientation \(c_{j}\) (without extending the path), we obtain a path that reaches \(p\) as required. If \(p\in\overline{\Delta_{a}}=\mathbb{R}^{2}\backslash\Delta_{a}\), then \(Ray(p,\overline{c_{j}})\cap\Delta_{a}\neq\emptyset\), since \(\overline{c_{j}}\notin\{\overline{c_{a}}_{-1},\overline{c_{a}},\overline{c_{a }}_{+1}\}\) (as shown in Figure 9). Let \(p^{\prime}\) be any point on \(Ray(p,\overline{c_{j}})\cap\Delta_{a}\), then \(p^{\prime}\) can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\), and by extending this path by adding the link \(p^{\prime}p\), we obtain a path that reaches \(p\) as required. Consider the region \(\Delta_{a}=\psi(a,\overline{c_{a}}_{+1})\cup\psi(a,\overline{c_{a}}_{-1})\) defined in the proof of Lemma 4. Then, as mentioned in the proof of Lemma 4, \(\Delta_{a}\) is the region of all the points that can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\). 
In addition, we notice that by extending such a path by adding a link in orientation \(c_{j}\), for \(c_{j}\in\{c_{a},c_{a+1},c_{a-1}\}\), we cannot leave \(\Delta_{a}\) (see Figure 10), since for any point \(q\in\Delta_{a}\), \(Ray(q,c_{j})\subseteq\Delta_{a}\). The following claim bounds the number of intervals with associated length and orientation \(l(e_{i})+2\) and \(c_{j}\), respectively, that are 'created' on \(e_{i+1}\). **Claim 4**: _At most \(|I^{+}(e_{i},\overline{c_{j}})|+2\) intervals with associated length and orientation \(l(e_{i})+2\) and \(c_{j}\), respectively, are 'created' on \(e_{i+1}\), during the execution of the algorithm._ Proof: The proof can be found in Appendix 0.A.3. Here, we only observe that there are two ways to reach a point on \(e_{i+1}\) with a path of length \(l(e_{i})+2\) whose last link is of orientation \(c_{j}\). The first is by passing through one of the intervals \(a\in I(e_{i})\) and then making two turns, where the first one is in orientation \(c\neq\overline{c_{a}}\) and the second one is in orientation \(c_{j}\) (see Lemma 3). The second way is by passing through one of the intervals in \(I^{+}(e_{i})\setminus I^{+}(e_{i},c_{j})\), and then making a turn in orientation \(c_{j}\). That is, the intervals on \(e_{i+1}\) with associated length \(l(e_{i})+2\) and associated orientation \(c_{j}\) are determined by the intervals in \(I(e_{i})\cup(I^{+}(e_{i})\setminus I^{+}(e_{i},c_{j}))\). Figure 7: Proof of Lemma 3. The case where the second turn is not before \(\pi\) crosses \(e_{i}\) again. The following claim bounds the number of intervals with associated length and orientation \(l(e_{i})+3\) and \(c_{j}\), respectively, that are 'created' on \(e_{i+1}\). **Claim 5**: _For any \(q\in e_{i+1}\) and for any \(c_{j}\in C\), there exists a path of length \(l(e_{i})+3\) from \(s\) to \(q\), whose last link has orientation \(c_{j}\), for \(1\leq i\leq n-1\)._ Proof: Consider any path \(\pi\) from \(s\) to \(e_{i}\) that corresponds to an interval in \(I(e_{i})\), and let \(p\) be the point on \(e_{i}\) where \(\pi\) ends. Since \(C\) spans the plane, there exists a two-link path from \(p\) to \(q\), and by making a turn at \(q\) in orientation \(c_{j}\) (without extending the path), we obtain a three-link path \(\pi_{p,q}\) from \(p\) to \(q\) whose last link has orientation \(c_{j}\). So, the path obtained by concatenating the paths \(\pi\) and \(\pi_{p,q}\) is as desired. **Claim 6**: _For any \(0\leq i\leq n+1\) and \(c_{j}\in C\), \(|I(e_{i},c_{j})|\leq 2i+1\) and \(|I^{+}(e_{i},c_{j})|\leq 2i+1\)._ Proof: The proof is by induction on \(i\). For \(i=0\), the claim is clearly true; \(|I(e_{0},c_{j})|=|I^{+}(e_{0},c_{j})|=1\). Assume now that the claim is true for \(i\), \(0\leq i\leq n\), that is, for any \(c_{j}\in C\), we have \(|I(e_{i},c_{j})|\leq 2i+1\) and \(|I^{+}(e_{i},c_{j})|\leq 2i+1\). We show below that it remains true for \(i+1\). Recall that \(l(e_{i})\leq l(e_{i+1})\leq l(e_{i})+2\) (Claim 2). We show that the claim remains true in each of the resulting three cases. * **Case A**: \(l(e_{i+1})=l(e_{i})\). In this case \(I(e_{i+1},c_{j})\) stores the \(+0\)-intervals on \(e_{i+1}\) with respect to \(I(e_{i},c_{j})\). Since, each interval \(a\in I(e_{i},c_{j})\) 'creates' at most one \(+0\)-interval on \(e_{i+1}\), we get that \(|I(e_{i+1},c_{j})|\leq|I(e_{i},c_{j})|\leq 2i+1\). Figure 8: All the points that can be reached by a path that passes through \(a\) and then makes a turn in some orientation \(c\neq\overline{c_{a}}\). 
Recall that \(I^{+}(e_{i+1},c_{j})\) is the set of maximal intervals on \(e_{i+1}\) formed by all paths of length \(l(e_{i+1})+1=l(e_{i})+1\), whose last link has orientation \(c_{j}\). By Claim 3, \(|I^{+}(e_{i+1},c_{j})|\leq\max\{|I(e_{i},\overline{c_{j}})|,|I^{+}(e_{i},c_{j})| \}+2\), and therefore \(|I^{+}(e_{i+1},c_{j})|\leq\max\{2i+1,2i+1\}+2=2(i+1)+1\). * **Case B:**\(l(e_{i+1})=l(e_{i})+1\). In this case, \(I(e_{i+1},c_{j})\) is the set of maximal intervals on \(e_{i+1}\) formed by all paths of length \(l(e_{i+1})=l(e_{i})+1\), whose last link has orientation \(c_{j}\). By Claim 3, \(|I(e_{i+1},c_{j})|\leq\max\{|I(e_{i},\overline{c_{j}})|,|I^{+}(e_{i},c_{j})| \}+2\), so, \(|I(e_{i+1},c_{j})|\leq\max\{2i+1,2i+1\}+2=2(i+1)+1\). Now, \(I^{+}(e_{i+1},c_{j})\) is the set of maximal intervals on \(e_{i+1}\) formed by all paths of length \(l(e_{i+1})+1=l(e_{i})+2\), whose last link has orientation \(c_{j}\). By Claim 4, \(|I^{+}(e_{i+1},c_{j})|\leq|I^{+}(e_{i},\overline{c_{j}})|+2\), and therefore \(|I^{+}(e_{i+1},c_{j})|\leq 2i+1+2=2(i+1)+1\). * **Case C:**\(l(e_{i+1})=l(e_{i})+2\). In this case, \(I(e_{i+1},c_{j})\) is the set of maximal intervals on \(e_{i+1}\) formed by all paths of length \(l(e_{i+1})=l(e_{i})+2\), whose last link has orientation \(c_{j}\). Thus, by Claim 4, \(|I(e_{i+1},c_{j})|\leq|I^{+}(e_{i},\overline{c_{j}})|+2\leq 2(i+1)+1\). Moreover, in this case, \(I^{+}(e_{i+1},c_{j})=\{e_{i+1}\}\), so \(|I^{+}(e_{i+1},c_{j})|=1\). **Running time.** We bound the running time of each of the two stages of our algorithm. Consider the \(i\)'th iteration of the main loop of Stage I. We need \(O(|I(e_{i},c_{j})|+|I^{+}(e_{i},c_{j})|)\) time to compute the \(+0\)-intervals on \(e_{i+1}\), \(O(|I(e_{i})\setminus I(e_{i},c_{j})|+|I^{+}(e_{i})\setminus I^{+}(e_{i},c_{j} )|)\) time to compute the \(+1\)-intervals, and \(O(|I(e_{i})|)\) time to compute the \(+2\)-intervals. Since we perform this calculation for each \(c_{j}\in C\), the running time of the i'th iteration is \(O(|C|\cdot\{|I(e_{i})|+|I^{+}(e_{i})|\})\). By Claim 6 we conclude that \(|I(e_{i})|=O(|C|\cdot(2i+1))\) and \(|I^{+}(e_{i})|=O(|C|\cdot(2i+1))\), for \(1\leq i\leq n+1\). Therefore, the running time of Stage I is \(\sum_{i=1}^{n+1}O(|C|\cdot|C|\cdot(2i+1))=O(|C|^{2}\cdot n^{2})\). In stage 2, we run Algorithm 1 for each \(i\) from \(n+1\) to \(1\). The running time of Algorithm 1 is \(O(|I(e_{i-1})|+|I^{+}(e_{i-1})|)\), and by Claim 6 we get \(O(|C|\cdot i)\). Therefore, the running time of Stage II is \(\sum_{i=1}^{n+1}O(|C|\cdot i)=O(|C|\cdot n^{2})\). Thus, the overall running time of the algorithm is \(O(|C|^{2}\cdot n^{2})\), as summarized: Theorem 4.1: _Given a set \(E\) of \(n\) disjoint \(C\)-oriented segments in the plane and points \(s\) and \(t\) that do not belong to any of the segments in \(E\), one can compute a minimum-link \(C\)-oriented tour of \(E\) in \(O(|C|^{2}\cdot n^{2})\) time._ ## 5 Extensions In the case that \(|C|=4\) (e.g., axis-parallel paths and segments), the specialization of our analysis shows a constant upper bound on the number of intervals on each segment; this results in overall time \(O(n)\). Also, our analysis only required that consecutive segments in \(E\) do not intersect each other; they can otherwise intersect. In ongoing and future work we consider more general polygonal regions, possibly overlapping arbitrarily. 
We also consider query versions of the problem in which we build data structures (shortest path maps) that allow link distance queries on subsequences of the input set of regions, between query points in the plane. Future work might examine problems in 3D. #### Acknowledgements M. Katz was partially supported by the US-Israel Binational Science Foundation (BSF project 2019715 / NSF CCF-2008551). J. Mitchell was partially supported by the National Science Foundation (CCF-2007275) and the US-Israel Binational Science Foundation (BSF project 2016116).
2306.05950
Categorical generalisations of quantum double models
We show that every involutive Hopf monoid in a complete and finitely cocomplete symmetric monoidal category gives rise to invariants of oriented surfaces defined in terms of ribbon graphs. For every ribbon graph this yields an object in the category, defined up to isomorphism, that depends only on the homeomorphism class of the associated surface. This object is constructed via (co)equalisers and images and equipped with a mapping class group action. It can be viewed as a categorical generalisation of the ground state of Kitaev's quantum double model or of a representation variety for a surface. We apply the construction to group objects in cartesian monoidal categories, in particular to simplicial groups as group objects in SSet and to crossed modules as group objects in Cat. The former yields a simplicial set consisting of representation varieties, the latter a groupoid whose sets of objects and morphisms are obtained from representation varieties.
Anna-Katharina Hirmer, Catherine Meusburger
2023-06-09T15:05:38Z
http://arxiv.org/abs/2306.05950v1
# Categorical generalisations of quantum double models ###### Abstract We show that every involutive Hopf monoid in a complete and finitely cocomplete symmetric monoidal category gives rise to invariants of oriented surfaces defined in terms of ribbon graphs. For every ribbon graph this yields an object in the category, defined up to isomorphism, that depends only on the homeomorphism class of the associated surface. This object is constructed via (co)equalisers and images and equipped with a mapping class group action. It can be viewed as a categorical generalisation of the ground state of Kitaev's quantum double model or of a representation variety for a surface. We apply the construction to group objects in cartesian monoidal categories, in particular to simplicial groups as group objects in SSet and to crossed modules as group objects in Cat. The former yields a simplicial set consisting of representation varieties, the latter a groupoid whose sets of objects and morphisms are obtained from representation varieties. ## 1 Introduction Constructions that assign algebraic or geometric objects with mapping class group actions to oriented surfaces are of interest in many contexts. They arise in 3d topological quantum field theories (TQFTs) of Turaev-Viro-Barrett-Westbury or Reshetikhin-Turaev type [TV, BaW, RT] and are encoded in the weaker notion of a modular functor, see [BK, Def. 5.1.1]. For a recent construction of modular functors from finite tensor categories, see Fuchs, Schweigert and Schaumann [FSSa], for a classification via factorisation homology, Brochier and Woike [BrW]. Modular functors also arise in Hamiltonian quantisation formalisms for representation varieties. The associated mapping class group actions were first discovered in the combinatorial quantisation formalism by Alekseev, Grosse, Schomerus [AGSb, AS] and Buffenoir and Roche [BR95, BR96], subsequently related to factorisation homology by Ben-Zvi, Brochier and Jordan [BBJa, BBJb] and studied by Faitg [Fa18, Fa19]. Objects with mapping class group actions also arise from correlators in conformal field theories, see the work by Fuchs, Schweigert and Stigner [FS, FSSb]. They are also present in models from condensed matter physics and topological quantum computing such as Levin-Wen models [LW] and Kitaev's quantum double model [Ki]. Due to the work of Lyubashenko [Lya, Lyb, Lyc] it is well-understood how to construct projective mapping class group actions from Hopf algebras in abelian ribbon categories. Many of these constructions are based on assignments of algebraic data to certain graphs on surfaces, and they require linear categories with duals, often also abelian, finite or semisimple. These restrictions are of course well-motivated from the context of TQFTs, in the quantisation of gauge theories or in condensed matter physics. However, it is also desirable to go beyond them. In this article we show that any involutive Hopf monoid \(H\) in a complete and finitely cocomplete symmetric monoidal category \(\mathcal{C}\) yields an invariant of oriented surfaces. We compute this invariant for examples, such as simplicial groups as Hopf monoids in SSet and crossed modules as Hopf monoids in Cat. The construction is based on the choice of a ribbon graph. It determines an object in \(\mathcal{C}\), defined up to isomorphisms, that depends only on the homeomorphism class of the surface obtained by attaching discs to the faces of the graph. 
As Hopf monoids are categorical generalisations of Hopf algebras, it can be viewed as a categorical generalisation of Kitaev's quantum double model or of representation varieties or moduli spaces of flat bundles on surfaces. More precisely, we consider for a ribbon graph the \(|E|\)-fold tensor product \(H^{\otimes E}\), where \(E\) is the edge set of the graph. We use the structure morphisms of the Hopf monoid to associate \(H\)-module structures to its vertices and \(H\)-comodule structures to its faces. This requires a choice of a marking for each vertex or face, and each marking defines a Yetter-Drinfeld module structure over \(H\). The object assigned to the graph is obtained by equalising the \(H\)-comodule structures, by coequalising the \(H\)-module structures and by combining them via a categorical image. It generalises the protected space or ground state of Kitaev's quantum double model, and we therefore call it the protected object. We then show **Theorem**.: _(Theorem 5.21) The isomorphism class of the protected object for a ribbon graph depends only on the homeomorphism class of the associated surface._ Essentially the same construction was used by Meusburger and Voss in [MV] to construct mapping class group actions from pivotal Hopf monoids in symmetric monoidal categories. These mapping class group actions are obtained from graphs with a single vertex and face and act on the associated protected object. However, it was not established in [MV] that this object is independent of the graph. By combining our results with the ones from [MV] we obtain **Theorem**.: _(Theorem 8.1) The protected object for a Hopf monoid \(H\) and a surface \(\Sigma\) of genus \(g\geq 1\) is equipped with an action of the mapping class group \(\operatorname{Map}(\Sigma)\) by automorphisms._ The remainder of this article is dedicated to the study of examples. For a finite-dimensional semisimple Hopf algebra \(H\) as a Hopf monoid in \(\operatorname{Vect}_{\mathbb{C}}\) the protected object assigned to a surface coincides with the protected space of the associated quantum double model, as defined by Kitaev and by Buerschaper et al. in [Ki, BMCA]. However, our model is also defined in the non-semisimple case. For instance, for a group algebra \(k[G]\) over a commutative ring \(k\) as a Hopf monoid in \(k\)-Mod and a surface \(\Sigma\) of genus \(g\geq 1\), the protected object is the free \(k\)-module generated by the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),G)/G\). A large class of examples of involutive Hopf monoids are group objects in cartesian monoidal categories, where the tensor product is a categorical product. In all our examples of this type, the protected object is given in terms of representation varieties. For a group \(H\) as a Hopf monoid in Set, the object assigned to a connected surface \(\Sigma\) is the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),H)/H\). For a simplicial group \(H=(H_{n})_{n\in\mathbb{N}_{0}}\) as a Hopf monoid in SSet, it is a simplicial set given by the representation varieties for the groups \(H_{n}\) and post-composition with the face maps and degeneracies. In this sense the construction can be viewed as a generalisation of representation varieties from groups to group objects. The module structures at vertices correspond to the group action and the comodule structures at faces of the graph to moment maps.
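For readers who want to see this statement in the simplest case \(\mathcal{C}=\mathrm{Set}\), the following short Python sketch enumerates \(\operatorname{Hom}(\pi_{1}(\Sigma_{g}),G)/G\) for a finite group \(G\). It is purely illustrative and not part of the construction; the choice \(G=S_{3}\) and genus \(g=1\) is arbitrary, and the commutator convention \([x,y]=xyx^{-1}y^{-1}\) in the surface relation \([b_{g}^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]=1\) is an assumption on our part.

```python
from itertools import permutations, product

def compose(p, q):                      # permutation composition: (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = [tuple(p) for p in permutations(range(3))]   # the symmetric group S_3
e = tuple(range(3))
g = 1                                            # genus of the closed surface

def relator(hol):
    """[b_g^{-1}, a_g] ... [b_1^{-1}, a_1] evaluated on hol = (a_1, b_1, ..., a_g, b_g)."""
    r = e
    for i in reversed(range(g)):
        a, b = hol[2 * i], hol[2 * i + 1]
        comm = compose(compose(inverse(b), a), compose(b, inverse(a)))
        r = compose(r, comm)
    return r

# group homomorphisms pi_1(Sigma_g) -> G correspond to 2g-tuples with trivial relator
flat = [hol for hol in product(G, repeat=2 * g) if relator(hol) == e]

def conjugate(h, hol):                  # simultaneous conjugation of all entries
    return tuple(compose(compose(h, x), inverse(h)) for x in hol)

orbits = {min(conjugate(h, hol) for h in G) for hol in flat}
print(len(flat), len(orbits))           # S_3 at genus 1: 18 homomorphisms, 8 orbits
```

At higher genus the same enumeration applies verbatim; only the relator involves more commutators.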
Our main example is the case where the underlying symmetric monoidal category is the category Cat of small categories and functors between them. Group objects in Cat are precisely crossed modules. They are given by a group homomorphism \(\partial:A\to B\) and an action \(\blacktriangleright\colon B\times A\to A\) by group automorphisms, subject to some consistency conditions. As Cat is complete and cocomplete, the associated protected objects exist, but are difficult to determine concretely. We relate them to simplicial groups via the nerve functor and its left adjoint, which yields an explicit description of the protected objects and their mapping class group actions. **Theorem**.: _(Theorem 7.19, Corollary 8.3) The protected object for a crossed module \((B,A,\blacktriangleright,\partial)\) as a group object in \(\operatorname{Cat}\) and a surface \(\Sigma\) of genus \(g\geq 1\) is a groupoid \(\mathcal{G}\) with \(\operatorname{Ob}\!\mathcal{G}=\operatorname{Hom}(\pi_{1}(\Sigma),B)/B\) and with equivalence classes of group homomorphisms \(\tau:\pi_{1}(\Sigma)\to A\rtimes B\) as morphisms. The action of the mapping class group \(\operatorname{Map}(\Sigma)\) is induced by its action on \(\operatorname{Hom}(\pi_{1}(\Sigma),A\rtimes B)/A\rtimes B\)._ The equivalence classes of morphisms are given by the equivalence relation \(\tau_{1}\circ\tau_{2}\sim\tau_{1}^{\prime}\circ\tau_{2}^{\prime}\) on the set of group homomorphisms \(\tau:E_{2g}\to A\rtimes B\), whenever the composites exist and \(\tau_{1}\), \(\tau_{1}^{\prime}\) and \(\tau_{2}^{\prime}\),\(\tau_{2}\) are conjugate. We compute the protected object and the associated mapping class group action explicitly for some simple examples of crossed modules. To our knowledge this construction is new and differs substantially from the constructions with crossed modules in higher gauge theory settings such as the work of Martins and Picken [MPa, MPb] on higher holonomies, the work [BC+a, BC+b] by Bullivant et. al. on higher lattices and topological phases, the work [KMM] by Koppen, Martins and Martin on topological phases from crossed modules of Hopf algebras and the recent work [SV] by Sozer and Virelizier on 3d homotopy quantum field theory. In those settings the structure maps of the crossed module often encode higher categorical structures in the topological data, such as homotopies between paths, or relate data in triangulations or cell decompositions. In our approach they enter the formalism as data - a specific example of a group object - but the crossed module structure is not required to encode the topology or geometry. The article is structured as follows. Section 2 introduces the algebraic background for the article. In Section 2.1 we summarise the background on Hopf monoids in symmetric monoidal categories. In Section 2.2 we discuss their (co)modules and the construction of their (co)invariants via (co)equalisers and images in complete and finitely cocomplete symmetric monoidal categories. Section 3 contains the required background on ribbon graphs and surfaces. In Section 4 we formulate the categorical counterpart of Kitaev's quantum double model for an involutive Hopf monoid \(H\) in a complete and finitely cocomplete symmetric monoidal category. This is a simple generalisation of the formulation in [BMCA] for finite-dimensional semisimple complex Hopf algebras, and an almost identical construction was used in [MV]. For each ribbon graph we consider the tensor product \(H^{\otimes E}\), where \(E\) is the edge set of the graph. 
We assign to each marked vertex an \(H\)-module structure and to each marked face an \(H\)-comodule structure on \(H^{\otimes E}\). The protected object is constructed by (co)equalising these (co)module structures and taking an image. In Section 5 we show that the protected object defined by a ribbon graph depends only on the homeomorphism class of the associated oriented surface \(\Sigma\). We first demonstrate that moving the markings for the (co)module structures and edge reversals yield isomorphic protected objects. We then consider a number of graph transformations that are sufficient to reduce every connected ribbon graph to a standard graph and prove that these induce isomorphisms of the protected object. These sections are necessarily rather technical. The reader primarily interested in the results may skip to the main theorem in Section 5.4, where we also treat some examples. In particular, we show that the protected object for a group \(H\) as a Hopf monoid in Set and a connected surface \(\Sigma\) is the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),H)/H\). We then consider group algebras \(H=k[G]\) and their duals \(k[G]^{*}\) for a commutative ring \(k\) as Hopf monoids in \(k\)-Mod. The associated protected objects are the free \(k\)-module \(\langle\operatorname{Hom}(\pi_{1}(\Sigma),G)/G\rangle_{k}\) and the set of maps \(\operatorname{Hom}(\pi_{1}(\Sigma),G)/G\to k\). Section 6 treats the example of a simplicial group \(H=(H_{n})_{n\in\mathbb{N}_{0}}\) as a Hopf monoid in SSet. In this case, the protected object is a simplicial set given by the representation varieties and by post-composition with the face maps and degeneracies of \(H\). This result is required for the construction of the protected object of a crossed module as a group object in \(\operatorname{Cat}\). The construction of the protected object for group objects in \(\operatorname{Cat}\) is more involved and treated in Section 7. We start by summarising the required background on crossed modules in Section 7.1. In Section 7.2 we discuss equalisers and coequalisers in \(\operatorname{Cat}\) and summarise how the latter can be constructed via the nerve functor \(N:\operatorname{Cat}\to\mathrm{SSet}\) and its left adjoint. In Section 7.3 we apply these results to (co)equalise the (co)module structures over Hopf monoids in \(\operatorname{Cat}\). In Section 7.4 we apply this to the (co)module structures associated with a ribbon graph and determine the protected object for the associated surface. We describe it explicitly and treat a simple example. In Section 8 we describe the mapping class group action on the protected object. By combining our results with the ones from [MV], we obtain that the mapping class group of an oriented surface \(\Sigma\) acts on the associated protected objects. We show that in the case of a simplicial group \(H=(H_{n})_{n\in\mathbb{N}_{0}}\) this action is the one induced by its action on the representation varieties \(\operatorname{Hom}(\pi_{1}(\Sigma),H_{n})/H_{n}\). For the case of a crossed module \((B,A,\operatorname{\blacktriangleright},\partial)\) as a group object in \(\operatorname{Cat}\) we obtain a mapping class group action by invertible endofunctors on the associated groupoid, which is induced by the action on the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),A\rtimes B)/A\rtimes B\).
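Since the semidirect product \(A\rtimes B\) is the group on which the crossed module statements above are built, a minimal Python sketch may help fix the data \((B,A,\blacktriangleright,\partial)\). The toy crossed module below and the multiplication convention \((a,b)(a^{\prime},b^{\prime})=(a\,(b\blacktriangleright a^{\prime}),\,bb^{\prime})\) are our own illustrative choices, not taken from the paper.

```python
from itertools import product

# Toy crossed module (B, A, action, boundary): A = Z/3 (abelian, written additively),
# B = Z/2, B acting on A by negation, and the trivial boundary map A -> B.
A, B = list(range(3)), list(range(2))
act = lambda b, a: a if b == 0 else (-a) % 3      # b |> a
boundary = lambda a: 0                            # d : A -> B

# the two crossed module axioms, checked on the toy data:
# d(b |> a) = b d(a) b^{-1}   and   d(a) |> a' = a a' a^{-1}
assert all(boundary(act(b, a)) == (b + boundary(a) - b) % 2 for b in B for a in A)
assert all(act(boundary(a), a1) == (a + a1 - a) % 3 for a in A for a1 in A)

# the semidirect product of A and B (for this toy data it is isomorphic to S_3)
G = list(product(A, B))
def mul(x, y):
    (a, b), (a1, b1) = x, y
    return ((a + act(b, a1)) % 3, (b + b1) % 2)

identity = (0, 0)
assert all(mul(identity, x) == x == mul(x, identity) for x in G)
# Hom(pi_1(Sigma), A x B)/(A x B) can now be enumerated exactly as for any
# finite group, e.g. with the sketch shown earlier in this introduction.
```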
## 2 Algebraic background ### Involutive Hopf monoids Throughout the article \(\mathcal{C}\) is a symmetric monoidal category with unit object \(e\) and braidings \(\tau_{X,Y}:X\otimes Y\to Y\otimes X\). We also suppose that \(\mathcal{C}\) is complete and finitely cocomplete. In formulas, we suppress associators and unit constraints and coherence data of monoidal functors. **Definition 2.1**.: 1. \(A\) \(\mathbf{Hopf}\) **monoid** _in_ \(\mathcal{C}\) _is an object_ \(H\) _in_ \(\mathcal{C}\) _together with morphisms_ \(m:H\otimes H\to H\)_,_ \(\eta:e\to H\)_,_ \(\Delta:H\to H\otimes H\)_,_ \(\epsilon:H\to e\) _and_ \(S:H\to H\)_, the multiplication, unit, comultiplication, counit and antipode, such that_ * _the (co)multiplication satisfies the (co)associativity and (co)unitality conditions_ \[m\circ(m\otimes 1_{H})=m\circ(1_{H}\otimes m), m\circ(\eta\otimes 1_{H})=m\circ(1_{H}\otimes\eta)=1_{H},\] (1) \[(\Delta\otimes 1_{H})\circ\Delta=(1_{H}\otimes\Delta)\circ\Delta, (\epsilon\otimes 1_{H})\circ\Delta=(1_{H}\otimes\epsilon)\circ \Delta=1_{H},\] * _comultiplication and counit are monoid morphisms_ \[\Delta\circ\eta=\eta\otimes\eta, \Delta\circ m=(m\otimes m)\circ(1_{H}\otimes\tau_{H,H}\otimes 1_{H}) \circ(\Delta\otimes\Delta)\] (2) \[\epsilon\circ\eta=1_{e}, \epsilon\circ m=\epsilon\otimes\epsilon,\] * \(S\) _satisfies the antipode condition_ \[m\circ(S\otimes 1_{H})\circ\Delta=m\circ(1_{H}\otimes S)\circ\Delta=\eta \circ\epsilon.\] (3) _It is called_ **involutive** _if_ \(S\circ S=1_{H}\)_._ 2. \(A\) **morphism of Hopf monoids** _in_ \(\mathcal{C}\) _is a morphism_ \(f:H\to H^{\prime}\) _in_ \(\mathcal{C}\) _with_ \[f\circ m=m^{\prime}\circ(f\otimes f), f\circ\eta=\eta^{\prime}, (f\otimes f)\circ\Delta=\Delta^{\prime}\circ f, \epsilon^{\prime}\circ f=\epsilon.\] (4) _We denote by \(\operatorname{Hopf}(\mathcal{C})\) the category of Hopf monoids and morphisms of Hopf monoids in \(\mathcal{C}\)._ The antipode of a Hopf monoid is unique, and it is an anti-monoid and anti-comonoid morphism \[S\circ m=m^{op}\circ(S\otimes S),\hskip 28.452756ptS\circ\eta=\eta, \hskip 28.452756pt(S\otimes S)\circ\Delta=\Delta^{op}\circ S,\hskip 28.452756pt \epsilon\circ S=\epsilon, \tag{5}\] see for instance Porst [Po, Prop. 36]. If \(H\) is involutive, the antipode satisfies the additional identities \[m^{op}\circ(S\otimes 1_{H})\circ\Delta=m^{op}\circ(1_{H}\otimes S)\circ \Delta=\eta\circ\epsilon. \tag{6}\] Every morphism of Hopf monoids \(f:H\to H^{\prime}\) satisfies \(f\circ S=S^{\prime}\circ f\). This follows as for Hopf algebras by considering the convolution monoid \(\operatorname{Hom}_{\mathcal{C}}(H,H)\) with the product \(f\star g=m\circ(f\otimes g)\circ\Delta\). In the following, we use generalised Sweedler notation for the coproduct in a Hopf monoid and write \(\Delta(h)=h_{(1)}\otimes h_{(2)}\), \((\Delta\otimes 1_{H})\circ\Delta(h)=(1_{H}\otimes\Delta)\circ\Delta(h)=h_{(1)} \otimes h_{(2)}\otimes h_{(3)}\) etc. This is analogous to Sweedler notation for a Hopf algebra. It can be viewed as a shorthand notation for a diagram that describes a morphism in a symmetric monoidal category, see [MV] for examples. We also write \(m^{(n)}:H^{\otimes(n+1)}\to H\) and \(\Delta^{(n)}:H\to H^{\otimes(n+1)}\) for \(n\)-fold products and coproducts. **Example 2.2**.: 1. _For any commutative ring_ \(k\) _a Hopf monoid in_ \(k\)_-Mod is a Hopf algebra over_ \(k\)_. In particular, for any field_ \(\mathbb{F}\) _a Hopf monoid in_ \(\operatorname{Vect}_{\mathbb{F}}\) _is a Hopf algebra over_ \(\mathbb{F}\)_._ 2. 
_For any finite group_ \(G\) _and commutative ring_ \(k\)_, the group algebra_ \(k[G]\) _and its dual_ \(k[G]^{*}\) _are Hopf monoids in_ \(k-\mathrm{Mod}\)_._ 3. _The tensor product of two Hopf monoids in_ \(\mathcal{C}\) _has a Hopf monoid structure given by the tensor product of (co)units, (co)multiplications and antipodes and the braiding morphisms. Any tensor product of Hopf monoid morphisms is a morphism of Hopf monoids._ 4. _Every Hopf monoid_ \(H=(H,m,\eta,\Delta,\epsilon,S)\) _in a symmetric monoidal category_ \(\mathcal{C}\) _defines a Hopf monoid_ \(H^{*}=(H,\Delta,\epsilon,m,\eta,S)\) _in the symmetric monoidal category_ \(\mathcal{C}^{op}\)_. This generalises the dual Hopf algebra in_ \(\operatorname{Vect}_{\mathbb{F}}\)_._ The following example yields many subexamples, which are a focus in this article. **Example 2.3**.: _Let \((\mathcal{C},\times)\) be a cartesian monoidal category with terminal object \(\bullet\). Let \(\epsilon_{X}:X\to\bullet\) be the terminal morphism and \(\Delta_{X}:X\to X\times X\) the diagonal morphism for an object \(X\). A Hopf monoid in \(\mathcal{C}\) is a_ **group object** _in \(\mathcal{C}\): an object \(H\) together with morphisms \(m:H\times H\to H\), \(\eta:\bullet\to H\) and \(I:H\to H\) such that the following diagrams commute_ (7) _A morphism of Hopf monoids is a_ **morphism of group objects**_: a morphism \(F:H\to H^{\prime}\) with_ \[F\circ m=m^{\prime}\circ(F\times F). \tag{8}\] _Note that this implies \(F\circ\eta=\eta^{\prime}\) and \(I^{\prime}\circ F=F\circ I\)._ **Example 2.4**.: 1. _A group object in the cartesian monoidal category_ \((\mathrm{Set},\times)\) _is a group._ 2. _A group object in the cartesian monoidal category_ \((\mathrm{Top},\times)\) _is a topological group._ 3. _A group object in the cartesian monoidal category_ \((\mathrm{Cat},\times)\) _of small categories and functors between them is a crossed module (cf. Definition_ 7.2_)._ 4. _Let_ \(G\) _be a group and_ \(G-\mathrm{Set}=\mathrm{Set}^{\mathrm{BG}}\) _the cartesian monoidal category of_ \(G\)_-sets and_ \(G\)_-equivariant maps. A group object in_ \(G-\mathrm{Set}\) _is a group with a_ \(G\)_-action by automorphisms._ 5. _A group object in the cartesian monoidal category_ \(\mathrm{SSet}=\mathrm{Set}^{\Delta^{\rho}}\) _of simplicial sets and simplicial maps is a simplicial group (cf. Definition_ 6.1_)._ The last two examples in Example 2.4 have counterparts for any functor category \(\mathcal{C}^{\mathcal{D}}\), where \(\mathcal{D}\) is small and \(\mathcal{C}\) symmetric monoidal. In this case the functor category \(\mathcal{C}^{\mathcal{D}}\) inherits a symmetric monoidal structure from \(\mathcal{C}\), and we have **Lemma 2.5**.: _For any symmetric monoidal category \(\mathcal{C}\) and a small category \(\mathcal{D}\) the monoidal categories \(\mathrm{Hopf}(\mathcal{C}^{\mathcal{D}})\) and \(\mathrm{Hopf}(\mathcal{C})^{\mathcal{D}}\) are symmetric monoidally equivalent._ Proof.: The equivalence is given by the functor \(R:\mathrm{Hopf}(\mathcal{C}^{\mathcal{D}})\to\mathrm{Hopf}(\mathcal{C})^{ \mathcal{D}}\) that sends a Hopf monoid \((H,m,\eta,\Delta,\epsilon,S)\) to the functor \(K:\mathcal{D}\to\mathrm{Hopf}(\mathcal{C})\) with \(K(D)=H(D)\) and the component morphisms \(m_{D}\), \(\eta_{D}\), \(\Delta_{D}\), \(\epsilon_{D}\), \(S_{D}\) for \(D\in\mathrm{Ob}(\mathcal{D})\) and with \(K(f)=H(f)\) for a morphism \(f\) in \(\mathcal{D}\). Hopf monoid morphisms in \(\mathcal{C}^{\mathcal{D}}\) are sent to themselves. 
The functor \(R\) has an obvious inverse, and both functors are symmetric monoidal. Further examples are obtained by taking the images of Hopf monoids under symmetric monoidal functors. If both of the categories are cartesian monoidal, it is sufficient that the functor preserves finite products, which holds in particular for any right adjoint functor. **Example 2.6**.: 1. _Let_ \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) _be a symmetric monoidal functor. Then for every Hopf monoid_ \(H\) _in_ \(\mathcal{C}\) _the image_ \(F(H)\) _has a canonical Hopf monoid structure._ 2. _If_ \(\mathcal{C},\mathcal{C}^{\prime}\) _are cartesian monoidal categories and_ \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) _a functor that preserves finite products, then_ \(F\) _is symmetric monoidal, and for every group object_ \(H\) _in_ \(\mathcal{C}\) _the image_ \(F(H)\) _is a group object in_ \(\mathcal{C}^{\prime}\)_._ ### (Co)modules and their (co)invariants As their definitions involve only structure maps, (co)modules over Hopf monoids in symmetric monoidal categories are defined analogously to (co)modules over Hopf algebras. The only difference is that linear maps are replaced by morphisms. **Definition 2.7**.: _Let \(H\) be a Hopf monoid in \(\mathcal{C}\)._ 1. _An_ \(H\)**-module _in_ \(\mathcal{C}\) _is an object_ \(M\) _in_ \(\mathcal{C}\) _with a morphism_ \(\rhd:H\otimes M\to M\) _satisfying_ \[\rhd\circ(m\otimes 1_{M})=\rhd\circ(1_{H}\otimes\rhd),\qquad\rhd\circ(\eta \otimes 1_{M})=1_{M}.\] (9) \(A\) **morphism of \(H\)-modules** _is a morphism_ \(f:M\to M^{\prime}\) _in_ \(\mathcal{C}\) _with_ \(\rhd^{\prime}\circ(1_{H}\otimes f)=f\circ\rhd\)_._ 2. _An_ \(H\)**-comodule _in_ \(\mathcal{C}\) _is an object_ \(M\) _in_ \(\mathcal{C}\) _with a morphism_ \(\delta:M\to H\otimes M\) _satisfying_ \[(\Delta\otimes 1_{M})\circ\delta=(1_{H}\otimes\delta)\circ\delta,\qquad( \epsilon\otimes 1_{M})\circ\delta=1_{M}.\] (10) \(A\) **morphism of \(H\)-comodules** _is a morphism_ \(f:M\to M^{\prime}\) _in_ \(\mathcal{C}\) _with_ \((1_{H}\otimes f)\circ\delta=\delta^{\prime}\circ f\)_._ There are analogous notions of right (co)modules and bi(co)modules and morphisms between them. Just as in the case of a Hopf algebra, there are also various compatibility conditions that can be imposed between modules and comodule structures. The most important one in the following is the one for Yetter-Drinfeld modules. **Definition 2.8**.: _Let \(H\) be a Hopf monoid in \(\mathcal{C}\)._ 1. \(A\) **Yetter-Drinfeld module** _over_ \(H\) _is a triple_ \((M,\rhd,\delta)\) _such that_ \((M,\rhd)\) _is an_ \(H\)_-module,_ \((M,\delta)\) _is an_ \(H\)_-comodule and_ \[\delta\circ\rhd=(m^{(2)}\otimes\rhd)\circ(1_{H^{\otimes 2}}\circ\tau_{H,H} \otimes 1_{M})\circ(1_{H^{\otimes 3}}\otimes S\otimes 1_{M})\circ(1_{H}\otimes \tau_{H^{\otimes 2},H}\otimes 1_{M})\circ(\Delta^{(2)}\otimes\delta).\] 2. \(A\) **morphism of Yetter-Drinfeld modules** _is a morphism_ \(f:M\to M^{\prime}\) _that is a module and a comodule morphism._ In Sweedler notation with the conventions \(\delta(m)=m_{(0)}\otimes m_{(1)}\) and \(\Delta(h)=h_{(1)}\otimes h_{(2)}\) the Yetter-Drinfeld module condition in Definition 2.8 reads \[(h\rhd m)_{(0)}\otimes(h\rhd m)_{(1)}=h_{(1)}m_{(0)}S(h_{(3)})\otimes(h_{(2)} \rhd m_{(1)}). \tag{11}\] Yetter-Drinfeld modules over group objects in cartesian monoidal categories are especially simple to describe. 
In this case, composing the coaction morphism \(\delta:M\to H\times M\) with the projection morphism \(\pi_{1}:H\times M\to H\) yields a morphism \(F=\pi_{1}\circ\delta:M\to H\) reminiscent of a moment map. The Yetter-Drinfeld module condition states that this morphism intertwines the \(H\)-module structure on \(M\) and the conjugation action of \(H\) on itself. **Example 2.9**.: _Let \(H\) be a group object in a cartesian monoidal category, \((M,\rhd)\) a module and \((M,\delta)\) a comodule over \(H\). Then \((M,\rhd,\delta)\) is a Yetter-Drinfeld module over \(H\) iff the morphism \(F:=\pi_{1}\circ\delta:M\to H\) satisfies_ \[F\circ\rhd=m^{(2)}\circ(1_{H}\times\tau_{H,F(M)})\circ(1_{H}\times I\times 1 _{F(M)})\circ(\Delta_{H}\times F). \tag{12}\] If the objects of \(\mathcal{C}\) are sets, condition (12) reads \(F(h\rhd m)=hF(m)h^{-1}\) for all \(h\in H\), \(m\in M\). By an abuse of notation, we sometimes write such formulas for the general case to keep notation simple. By Example 2.6 the images of Hopf monoids under symmetric monoidal functors are Hopf monoids. Analogous statements hold for their (co)modules. **Example 2.10**.: 1. _If_ \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) _is a symmetric monoidal functor and_ \(M\) _a (co)module over a Hopf monoid_ \(H\) _in_ \(\mathcal{C}\)_, then_ \(F(M)\) _is a (co)module over the Hopf monoid_ \(F(H)\)_._ 2. _Let_ \(\mathcal{C},\mathcal{C}^{\prime}\) _be cartesian monoidal categories and_ \(F:\mathcal{C}\to\mathcal{C}^{\prime}\) _a functor that preserves finite products. Then for every (co)module_ \(M\) _over a group object_ \(H\) _in_ \(\mathcal{C}\) _the image_ \(F(M)\) _is a (co)module over the group object_ \(F(H)\)_._ (Co)invariants of (co)modules cannot be generalised directly from Hopf algebras over fields to Hopf monoids in symmetric monoidal categories. To obtain generalised notions of (co)invariants, we require that the symmetric monoidal category \(\mathcal{C}\) has all equalisers and coequalisers. **Definition 2.11**.: _[_4_, Def. 2.6]_ _Let \(\mathcal{C}\) be a symmetric monoidal category that has all equalisers and coequalisers, \(H\) a Hopf monoid in \(\mathcal{C}\)._ _1. 
The_ **invariants** _of an \(H\)-module \((M,\rhd)\) are the coequaliser \((M^{H},\pi)\) of \(\rhd\) and \(\epsilon\otimes 1_{M}\):_ \[H\otimes M\ \overset{\rhd}{\underset{\epsilon\otimes 1_{M}}{\rightrightarrows}}\ M\ \xrightarrow{\ \pi\ }\ M^{H}.\] _2. The_ **coinvariants** _of an \(H\)-comodule \((M,\delta)\) are the equaliser \((M^{coH},\iota)\) of \(\delta\) and \(\eta\otimes 1_{M}\):_ \[M^{coH}\ \xrightarrow{\ \iota\ }\ M\ \overset{\delta}{\underset{\eta\otimes 1_{M}}{\rightrightarrows}}\ H\otimes M.\] As expected, \(H\)-(co)module morphisms induce morphisms between the (co)invariants. This follows directly from the universal properties of the (co)equalisers. **Lemma 2.12**.: _[MV, Lemma 2.7] Suppose that \(\mathcal{C}\) has all equalisers and coequalisers and \(H\) is a Hopf monoid in \(\mathcal{C}\). Then for every \(H\)-module morphism \(f:(M,\triangleright)\to(M^{\prime},\triangleright^{\prime})\) there is a unique morphism \(f^{H}:M^{H}\to M^{\prime H}\) with \(f^{H}\circ\pi=\pi^{\prime}\circ f\). Likewise, for every \(H\)-comodule morphism \(f:(M,\delta)\to(M^{\prime},\delta^{\prime})\) there is a unique morphism \(f^{coH}:M^{coH}\to M^{\prime coH}\) with \(\iota^{\prime}\circ f^{coH}=f\circ\iota\)._ Note that all definitions in this section are symmetric with respect to a Hopf monoid \(H\) in \(\mathcal{C}\) and the dual Hopf monoid \(H^{*}\) in \(\mathcal{C}^{op}\) from Example 2.2. Modules and comodules over \(H\) in \(\mathcal{C}\) correspond to comodules and modules over \(H^{*}\) in \(\mathcal{C}^{op}\), respectively, and the same holds for their (co)invariants. It is also directly apparent from the formula in Definition 2.8 that Yetter-Drinfeld modules over \(H\) correspond to Yetter-Drinfeld modules over \(H^{*}\).
For objects in a symmetric monoidal category \(\mathcal{C}\) that are both modules and comodules over certain Hopf monoids in \(\mathcal{C}\), we combine the notion of invariants and coinvariants and impose both conditions. This requires that the category \(\mathcal{C}\) is equipped with _images_. We work with a general non-abelian notion of image, see Mitchell [Mi, Sec. I.10] and Pareigis [Pa, Sec. 1.13]. There is an analogous notion of a _coimage_, which is the image of the corresponding morphism in \(\mathcal{C}^{op}\), see [Mi, Sec. I.10]. An _image_ of a morphism \(f:C\to C^{\prime}\) in \(\mathcal{C}\) is an object \(\operatorname{im}(f)\) together with a pair \((P,I)\) of a monomorphism \(I:\operatorname{im}(f)\to C^{\prime}\) and a morphism \(P:C\to\operatorname{im}(f)\) with \(I\circ P=f\) and the following universal property: for any pair \((Q,J)\) of a monomorphism \(J:X\to C^{\prime}\) and a morphism \(Q:C\to X\) with \(J\circ Q=f\) there is a unique morphism \(v:\operatorname{im}(f)\to X\) with \(I=J\circ v\). Images are unique up to unique isomorphism. If \(\mathcal{C}\) has all equalisers, then \(P:C\to\operatorname{im}(f)\) is an epimorphism [Mi, Prop. 10.1, Sec. I.10]. In an abelian category \(\mathcal{C}\) this notion of image coincides with the usual definition of an image as the kernel of the cokernel [Pa, Lemma 3, Sec. 4.2]. If \(\mathcal{C}\) is complete, then all images exist, as any complete category has intersections [Mi, Prop. 2.3, Sec. II.2]. This implies the existence of all images [Mi, Sec. I.10]. **Definition 2.13**.: _[MV, Def. 2.8]3 Let \(\mathcal{C}\) be a complete and finitely cocomplete symmetric monoidal category and \(H,K\) Hopf monoids in \(\mathcal{C}\). The_ **biinvariants** _of an \(H\)-module and \(K\)-comodule \(M\) are the image of the morphism \(\pi\circ\iota:M^{coK}\to M^{H}\)_ Footnote 3: Def. 2.8 in [MV] considers only the case \(H=K\), as that is the only one required there. \[M^{coK}\xrightarrow{\ P\ }M_{inv}\xrightarrow{\ I\ }M^{H},\qquad\qquad I\circ P=\pi\circ\iota. \tag{13}\] Requiring \(\mathcal{C}\) to be complete and finitely cocomplete ensures the existence of invariants, coinvariants and biinvariants. Examples of such categories are Set, Top, Grp, Vect\({}_{\mathbb{F}}\), Cat, \(k-\)Mod and the category \(\operatorname{Ch}_{k-\mathrm{Mod}}\) of chain complexes of \(k\)-modules. For a small category \(\mathcal{D}\) and a complete and finitely cocomplete category \(\mathcal{C}\) the category \(\mathcal{C}^{\mathcal{D}}\) is also complete and finitely cocomplete, see for instance Pareigis [Pa, Th. 1, Sec. 2.7]. Hence, \(G-\)Set and SSet also satisfy the requirement. As discussed in [MV, Rem. 2.9] one could also consider the _coimage_ of the morphism \(\pi\circ\iota\) instead of its _image_. This amounts to passing from modules and comodules over the Hopf monoids \(H,K\) in \(\mathcal{C}\) to comodules and modules over the Hopf monoids \(H^{*}\), \(K^{*}\) in \(\mathcal{C}^{op}\) from Example 2.2. We illustrate (co)invariants and biinvariants with a few simple examples. (Co)invariants of (co)modules over Hopf monoids in SSet and Cat and the associated biinvariants for Yetter-Drinfeld modules are treated in Sections 6 and 7.3, respectively. **Example 2.14**.: 1. 
_A Hopf monoid_ \(H\) _in_ \(\mathcal{C}=\mathrm{Set}\) _(in_ \(\mathcal{C}=\mathrm{Top}\)_) is a (topological) group_ \(H\) _and_ * _an_ \(H\)_-module is a (continuous)_ \(H\)_-Set_ \(\rhd:H\times M\to M\)_,_ * \(M^{H}=\{H\rhd m\mid m\in M\}\) _with_ \(\pi:M\to M^{H}\)_,_ \(m\mapsto H\rhd m\) _(and the quotient topology),_ * _an_ \(H\)_-comodule is given by a (continuous) map_ \(F:M\to H\)_,_ * \(M^{coH}=F^{-1}(1)\) _with the inclusion_ \(\iota:F^{-1}(1)\to M\) _(and the subspace topology),_ * \(M_{inv}=\pi(F^{-1}(1))=\{H\rhd m\mid F(m)=1\}\) _(with the final topology induced by_ \(\pi\)_)._ _An_ \(H\)_-module and_ \(H\)_-comodule_ \((M,\rhd,F)\) _is a Yetter-Drinfeld module iff_ \(F(h\rhd m)=hF(m)h^{-1}\) _for all_ \(m\in M\)_,_ \(h\in H\)_._ 2. _Let_ \(G\) _be a group and_ \(H\) _a group with a_ \(G\)_-action by automorphisms, viewed as a Hopf monoid in_ \(G-\mathrm{Set}=\mathrm{Set}^{BG}\)_. Then_ \(H\)_-modules are_ \(H\rtimes G\)_-sets,_ \(H\)_-comodules are_ \(G\)_-sets_ \(M\) _with_ \(G\)_-equivariant maps_ \(F:M\to H\) _and_ * \(M^{H}=\{H\rhd m\mid m\in M\}\) _is the orbit space for_ \(H\) _with the induced_ \(G\)_-action and_ \(G\)_-equivariant canonical surjection_ \(\pi:M\to M^{H}\)_,_ * \(M^{coH}=F^{-1}(1)\) _with the induced_ \(G\)_-action and_ \(G\)_-equivariant inclusion_ \(\iota:F^{-1}(1)\to M\)_,_ * \(M_{inv}=\pi(F^{-1}(1))\) _with the induced_ \(G\)_-action._ 3. _For a Hopf algebra_ \(H\) _over a commutative ring_ \(k\) _as a Hopf monoid in_ \(k\)_-Mod,_ \(H\)_-(co)modules and Yetter-Drinfeld modules are (co)modules and Yetter-Drinfeld modules over_ \(H\) _in the usual sense. Their (co)invariants and biinvariants are_ * \(M^{H}=M/\langle\{h\rhd m-\epsilon(h)m\mid h\in H,m\in M\}\rangle\)_,_ * \(M^{coH}=\{m\in M\mid\delta(m)=1\otimes m\}\)_,_ * \(M_{inv}=\pi(M^{coH})\)_._ While the coinvariants in Example 2.14, 3. coincide with the usual coinvariants for comodules over a Hopf algebra, the invariants form a quotient rather than a subset. This distinction is irrelevant in the case of semisimple Hopf algebras, but not in general. As our definition is symmetric with respect to Hopf monoids in a symmetric monoidal category \(\mathcal{C}\) and the dual Hopf monoids in \(\mathcal{C}^{op}\), it is more natural in our setting. The following example illustrates this. 
**Example 2.15**.: _For a finite group \(G\) and a commutative ring \(k\) the group algebra \(k[G]\) and its dual \(k[G]^{*}\) are Hopf monoids in \(k\)-Mod._ _For the group algebra \(H=k[G]\)_ * _the invariants of a_ \(H\)_-module_ \((M,\rhd)\) _are_ \(M^{H}=M/\langle\{g\rhd m-m\mid m\in M,g\in G\}\rangle\)_,_ * _comodules are_ \(G\)_-graded_ \(k\)_-modules_ \(M=\oplus_{g\in G}M_{g}\) _with_ \(\delta(m)=g\otimes m\) _for all_ \(m\in M_{g}\)_,_ * _their coinvariants are_ \(M^{coH}=M_{1}\)_._ _A \(k[G]\)-module and comodule \((M,\rhd,\delta)\) is a Yetter-Drinfeld module iff \(g\rhd M_{h}=M_{ghg^{-1}}\) for all \(g,h\in G\), and in this case \(M_{inv}\cong H_{0}(G,M_{1})\)._ _For the dual Hopf monoid \(H=k[G]^{*}\)_ * _modules are_ \(G\)_-graded_ \(k\)_-modules_ \(M=\oplus_{g\in G}M_{g}\) _with_ \(\delta_{g}\rhd m=\delta_{g}(h)m\) _for_ \(m\in M_{h}\)_,_ * _their invariants are_ \(M^{H}=M/(\oplus_{g\in G,g\neq 1}M_{g})\cong M_{1}\)_,_ * _comodules are_ \(k[G]\)_-right modules_ \((M,\lhd)\) _with_ \(\delta(m)=\sum_{g\in G}\delta_{g}\otimes(m\lhd g)\)_,_ * _their coinvariants are_ \(M^{coH}=\{m\in M\mid m\lhd g=m\,\forall g\in G\}\)_._ _A \(k[G]^{*}\)-module and comodule \((M,\rhd,\delta)\) is a Yetter-Drinfeld module iff \(M_{h}\lhd g=M_{ghg^{-1}}\) for all \(g,h\in G\), and in this case \(M_{inv}\cong H^{0}(G,M_{1})\)._ By Lemma 2.12 morphisms of (co)modules over a Hopf monoid \(H\) induce morphisms between their (co)invariants. The question if morphisms of both, modules and comodules, induce morphisms between the associated biinvariants is more subtle in general. It is shown in [MV, Lemma 2.10] that this always holds for _isomorphisms_. As a direct generalisation we have in the notation of (13) **Lemma 2.16**.: _Let \(\mathcal{C}\) be complete and finitely cocomplete, \(H\), \(K\) Hopf monoids in \(\mathcal{C}\) and \(\Phi:M\to M^{\prime}\) an isomorphism of \(H\)-modules and \(K\)-comodules. There is a unique morphism \(\Phi_{inv}:M_{inv}\to M^{\prime}_{inv}\) with \(\pi^{\prime}\circ\Phi\circ\iota=I^{\prime}\circ\Phi_{inv}\circ P\), and \(\Phi_{inv}\) is an isomorphism._ ## 3 Ribbon graphs and surfaces In this section we summarise the background on _ribbon graphs_, also called _fat graphs_ or _embedded graphs_, for more details we refer to the textbooks of Lando et. al. [L+] and Ellis-Monaghan and Moffatt [EM]. Throughout this article, all graphs are _directed_ graphs with a finite number of vertices and edges. In contrast to [MV] we do not require that the graphs are connected and allow **isolated vertices** with no incident edges. **Definition 3.1**.: \(A\) **ribbon graph** _is a graph with a cyclic ordering of the edge ends at each vertex._ The cyclic ordering of edge ends at the vertices of a ribbon graph allows one to thicken its edges to strips or ribbons and defines the faces of the ribbon graph. One says that a path in a ribbon graph **turns maximally left at a vertex** if it enters the vertex along an edge end and leaves it along an edge end that comes directly before it with respect to the cyclic ordering. A **face** of a ribbon graph is defined as a cyclic equivalence class of closed paths that turn maximally left at each vertex and traverse each edge at most once in each direction. Each isolated vertex is also viewed as a face, and such a face is called an **isolated face**. In the following we denote by \(V,E,F\) the sets of vertices, edges and faces of a ribbon graph and by \(s(\alpha),t(\alpha)\) the starting and target vertex of an edge \(\alpha\). 
We say that two edge ends incident at a vertex \(v\in V\) are **neighbours** or **neighbouring** if one of them comes directly before or after the other with respect to the cyclic ordering at \(v\). An edge \(\alpha\) with \(s(\alpha)=t(\alpha)\) is called a **loop**. A loop at \(v\) whose starting and target end are neighbours is called an **isolated loop**. When drawing a ribbon graph we take the cyclic ordering of edge ends at vertices as the one in the drawing. Ribbon graphs are directly related to embedded graphs on oriented surfaces. Every graph \(\Gamma\) embedded into an oriented surface \(\Sigma\) inherits a cyclic ordering of the edge ends at each vertex and hence a ribbon graph structure. Attaching discs to the faces of the ribbon graph \(\Gamma\) yields an oriented surface \(\Sigma_{\Gamma}\) such that the connected components of \(\Sigma_{\Gamma}\setminus\Gamma\) are discs and in bijection with faces of \(\Gamma\), see Figure 1. If \(\Gamma\) is embedded into an oriented surface \(\Sigma\), the surface \(\Sigma_{\Gamma}\) is homeomorphic to \(\Sigma\) iff each connected component of \(\Sigma\setminus\Gamma\) is a disc. In this case, we call \(\Gamma\)**properly embedded** in \(\Sigma\). Note that this implies a bijection between connected components of \(\Gamma\) and of \(\Sigma\), and connected components of \(\Sigma\) containing an isolated vertex are spheres. The genus \(g\) of a connected component of \(\Sigma\) is then determined by the Euler characteristic \(2-2g=|V|-|E|+|F|\), where \(|V|,|E|,|F|\) are the number of vertices, edges and faces of the associated connected component of \(\Gamma\). Note that each ribbon graph or embedded graph has a Poincare dual obtained by replacing each vertex (face) with a face (vertex) and each edge with a dual edge. This transforms the paths that characterise faces into paths that go counterclockwise around a vertex and vice versa. Edge ends correspond to edge sides of the dual graph and their cyclic ordering at a vertex to the cyclic ordering of the edge sides in the dual face. In the following we sometimes require a _linear_ ordering of the edge ends at a vertex or of the edge sides in a face. This is achieved by inserting a marking, the _cilium_, that separates the edge ends or edge sides of minimal and maximal order, see for instance Figure 2, Definition 3.3 or Example 4.4. For faces this corresponds to the choice of a starting vertex for the associated cyclic equivalence class of paths. **Definition 3.2**.: 1. \(A\) **ciliated vertex** _in a ribbon graph is a vertex with a choice of linear ordering of the incident edge ends that is compatible with their cyclic ordering._ 2. \(A\) **ciliated face** _in a ribbon graph is a closed path that turns maximally left at each vertex, including the starting vertex, and traverses each edge at most once in each direction._ \(A\) **ciliated ribbon graph** _is a ribbon graph in which each face and vertex is assigned a cilium. Isolated vertices and faces are trivially ciliated._ For a closed surface \(\Sigma\) of genus \(g\geq 0\) we often work with a ciliated ribbon graph with a single vertex and a single face that is given by a set of generators of the fundamental group \[\pi_{1}(\Sigma)=\langle\alpha_{1},\beta_{1},\ldots,\alpha_{g},\beta_{g}\mid[ \beta_{g}^{-1},\alpha_{g}]\cdots[\beta_{1}^{-1},\alpha_{1}]=1\rangle. 
\tag{14}\] **Definition 3.3**.: _The_ **standard graph** _of an oriented surface \(\Sigma\) of genus \(g\geq 1\) is the graph_ (15) _with the face \(f=[\beta_{g}^{-1},\alpha_{g}]\cdots[\beta_{1}^{-1},\alpha_{1}]\) and the ordering of edge ends at \(v\) given by \(s(\alpha_{1})<s(\beta_{1})<t(\alpha_{1})<t(\beta_{1})<\ldots<s(\alpha_{g})<s(\beta_{g})<t(\alpha_{g})<t(\beta_{g})\). In particular, the standard graph for \(S^{2}\) consists of a single isolated vertex and the associated isolated face._ In the following we use certain graph transformations to relate properly embedded ribbon graphs in a connected surface \(\Sigma\) to its standard graph. **Definition 3.4**.: _Let \(\Gamma\) be a ribbon graph with edge set \(E\) and vertex set \(V\)._ _1. The_ **edge reversal** _reverses the orientation of an edge_ \(\beta\in E\)_._ _2. The_ **contraction** _of an edge_ \(\alpha\in E\) _that is not a loop removes_ \(\alpha\in E\) _and fuses the vertices_ \(s(\alpha)\) _and_ \(t(\alpha)\)_._ _3. The_ **edge slide** _slides an end of_ \(\beta\in E\) _that is a neighbour of an end of_ \(\alpha\in E\) _along_ \(\alpha\)_._ _4. The_ **loop deletion** _removes an isolated loop_ \(\beta\in E\) _from_ \(\Gamma\)_._

Figure 1: Attaching a disc to the face \(f\) yields a torus.

_In all cases except 2. the resulting ribbon graph inherits all cilia from \(\Gamma\). In 2. one erases either the cilium of \(t(\alpha)\) or of \(s(\alpha)\) and speaks of contracting \(\alpha\) towards \(t(\alpha)\) and \(s(\alpha)\), respectively._ These graph transformations are illustrated in Figure 2. Note that they are not independent. Contracting an edge \(\alpha\) towards \(t(\alpha)\) is the same as first sliding some edge ends along \(\alpha\) and then contracting \(\alpha\) towards \(t(\alpha)\). Contracting an edge \(\alpha\) towards \(t(\alpha)\) is also the same as first reversing \(\alpha\), then contracting \(\alpha\) towards \(s(\alpha)\) and then reversing \(\alpha\). By reversing \(\alpha\) and \(\beta\) before and after a slide, one can reduce all edge slides to the ones that slide the target end of \(\beta\) along the left of \(\alpha\). There are of course other possible graph transformations such as deleting edges, which is dual to edge contractions. However, the graph transformations in Definition 3.4 are sufficient to transform any connected ribbon graph into a standard graph. This is well-known and appears implicitly in many publications. We summarise the argument for the convenience of the reader. **Proposition 3.5**.: _Every connected ribbon graph can be transformed into the standard graph (15) by edge reversals, edge slides, edge contractions and loop deletions._ Proof.: Selecting a maximal tree in \(\Gamma\) and contracting all edges in the tree transforms \(\Gamma\) into a graph \(\Gamma^{\prime}\) with a single vertex. By applying edge slides one can transform \(\Gamma^{\prime}\) into a graph \(\Gamma^{\prime\prime}\) that coincides with (15) up to edge orientation and up to the presence of a number of isolated loops between the cilium and the starting end of \(\alpha_{1}\). This follows from an analogous statement for chord diagrams, which correspond to ribbon graphs with a single vertex, see for instance Chmutov, Duzhin and Mostovoy [CDM, Sec. 4.8.6]. Deleting the isolated loops and reversing edges in \(\Gamma^{\prime\prime}\) then yields the standard graph (15). 
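To make the face combinatorics of this section concrete, here is a small Python sketch; it is purely illustrative, and the encoding of edge ends as pairs ("darts") together with the convention that rotations are listed counterclockwise are our own choices. It traces faces by the "turn maximally left" rule and recovers the genus from \(2-2g=|V|-|E|+|F|\), reproducing the single face of the standard graph for \(g=1\).

```python
def faces(rotation, edges):
    """rotation: {vertex: [darts in ccw cyclic order]}, a dart being (edge, 0) for the
    starting end and (edge, 1) for the target end; edges: {edge: (start, target)}.
    Returns the faces, each as the list of darts along which the path leaves a vertex."""
    vertex_of = {d: v for v, darts in rotation.items() for d in darts}

    def other(dart):                       # the opposite end of the same edge
        e, side = dart
        return (e, 1 - side)

    def prev(dart):                        # the end directly before `dart` in the
        cyc = rotation[vertex_of[dart]]    # cyclic ordering at its vertex
        return cyc[cyc.index(dart) - 1]

    unused = {(e, s) for e in edges for s in (0, 1)}
    result = []
    while unused:
        start = dart = min(unused)
        face = []
        while True:
            face.append(dart)
            unused.discard(dart)
            dart = prev(other(dart))       # traverse the edge, then turn maximally left
            if dart == start:
                break
        result.append(face)
    return result

# standard graph of genus 1: one vertex, loops a, b with ends ordered s(a)<s(b)<t(a)<t(b)
rot = {"v": [("a", 0), ("b", 0), ("a", 1), ("b", 1)]}
edg = {"a": ("v", "v"), "b": ("v", "v")}
F = faces(rot, edg)
chi = len(rot) - len(edg) + len(F)         # Euler characteristic |V| - |E| + |F|
print(len(F), "face(s), genus", (2 - chi) // 2)   # -> 1 face(s), genus 1
```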
Figure 2: Examples of graph transformations

## 4 (Co)modules from Hopf monoids and ribbon graphs

In this section we use involutive Hopf monoids in symmetric monoidal categories to assign (co)modules over Hopf monoids to ciliated ribbon graphs. In Section 5 we then show that their biinvariants are topological invariants: their isomorphism classes depend only on the genus of the surface obtained by attaching discs to the faces of the graph. In Sections 6 and 7 we determine these biinvariants for simplicial groups as Hopf monoids in SSet and for crossed modules as group objects in Cat. The construction generalises Kitaev's quantum double model and the toric code from [Ki], which was first formulated for the group algebra of a finite group over \(\mathbb{C}\) and then generalised by Buerschaper et al. in [BMCA] to finite-dimensional semisimple \(C^{*}\)-Hopf algebras. A very similar construction to the one in this article is used in [MV] to obtain mapping class group actions from pivotal Hopf monoids in symmetric monoidal categories. The work [MV] considers the biinvariants of a Yetter-Drinfeld module structure assigned to the standard graph (15), but it does not establish that the biinvariants are graph-independent. The construction of the (co)module structures from an involutive Hopf monoid and a ciliated ribbon graph in this section is directly analogous to the one in [MV], which in turn is a straightforward generalisation of [Ki, BMCA]. The only difference is that \(H^{*}\)-modules in [BMCA] are replaced by \(H\)-comodules and \(D(H)\)-modules by Yetter-Drinfeld modules over \(H\). What differs substantially from [Ki, BMCA] are the notions of (co)invariants, biinvariants and the construction of the topological invariant. The works in [Ki, BMCA] rely on the normalised Haar integral of a finite-dimensional semisimple complex Hopf algebra, which is not available in our setting. Our construction is more general, as the only assumptions are that the underlying symmetric monoidal category is complete and finitely cocomplete and the Hopf monoid involutive. The article [MV] also allows pivotal Hopf monoids. The involutive Hopf monoids in this article are examples of pivotal Hopf monoids, with their unit as pivotal structure.

Let \(H\) be an involutive Hopf monoid in a complete and finitely cocomplete symmetric monoidal category \(\mathcal{C}\) and \(\Gamma\) a ciliated ribbon graph with vertex set \(V\), edge set \(E\) and face set \(F\). We consider the \(|E|\)-fold tensor product of \(H\) with itself, together with an assignment of the copies of \(H\) in this tensor product to the edges of \(\Gamma\), which we emphasise by writing \(H^{\otimes E}\). If \(E=\emptyset\), we set \(H^{\otimes E}=e\). The object \(H^{\otimes E}\) can be viewed as the counterpart of the Hilbert space of Kitaev's quantum double model in [Ki, BMCA]. We assign to each edge \(\alpha\in E\) two \(H\)-module structures \(\rhd_{\alpha\pm}:H\otimes H^{\otimes E}\to H^{\otimes E}\) and \(H\)-comodule structures \(\delta_{\alpha\pm}:H^{\otimes E}\to H\otimes H^{\otimes E}\). The \(H\)-module structures \(\rhd_{\alpha+}\) and \(\rhd_{\alpha-}\) are assigned to the target and starting end of \(\alpha\) and the \(H\)-comodule structures to its left and right side, respectively. They are induced by the standard \(H\)-(co)module structures on \(H\) via left (co)multiplication. This requires some notation. 
Given a morphism \(f:H\to K\) in \(\mathcal{C}\) and an edge \(\alpha\in E\) we write \(f_{\alpha}\) for the morphism that applies \(f\) to the copy of \(H\) in \(H^{\otimes E}\) that belongs to \(\alpha\) and the identity morphism to the other copies. We write \(\tau_{\alpha}:H^{\otimes E}\to H^{\otimes E}\) or \(\tau_{\alpha}:H\otimes H^{\otimes E}\to H\otimes H^{\otimes E}\) for the composite of braidings that moves the copy of \(H\) for \(\alpha\) to the left. We denote by \(m_{\alpha}:H\otimes H^{\otimes E}\to H^{\otimes E}\) the morphism that moves the first copy of \(H\) to the left of the one for \(\alpha\) and then applies \(m\) to them. **Definition 4.1**.: _The \(H\)-module structures \(\rhd_{\alpha\pm}:H\otimes H^{\otimes E}\to H^{\otimes E}\) and \(H\)-comodule structures \(\delta_{\alpha\pm}:H^{\otimes E}\to H\otimes H^{\otimes E}\) for an edge \(\alpha\in E\) are_ \[\rhd_{\alpha+}:=m_{\alpha},\quad\rhd_{\alpha-}:=S_{\alpha}\circ\rhd_{\alpha+} \circ(1_{H}\otimes S_{\alpha}),\quad\delta_{\alpha+}:=\tau_{\alpha}\circ \Delta_{\alpha},\quad\delta_{\alpha-}:=(1_{H}\otimes S_{\alpha})\circ\delta_ {\alpha+}\circ S_{\alpha}.\] By definition, the (co)module structures assigned to different edges of a graph commute, since they (co)act on different copies of \(H\) in the tensor product \(H^{\otimes E}\). A direct computation using (1) and (5) shows that the two \(H\)-(co)module structures assigned to a given edge commute as well. The proof is directly analogous to the ones for Hopf algebras in [BMCA]. **Lemma 4.2**.: _[MV, Lemma 5.2, 2.] For any edge \(\alpha\in E\) the \(H\)-module structures \(\rhd_{\alpha\pm}\) and the \(H\)-comodule structures \(\delta_{\alpha\pm}\) commute:_ \[\rhd_{\alpha-}\circ(1_{H}\otimes\rhd_{\alpha+}) =\rhd_{\alpha+}\circ(1_{H}\otimes\rhd_{\alpha-})\circ(\tau_{H,H} \otimes 1_{H^{\otimes E}})\text{,}\] \[(1_{H}\otimes\delta_{\alpha-})\circ\delta_{\alpha+} =(\tau_{H,H}\otimes 1_{H^{\otimes E}})\circ(1_{H}\otimes\delta_{ \alpha+})\circ\delta_{\alpha-}\text{.}\] The (co)module structures from Definition 4.1 define an \(H\)-module structure on \(H^{\otimes E}\) for each ciliated vertex \(v\) and an \(H\)-comodule structure on \(H^{\otimes E}\) for each ciliated face \(f\) of \(\Gamma\). The former applies the comultiplication to \(H\), distributes the resulting copies of \(H\) to the edge ends at \(v\) according to their ordering and acts on them with \(\rhd_{\alpha\pm}\) according to their orientation. Dually, the coaction applies the \(H\)-coaction \(\delta_{\alpha\pm}\) to each edge \(\alpha\) in \(f\), depending on its orientation relative to \(f\), and multiplies the resulting copies of \(H\) according to the order of the edge sides in \(f\). **Definition 4.3**.: _[MV, Def. 5.3]_ 1. _The_ \(H\)_-module structure_ \(\rhd_{v}:H\otimes H^{\otimes E}\to H^{\otimes E}\) _for a ciliated vertex_ \(v\) _with incident edge ends_ \(\alpha_{1}<\alpha_{2}<\ldots<\alpha_{n}\) _is_ \[\rhd_{v}=\rhd_{\alpha_{1}}\circ(1_{H}\otimes\rhd_{\alpha_{2}})\circ\ldots \circ(1_{H^{\otimes(n-1)}}\otimes\rhd_{\alpha_{n}})\circ(\Delta^{(n-1)} \otimes 1_{H^{\otimes E}})\text{,}\] (16) _where_ \(\rhd_{\alpha}=\rhd_{e(\alpha)+}\) _if_ \(\alpha\) _is incoming,_ \(\rhd_{\alpha}=\rhd_{e(\alpha)-}\) _if_ \(\alpha\) _is outgoing and_ \(e(\alpha)\) _is the edge of_ \(\alpha\)_._ 2. 
_The_ \(H\)_-comodule structure_ \(\delta_{f}:H^{\otimes E}\to H\otimes H^{\otimes E}\) _for a ciliated face_ \(f\) _that traverses the edges_ \(\alpha_{n},\alpha_{n-1},\ldots,\alpha_{1}\) _in this order is_ \[\delta_{f}=(m^{(n-1)}\otimes 1_{H^{\otimes E}})\circ(1_{H^{\otimes(n-1)}} \otimes\delta_{\alpha_{r}})\circ\ldots\circ(1_{H}\otimes\delta_{\alpha_{2}}) \circ\delta_{\alpha_{1}}\text{,}\] (17) _where_ \(\delta_{\alpha}=\delta_{e(\alpha)+}\) _if_ \(\alpha\) _is traversed with,_ \(\delta_{\alpha}=\delta_{e(\alpha)-}\) _if_ \(\alpha\) _is traversed against its orientation and_ \(e(\alpha)\) _is the edge of_ \(\alpha\)_._ _To an isolated vertex and face we assign the (co)module structures \(\rhd_{v}=\epsilon\otimes 1_{H^{\otimes E}}\) and \(\delta_{f}=\eta\otimes 1_{H^{\otimes E}}\)._ To avoid heavy notation we use Sweedler notation and describe these (co)module structures by labelling edges of a graph with letters representing the associated copies of \(H\). **Example 4.4**.: _The \(H\)-module structure \(\rhd_{v}\) for the ciliated vertex \(v\) with incident edge ends \(t(a)<s(b)<t(b)<t(c)<s(d)\) and the \(H\)-comodule structure \(\delta_{f}\) for the ciliated face \(f=e\circ e^{-1}\circ d\circ c^{-1}\circ b\circ a\) are_ \[h\rhd_{v}\ (a\otimes b\otimes c\otimes d) =h_{(1)}a\otimes h_{(3)}bS(h_{(2)})\otimes h_{(4)}c\otimes dS(h_ {(5)})\text{,}\] \[\delta_{f}\left(a\otimes b\otimes c\otimes d\otimes e\right) =e_{(1)}S(e_{(3)})d_{(1)}S(c_{(2)})b_{(1)}a_{(1)}\otimes a_{(2)} \otimes b_{(2)}\otimes c_{(1)}\otimes d_{(2)}\otimes e_{(2)}\text{.}\] The interaction of the \(H\)-module and \(H\)-comodule structures assigned to ciliated vertices and faces of the graph is investigated in [Ki, BMCA, MV]. They are local in the sense that the \(H\)-(co)module structure for a vertex (face) affects only those copies of \(H\) that belong to their incident edges. As the action \(\rhd_{\alpha+}\) for an edge \(\alpha\in E\) acts by left- and \(\rhd_{\alpha-}\) by right-multiplication, the \(H\)-module structures for different vertices commute. The same holds for the \(H\)-comodule structures at different faces. Moreover, \(H\)-module structures commute with \(H\)-comodule structures unless their cilia share a vertex or a face. The \(H\)-module and \(H\)-comodule structure for each cilium define a Yetter-Drinfeld module structure. **Lemma 4.5**.: _[_MV_, Lemma 5.5]___ 1. _The_ \(H\)_-left module structures for distinct vertices_ \(v\neq v^{\prime}\in V\) _and the_ \(H\)_-left comodule structures for distinct faces_ \(f\neq f^{\prime}\in F\) _commute for all choices of cilia:_ \[\rhd_{v^{\prime}}\circ(1_{H}\otimes\rhd_{v}) =\rhd_{v}\circ(1_{H}\otimes\rhd_{v^{\prime}})\circ(\tau_{H,H} \otimes 1_{H^{\otimes E}}),\] (18) \[(1_{H}\otimes\delta_{f^{\prime}})\circ\delta_{f} =(\tau_{H,H}\otimes 1_{H^{\otimes E}})\circ(1_{H}\otimes\delta_{f} )\circ\delta_{f^{\prime}}.\] (19) 2. _If two cilia are at distinct vertices and distinct faces, the_ \(H\)_-module structure for one of them commutes with the_ \(H\)_-comodule structure for the other:_ \[\delta_{f}\circ\rhd_{v}=(1_{H}\otimes\rhd_{v})\circ(\tau_{H,H}\otimes 1_{H^{ \otimes E}})\circ(1_{H}\otimes\delta_{f}).\] (20) 3. _If_ \(v\in V\) _and_ \(f\in F\) _share a cilium, then_ \((H^{\otimes E},\rhd_{v},\delta_{f})\) _is a Yetter-Drinfeld module over_ \(H\)_._ **Example 4.6**.: _Let \(H\) be an involutive Hopf monoid in \(\mathcal{C}\) and \(\Gamma\) the standard graph (15) on a surface \(\Sigma\) of genus \(g\geq 1\). 
Then the associated Yetter-Drinfeld module structure on \(H^{\otimes E}\) is_ \[h\rhd(a^{1}\otimes b^{1}\otimes\ldots\otimes a^{g}\otimes b^{g}) \tag{21}\] \[\qquad=h_{(3)}a^{1}S(h_{(1)})\otimes h_{(4)}b^{1}S(h_{(2)}) \otimes\ldots\otimes h_{(4g-1)}a^{g}S(h_{(4g-3)})\otimes h_{(4g)}b^{g}S(h_{(4 g-2)})\] \[\delta(a^{1}\otimes b^{1}\otimes\ldots\otimes a^{g}\otimes b^{g})\] \[\qquad=S(b^{g}_{(3)})a^{g}_{(1)}b^{g}_{(1)}S(a^{g}_{(3)})\cdots S (b^{1}_{(3)})a^{1}_{(1)}b^{1}_{(1)}S(a^{1}_{(3)})\otimes a^{1}_{(2)}\otimes b ^{1}_{(2)}\otimes\ldots\otimes a^{g}_{(2)}\otimes b^{g}_{(2)}.\] _If \(H\) is a group object in a cartesian monoidal category, this reduces to_ \[h\rhd(a_{1},b_{1},\ldots,a_{g},b_{g})=(ha_{1}h^{-1},hb_{1}h^{-1},\ldots,ha_{g}h^{-1},hb_{g}h^{-1}) \tag{22}\] \[\delta(a_{1},b_{1},\ldots,a_{g},b_{g})=([b^{-1}_{g},a_{g}]\cdots[ b^{-1}_{1},a_{1}],a_{1},b_{1},\ldots,a_{g},b_{g}).\] If each vertex and face of \(\Gamma\) is equipped with a cilium, then Definition 4.3 assigns an \(H\)-(co)module structure on \(H^{\otimes E}\) to each vertex (face) of \(\Gamma\). By Lemma 4.5 these (co)module structures commute and hence combine into \(H^{\otimes E}\)-module and \(H^{\otimes F}\)-comodule structures on \(H^{\otimes E}\). **Definition 4.7**.: _The \(H^{\otimes n}\)-module structure for a subset \(\emptyset\neq\mathcal{V}:=\{v_{1},\ldots,v_{n}\}\subset V\) and \(H^{\otimes m}\)-comodule structure for a subset \(\emptyset\neq\mathcal{F}:=\{f_{1},\ldots,f_{m}\}\subset F\) are_ \[\rhd_{\mathcal{V}} :=\rhd_{v_{1}}\circ(1_{H}\otimes\rhd_{v_{2}})\circ\cdots\circ(1 _{H^{\otimes(n-2)}}\otimes\rhd_{v_{n-1}})\circ(1_{H^{\otimes(n-1)}}\otimes \rhd_{v_{n}}):H^{\otimes n}\otimes H^{\otimes E}\to H^{\otimes E}, \tag{23}\] \[\delta_{\mathcal{F}} :=(1_{H^{\otimes(m-1)}}\otimes\delta_{f_{m}})\circ(1_{H^{\otimes (m-2)}}\otimes\delta_{f_{m-1}})\circ\cdots\circ(1_{H}\otimes\delta_{f_{2}}) \circ\delta_{f_{1}}:H^{\otimes E}\to H^{\otimes m}\otimes H^{\otimes E}.\] Equations (18) and (19) ensure that the (co)actions do not depend on the numbering of vertices or faces in Definition 4.7. That (23) defines an \(H^{\otimes n}\)-module structure follows from the identity \[\rhd_{\mathcal{V}^{\prime}}\circ(1_{H^{\otimes|\mathcal{V}^{\prime}|}}\otimes \rhd_{v})\circ(\tau_{H,H^{\otimes|\mathcal{V}^{\prime}|}}\otimes 1_{H^{\otimes E}})=\rhd_{v} \circ(1_{H}\otimes\rhd_{\mathcal{V}^{\prime}}),\] valid for any subset \(\emptyset\neq\mathcal{V}^{\prime}\subset V\), \(v\in V\setminus\mathcal{V}^{\prime}\). The dual statement for \(\delta_{\mathcal{F}}\) follows analogously. The module and comodule structure from Definition 4.7 define the categorical counterpart of the _protected space_ or _ground state_ in Kitaev's quantum double model. In the models based on a finite-dimensional semisimple complex Hopf algebras in [Ki, BMCA] the ground state is an eigenspace of a Hamiltonian that combines these \(H\)-(co)module structures. The normalised Haar integral defines a projector on the ground state. In our setting these structures are not available. Instead, we consider the binvariants from Definition 2.13 for the action and coaction from (23). 
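The group case (22) is simple enough to experiment with on a computer. The following Python sketch is ours and not taken from [Ki, BMCA, MV]: it assumes a finite group given by an element list with multiplication and inversion, implements the vertex action and the first tensor factor of the face coaction for the standard graph of genus \(g\), and enumerates the biinvariants that Definition 4.8 below calls the protected object.

```python
from itertools import product

# A minimal computational sketch of the group case (22); it assumes a finite group
# given by an element list with multiplication and inversion. All names here are
# illustrative and not taken from the references.

def commutator(x, y, mul, inv):
    # [x, y] = x y x^{-1} y^{-1}
    return mul(mul(x, y), mul(inv(x), inv(y)))

def face_coaction_factor(m, mul, inv):
    # First tensor factor of delta in (22): [b_g^{-1}, a_g] ... [b_1^{-1}, a_1]
    g = len(m) // 2
    result = None
    for i in range(g):
        a, b = m[2 * i], m[2 * i + 1]
        c = commutator(inv(b), a, mul, inv)
        result = c if result is None else mul(c, result)
    return result

def vertex_action(h, m, mul, inv):
    # h |> (a_1, b_1, ..., a_g, b_g) = (h a_1 h^{-1}, ..., h b_g h^{-1}) as in (22)
    return tuple(mul(mul(h, x), inv(h)) for x in m)

def protected_set(elements, mul, inv, unit, genus):
    # Tuples with trivial face holonomy, up to simultaneous conjugation:
    # the biinvariants of Definition 4.8 in the group case.
    flat = [m for m in product(elements, repeat=2 * genus)
            if face_coaction_factor(m, mul, inv) == unit]
    return {frozenset(vertex_action(h, m, mul, inv) for h in elements) for m in flat}

# Example: S_3 as permutations of {0, 1, 2}, genus 1.
S3 = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]
mul = lambda p, q: tuple(p[q[i]] for i in range(3))          # (p q)(i) = p(q(i))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))  # inverse permutation
print(len(protected_set(S3, mul, inv, unit=(0, 1, 2), genus=1)))  # 8 classes of commuting pairs
```

For genus \(g=1\) the output is the number of conjugation orbits of commuting pairs in \(S_{3}\), matching the description of the protected object as \(\mathrm{Hom}(\pi_{1}(\Sigma),G)/G\) in Example 5.23 below.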
**Definition 4.8**.: _The_ **protected object** _for an involutive Hopf monoid \(H\) and a ciliated ribbon graph \(\Gamma\) are the binvariants \(M_{inv}=\operatorname{Im}(\pi\circ\iota)\) of \(H^{\otimes E}\) with the module structure \(\rhd_{V}\) and comodule structure \(\delta_{F}\) from (23)._ In the quantum double models for a finite-dimensional semisimple complex Hopf algebra it is directly apparent that imposing (co)invariance under all individual (co)actions at the vertices (faces) of a graph is the same as imposing (co)invariance under the combined action in Definition 4.7. In this setting the (co)invariants for the individual (co)actions are linear subspaces of \(H^{\otimes E}\) and the (co)invariants of the combined (co)actions their intersections. In our setting an analogous statement follows from the universal properties of the coequaliser \(\pi_{\mathcal{V}}:H^{\otimes E}\to M^{H}_{\mathcal{V}}\) for the action \(\rhd_{\mathcal{V}}\) and the equaliser \(\iota_{\mathcal{F}}:M^{coH}_{\mathcal{F}}\to H^{\otimes E}\) for the coaction \(\delta_{\mathcal{F}}\), as given in Definition 2.11. **Lemma 4.9**.: _Let \(\emptyset\neq\mathcal{V}\subset V\), \(\emptyset\neq\mathcal{F}\subset F\) be subsets._ 1. _For any subset_ \(\emptyset\neq\mathcal{V}^{\prime}\subset\mathcal{V}\) _the morphism_ \(\pi_{\mathcal{V}}:H^{\otimes E}\to M^{H}_{\mathcal{V}}\) _satisfies_ \[\pi_{\mathcal{V}}\circ\rhd_{\mathcal{V}^{\prime}}=\pi_{\mathcal{V}}\circ( \epsilon^{|\mathcal{V}^{\prime}|}\otimes 1_{H^{\otimes E}}).\] (24) _There is a unique morphism_ \(\chi_{\mathcal{V}^{\prime},\mathcal{V}}:M^{H}_{\mathcal{V}^{\prime}}\to M^{H} _{\mathcal{V}}\) _with_ \(\chi_{\mathcal{V}^{\prime},\mathcal{V}}\circ\pi_{\mathcal{V}^{\prime}}=\pi_{ \mathcal{V}}\)_. It is an epimorphism._ 2. _For any subset_ \(\emptyset\neq\mathcal{F}^{\prime}\subset\mathcal{F}\) _the morphism_ \(\iota_{\mathcal{F}}:M^{coH}_{\mathcal{F}}\to H^{\otimes E}\) _satisfies_ \[\delta_{\mathcal{F}^{\prime}}\circ\iota_{\mathcal{F}}=(\eta^{|\mathcal{F}^{ \prime}|}\otimes 1_{H^{\otimes E}})\circ\iota_{\mathcal{F}}.\] (25) _There is a unique morphism_ \(\xi_{\mathcal{F}^{\prime},\mathcal{F}}:M^{coH}_{\mathcal{F}}\to M^{coH}_{ \mathcal{F}^{\prime}}\) _with_ \(\iota_{\mathcal{F}^{\prime}}\circ\xi_{\mathcal{F}^{\prime},\mathcal{F}}=\iota_ {\mathcal{F}}\)_. It is a monomorphism._ Proof.: We prove 1., as 2. is the dual statement. It suffices to verify (24) for \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\), \(\mathcal{V}^{\prime}=\{v_{j}\}\), and the claim follows by induction over \(|\mathcal{V}^{\prime}|\). For this note first that Definition 4.7 implies \[\rhd_{\mathcal{V}}\circ(\eta^{\otimes(j-1)}\otimes 1_{H}\otimes\eta^{\otimes(n-j )}\otimes 1_{H^{\otimes E}})=\rhd_{v_{j}}\qquad\qquad\forall j\in\{1,\ldots,n\}. 
\tag{26}\] As \(\pi_{\mathcal{V}}\) is the coequaliser of \(\rhd_{\mathcal{V}}\) and \(\epsilon^{\otimes n}\otimes 1_{H^{\otimes E}}\) one obtains \[\pi_{\mathcal{V}}\circ\rhd_{v_{j}}\stackrel{{(26)}}{{=}}\pi_{\mathcal{V}}\circ\rhd_{\mathcal{V}}\circ(\eta^{\otimes(j-1)}\otimes 1_{H}\otimes\eta^{\otimes(n-j)}\otimes 1_{H^{\otimes E}})\] \[=\pi_{\mathcal{V}}\circ(\epsilon^{\otimes n}\otimes 1_{H^{\otimes E}})\circ(\eta^{\otimes(j-1)}\otimes 1_{H}\otimes\eta^{\otimes(n-j)}\otimes 1_{H^{\otimes E}})\,=\,\pi_{\mathcal{V}}\circ(\epsilon\otimes 1_{H^{\otimes E}}).\] Equation (24) and the universal property of the coequaliser \(\pi_{\mathcal{V}^{\prime}}\) imply the existence of a unique morphism \(\chi_{\mathcal{V}^{\prime},\mathcal{V}}:M^{H}_{\mathcal{V}^{\prime}}\to M^{H}_{\mathcal{V}}\) with \(\chi_{\mathcal{V}^{\prime},\mathcal{V}}\circ\pi_{\mathcal{V}^{\prime}}=\pi_{\mathcal{V}}\). For any two morphisms \(q_{1},q_{2}:M^{H}_{\mathcal{V}}\to X\) with \(q_{1}\circ\chi_{\mathcal{V}^{\prime},\mathcal{V}}=q_{2}\circ\chi_{\mathcal{V}^{\prime},\mathcal{V}}\) one has \(q_{1}\circ\pi_{\mathcal{V}}=q_{1}\circ\chi_{\mathcal{V}^{\prime},\mathcal{V}}\circ\pi_{\mathcal{V}^{\prime}}=q_{2}\circ\chi_{\mathcal{V}^{\prime},\mathcal{V}}\circ\pi_{\mathcal{V}^{\prime}}=q_{2}\circ\pi_{\mathcal{V}}\). As \(\pi_{\mathcal{V}}\) is a coequaliser and hence an epimorphism, this implies \(q_{1}=q_{2}\), and \(\chi_{\mathcal{V}^{\prime},\mathcal{V}}\) is an epimorphism. It is also directly apparent from Definition 4.7 that (co)module morphisms with respect to all individual (co)module structures at vertices and faces in \(\mathcal{V}\) and \(\mathcal{F}\) are also (co)module morphisms with respect to the (co)actions \(\rhd_{\mathcal{V}}\) and \(\delta_{\mathcal{F}}\). More precisely, for ciliated ribbon graphs \(\Gamma,\Gamma^{\prime}\), subsets \(\emptyset\neq\mathcal{V}\subset V\), \(\emptyset\neq\mathcal{V}^{\prime}\subset V^{\prime}\) and a bijection \(\varphi:\mathcal{V}\rightarrow\mathcal{V}^{\prime}\), \(v\mapsto v^{\prime}\), any morphism \(g:H^{\otimes E}\to H^{\otimes E^{\prime}}\) that is a module morphism with respect to \(\rhd_{v}\) and \(\rhd_{v^{\prime}}\) for all \(v\in\mathcal{V}\) is also a module morphism with respect to \(\rhd_{\mathcal{V}}\) and \(\rhd_{\mathcal{V}^{\prime}}\). An analogous statement holds for \(\delta_{\mathcal{F}}\) and comodule morphisms.

## 5 Graph independence

In this section we show that the protected object from Definition 4.8 is a topological invariant: Although its definition requires a ciliated ribbon graph \(\Gamma\), its isomorphism class depends only on the homeomorphism class of the surface obtained by attaching discs to the faces of \(\Gamma\). To prove this, we show first in Section 5.1 that the (co)invariants associated to the (co)module structures at the vertices (faces) of \(\Gamma\) depend neither on the edge orientation nor on the choices of the cilia. Reversing the orientation of edges and different choices of cilia yield isomorphisms between these (co)invariants and hence also between the biinvariants. We then show in Sections 5.2 and 5.3 that the other graph transformations from Definition 3.4 induce isomorphisms between the protected objects, although not necessarily between the (co)invariants. In Section 5.4 we combine these results to obtain topological invariance and treat some simple examples.
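In the group case the structures entering these statements can again be made explicit computationally. The following sketch, with our own (hypothetical) helper names and data layout, evaluates the vertex action and the face holonomy of Definition 4.3 for an arbitrary graph whose edges carry group elements; for the standard genus-1 graph it reproduces formula (22). The cyclic ordering at a vertex is irrelevant in the group case, and moving a face cilium only conjugates the holonomy, so both are suppressed.

```python
# A sketch, in the group case, of the vertex action and face coaction of
# Definition 4.3 for an arbitrary graph. Edge labels live in a group with
# operations mul and inv; the data layout and names are ours.

def act_at_vertex(h, labels, incident_ends, mul, inv):
    # incident_ends: list of (edge, 's' or 't') for the edge ends at the vertex.
    # Target ends are acted on by left multiplication, starting ends by right
    # multiplication with the inverse; a loop at the vertex receives both.
    new = dict(labels)
    for edge, end in incident_ends:
        new[edge] = mul(h, new[edge]) if end == 't' else mul(new[edge], inv(h))
    return new

def face_holonomy(labels, face, mul, inv):
    # face: list of (edge, +1 or -1); -1 means the edge is traversed against its
    # orientation. In the group case the coaction delta_f places this product
    # (read in list order) in the first tensor factor.
    hol = None
    for edge, sign in face:
        g = labels[edge] if sign == +1 else inv(labels[edge])
        hol = g if hol is None else mul(hol, g)
    return hol

# Standard genus-1 graph: one vertex, loops 'a' and 'b', one face whose holonomy
# is the commutator [b^{-1}, a] = b^{-1} a b a^{-1}, as in formula (22).
incident_ends = [('a', 's'), ('a', 't'), ('b', 's'), ('b', 't')]
face = [('b', -1), ('a', +1), ('b', +1), ('a', -1)]
```

Combining these functions with a flatness test and a quotient by the actions at all vertices, as in the previous sketch, yields the protected set of any ciliated ribbon graph in the group case.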
As in Section 4 we consider a complete and finitely cocomplete symmetric monoidal category \(\mathcal{C}\), an involutive Hopf monoid \(H\) in \(\mathcal{C}\) and a ciliated ribbon graph \(\Gamma\). ### Edge orientation reversal and moving the cilium As edge orientation reversal switches the start and target and the left and right side of an edge \(\alpha\in E\), it exchanges the associated actions \(\rhd_{\alpha\pm}\) and coactions \(\delta_{\alpha\pm}\) from Definition 4.1. It is directly apparent from their definitions that this is achieved by applying the antipode. **Definition 5.1**.: _The automorphism of \(H^{\otimes E}\) associated to the_ **reversal** _of an edge \(\alpha\in E\) is \(S_{\alpha}:H^{\otimes E}\to H^{\otimes E}\)._ **Lemma 5.2**.: _For any ciliated vertex \(v\in V\), ciliated face \(f\in F\) and edge \(\beta\in E\) the edge reversal \(S_{\beta}\) is an isomorphism of \(H\)-modules and \(H\)-comodules with respect to \(\rhd_{v}\) and \(\delta_{f}\)._ Proof.: We denote by \(\rhd_{v}^{\prime}\) and \(\delta_{f}^{\prime}\) the module and comodule structure in the graph where the orientation of \(\beta\) is reversed and verify that \(\rhd_{v}^{\prime}\circ(1_{H}\otimes S_{\beta})=S_{\beta}\circ\rhd_{v}\) and \(\delta_{f}^{\prime}\circ S_{\beta}=(1_{H}\otimes S_{\beta})\circ\delta_{f}\). If \(\beta\) is not incident at \(v\) and \(f\), the copy of \(H\) in \(H^{\otimes E}\) assigned to \(\beta\) is not affected by \(\rhd_{v},\rhd_{v}^{\prime}\) and \(\delta_{f},\delta_{f}^{\prime}\), and the identity follows directly. If \(\beta\) is incident at \(v\) or \(f\), it follows from the expressions for the (co)actions in Definitions 4.1 and 4.3. As a direct consequence of Lemma 5.2, Lemma 2.12 and Lemma 2.16 one has **Corollary 5.3**.: _Reversing the orientation of an edge in \(\Gamma\) to obtain \(\Gamma^{\prime}\) induces isomorphisms between the invariants, coinvariants and protected objects of \(\Gamma\) and \(\Gamma^{\prime}\)._ **Lemma 5.4**.: _The (co)invariants for the \(H\)-(co)module structure at a given vertex (face) do not depend on the choice of cilia: moving the position of the cilium yields isomorphic (co)invariants. This induces isomorphisms of the protected objects._ Proof.: We focus on the \(H\)-module structure and its invariants. We consider a fixed position of the cilium at a vertex \(v\) with associated vertex action \(\rhd_{v}\) and coequaliser \(\pi_{v}:H^{\otimes E}\to M_{v}^{H}\) and compare it to the action \(\rhd_{v}^{\prime}\) and coequaliser \(\pi_{v}^{\prime}:H^{\otimes E}\to M_{v}^{H}\) obtained by rotating the cilium counterclockwise by one position. We first show that the coequaliser \(\pi_{v}:H^{\otimes E}\to M_{v}^{H}\) satisfies \[\pi_{v}\circ\rhd_{v}^{\prime}=\pi_{v}\circ(\epsilon\otimes 1_{H^{\otimes E}}). \tag{27}\] By definition of the \(H\)-module structure \(\rhd_{v}\) and by Lemma 5.2 it is sufficient to prove this for a vertex with \(n\) incoming edges. The computations for vertices with incident loops are analogous. 
For a vertex with \(n\) incoming edges we have \[\pi_{v}\circ\rhd_{v}^{\prime}\left(h\otimes a^{1}\otimes a^{2}\otimes\ldots\otimes a^{n}\right)\,=\,\pi_{v}\left(h_{(n)}a^{1}\otimes h_{(1)}a^{2}\otimes\ldots\otimes h_{(n-1)}a^{n}\right)\] \[=\pi_{v}\left(h_{(2)(1)}S(h_{(1)})h_{(3)}a^{1}\otimes h_{(2)(2)}a^{2}\otimes\ldots\otimes h_{(2)(n)}a^{n}\right)\] \[=\pi_{v}\circ\rhd_{v}\left(h_{(2)}\otimes S(h_{(1)})h_{(3)}a^{1}\otimes a^{2}\otimes\ldots\otimes a^{n}\right)=\pi_{v}\left(\epsilon(h_{(2)})\,S(h_{(1)})h_{(3)}a^{1}\otimes a^{2}\otimes\ldots\otimes a^{n}\right)\] \[=\pi_{v}\circ(\epsilon\otimes 1_{H^{\otimes n}})\left(h\otimes a^{1}\otimes a^{2}\otimes\ldots\otimes a^{n}\right),\] where we used first the definition of \(\rhd_{v}^{\prime}\), then the defining property of the antipode and that \(S\circ S=1_{H}\), then the definition of \(\rhd_{v}\), the fact that \(\pi_{v}\) coequalises \(\rhd_{v}\) and \(\epsilon\otimes 1_{H^{\otimes E}}\) and then again the defining properties of the antipode and the counitality of \(H\). Inductively, we obtain (27) for all positions of the cilium at \(v\) and the same identity with \(\pi_{v},\rhd_{v}\) and \(\pi_{v}^{\prime},\rhd_{v}^{\prime}\) swapped. With the universal property of the coequalisers \(\pi_{v}\), \(\pi_{v}^{\prime}\) this yields unique morphisms \(\phi:M_{v}^{H}\to M_{v}^{\prime H}\), \(\phi^{\prime}:M_{v}^{\prime H}\to M_{v}^{H}\) with \(\phi\circ\pi_{v}=\pi_{v}^{\prime}\) and \(\phi^{\prime}\circ\pi_{v}^{\prime}=\pi_{v}\). As \(\pi_{v}\), \(\pi_{v}^{\prime}\) are epimorphisms, this implies \(\phi^{\prime}=\phi^{-1}\). The dual claim for the comodule structure and its coinvariants follows analogously. For all positions of the cilium at \(f\) with associated coaction \(\delta_{f}^{\prime}\), there is a unique morphism \(\psi:M^{coH}\to M^{\prime coH}\) with \(\iota_{f}^{\prime}\circ\psi=\iota_{f}\), and \(\psi\) is an isomorphism. Combining these statements for the (co)invariants of all vertices (faces) and using Lemmas 2.16 and 4.9 yields isomorphisms of the protected objects.

### Edge slides and edge contractions

We now consider the edge slides and edge contractions from Definition 3.4. Edge slides were already investigated in [MV], where it was shown that they define mapping class group actions. They yield automorphisms of the object \(H^{\otimes E}\) that are morphisms of \(H\)-modules and \(H\)-comodules as long as no edge ends slide over cilia. **Definition 5.5**.: _[MV, Def. 6.1] Let \(\alpha\neq\beta\) be edges of \(\Gamma\) with the starting end of \(\alpha\) directly before the target end of \(\beta\) in the ordering at \(s(\alpha)=t(\beta)\). The_ **edge slide** _of the target end of \(\beta\) along \(\alpha\) corresponds to the isomorphism_ \[S_{\alpha,\beta}:=\rhd_{\beta+}\circ\delta_{\alpha+}:H^{\otimes E}\to H^{\otimes E}\text{ with }S_{\alpha,\beta}^{-1}=\rhd_{\beta+}\circ(S\otimes 1_{H^{\otimes E}})\circ\delta_{\alpha+}:H^{\otimes E}\to H^{\otimes E}.\] _Edge slides for other edge orientations are defined by reversing edge orientations with the antipode._ **Example 5.6**.: _The isomorphisms induced by the edge slides in panels (a)-(d) of the associated figure (not reproduced here) are obtained by (a) applying Definition 5.5 and (b) first reversing the orientation of \(\alpha\), applying the inverse edge slide from Definition 5.5 and then reversing the orientation of \(\alpha\).
This yields_ \[(a)\quad S_{\alpha,\beta}(\alpha\otimes\beta\otimes\gamma\otimes\delta\otimes\mu\otimes\nu) =\alpha_{(2)}\otimes\alpha_{(1)}\beta\otimes\gamma\otimes\delta\otimes\mu\otimes\nu,\] \[(b)\quad S_{\alpha,\beta}(\alpha\otimes\beta\otimes\gamma\otimes\delta\otimes\mu\otimes\nu) =\alpha_{(1)}\otimes\beta\otimes\gamma\otimes\alpha_{(2)}\delta\otimes\mu\otimes\nu.\] By construction, edge slides affect only the two copies of \(H\) in \(H^{\otimes E}\) of the edges involved in the slide and commute with edge orientation reversals. Moreover, they respect the module and comodule structures at vertices and faces and hence induce isomorphisms between the protected objects. **Proposition 5.7**.: _[MV, Prop. 6.2] Let \(v\) and \(f\) be a ciliated vertex and face in a ribbon graph \(\Gamma\) with associated \(H\)-module structure \(\rhd_{v}\) and \(H\)-comodule structure \(\delta_{f}\). Any edge slide that does not slide edge ends over their cilia is an isomorphism of \(H\)-left modules and \(H\)-left comodules with respect to \(\rhd_{v}\) and \(\delta_{f}\)._ **Corollary 5.8**.: _Edge slides from a ribbon graph \(\Gamma\) to a ribbon graph \(\Gamma^{\prime}\) induce isomorphisms between the invariants, coinvariants and protected objects of \(\Gamma\) and \(\Gamma^{\prime}\)._ Proof.: For edge slides that do not slide edge ends over cilia, this follows directly from Lemmas 2.12, 2.16 and Proposition 5.7. If an edge end slides over a cilium, we can apply Lemma 5.4 to move the cilium and obtain the same result. We now consider edge contractions. Recall from Definition 3.4 that an edge \(\alpha\in E\) may only be contracted if its starting and target vertex differ and that contracting \(\alpha\) towards \(v\in\{s(\alpha),t(\alpha)\}\) erases the cilium at \(v\), while the cilium at the other vertex is preserved. **Definition 5.9**.: _The morphism \(c_{\alpha,v}:H^{\otimes E}\to H^{\otimes(E-1)}\) induced by an_ **edge contraction** _of an edge \(\alpha\) towards \(v\in\{s(\alpha),t(\alpha)\}\) is_ \[c_{\alpha,v}=\begin{cases}\rhd_{v,\alpha}\circ\tau_{\alpha}\circ S_{\alpha}&\text{ if }v=t(\alpha)\\ \rhd_{v,\alpha}\circ\tau_{\alpha}&\text{ if }v=s(\alpha)\end{cases}\] _where \(\rhd_{v,\alpha}:H^{\otimes E}\to H^{\otimes(E-1)}\) denotes the \(H\)-module structure from Definition 4.3 at \(v\), where \(\alpha\) is replaced by a cilium and \(\tau_{\alpha}\) is given before Definition 4.1. If \(v\) is univalent, then \(c_{\alpha,v}=\epsilon_{\alpha}\)._ **Example 5.10**.: _Contracting the edge \(\alpha\) towards \(v\) in a graph with edges \(\alpha,b,c,d,k,l\) (figure not reproduced here) gives the morphism \(c_{\alpha,v}\) with_ \[c_{\alpha,v}\left(\alpha\otimes b\otimes c\otimes d\otimes k\otimes l\right)=\alpha_{(3)}b\otimes cS(\alpha_{(4)})\otimes\alpha_{(1)}dS(\alpha_{(2)})\otimes k\otimes l.\] It follows directly from Definition 5.9 that first reversing the orientation of an edge \(\beta\) and then contracting it is the same as just contracting \(\beta\). It also follows from Definitions 4.1 and 4.3 that reversing the orientation of an edge \(\beta\) commutes with contractions of all edges \(\alpha\neq\beta\). The contraction of an edge \(\alpha\) also commutes with edge slides along \(\alpha\), which allows one to express any edge contraction as a composite of edge slides and an edge contraction towards a univalent vertex. **Lemma 5.11**.: _Let \(\Gamma^{\prime}\) be obtained by reversing an edge \(\beta\) in \(\Gamma\)._
Then_ \[c^{\prime}_{\beta,v}\circ S_{\beta} =c_{\beta,v} c^{\prime}_{\alpha,v}\circ S_{\beta} =S_{\beta}\circ c_{\alpha,v}\quad\text{for }\alpha\neq\beta. \tag{28}\] **Lemma 5.12**.: _Contracting an edge \(\alpha\) gives the same morphism as first sliding edge ends along \(\alpha\) and then contracting \(\alpha\)._ Proof.: It suffices to slide a single edge end along \(\alpha\), as the statement follows inductively. We denote by \(c_{\alpha,v}\) the contraction of \(\alpha\) in \(\Gamma\) and by \(c^{\prime}_{\alpha,v}\) the contraction of \(\alpha\) in the graph \(\Gamma^{\prime}\) obtained by sliding an edge \(b\) along \(\alpha\). Suppose that there are no loops incident at \(s(\alpha)\) and \(t(\alpha)\) in \(\Gamma\) and \(\Gamma^{\prime}\). As edge slides and edge contractions commute with edge reversals by Definition 5.5 and Lemma 5.11, respectively, we can assume \(v=s(\alpha)\) and all other edge ends at \(v\) and \(w=t(\alpha)\) are incoming. It is then sufficient to consider an edge slide of \(b\) along the left and right of \(\alpha\): \[\tikzfig{height=1.5} \tag{29}\] Omitting the copies of \(H\) for edges not incident at \(v,w\) we compute for the edge slides in (29) \[c^{\prime}_{\alpha,v}\circ S_{\alpha,b}(\alpha\otimes b\otimes c \otimes d\otimes k\otimes l) =c^{\prime}_{\alpha,v}(\alpha_{(2)}\otimes\alpha_{(1)}b\otimes c \otimes d\otimes k\otimes l)\] \[=\alpha_{(1)}b\otimes\alpha_{(2)}c\otimes\alpha_{(3)}d\otimes k \otimes l\,=\,c_{\alpha,v}(\alpha\otimes b\otimes c\otimes d\otimes k \otimes l),\] \[c^{\prime}_{\alpha,v}\circ S_{\alpha,b}(\alpha\otimes b\otimes c \otimes d\otimes k\otimes l) =c^{\prime}_{\alpha,v}(\alpha_{(1)}\otimes\alpha_{(2)}b\otimes c \otimes d\otimes k\otimes l)\] \[=\alpha_{(3)}b\otimes\alpha_{(1)}c\otimes\alpha_{(2)}d\otimes k \otimes l\,=\,c_{\alpha,v}(\alpha\otimes b\otimes c\otimes d\otimes k \otimes l).\] As edge slides from \(w\) to \(v\) are the inverses of edge slides from \(v\) to \(w\), the corresponding identities for those follow by pre-composing with the inverses. The proof for vertices with different numbers of incident edge ends or incident loops is analogous. Next, we consider the interaction of edge contractions with the (co)module structures for the vertices (faces) of the graph. For this, note that the contraction of an edge \(\alpha\) towards \(v\in\{s(\alpha),t(\alpha)\}\) defines a bijection between the sets \(F,F^{\prime}\) of faces before and after the contraction and likewise a bijection between the sets \(V\setminus\{v\}\) and \(V^{\prime}\). If faces and vertices are identified via these bijections, the edge contraction becomes a (co)module morphism. In contrast, the module structure \(\rhd_{v}\) is coequalised. **Lemma 5.13**.: _The contraction of an edge \(\alpha\) towards a ciliated vertex \(v\) coequalises \(\rhd_{v}\) and \(\epsilon\otimes 1_{H^{\otimes E}}\) and is a (co)module morphism with respect to the (co)actions \(\rhd_{z}\) and \(\delta_{f}\) for all ciliated vertices \(z\neq v\) and ciliated faces \(f\in F\) that do not start at \(v\):_ \[c_{\alpha,v}\circ\rhd_{v} =c_{\alpha,v}\circ(\epsilon\,\otimes\,1_{H^{\otimes E}}), \tag{30}\] \[c_{\alpha,v}\circ\rhd_{z} =\rhd_{z}^{\prime}\circ(1_{H}\,\otimes\,c_{\alpha,v}),\] (31) \[\delta^{\prime}_{f}\circ c_{\alpha,v} =(1_{H}\,\otimes\,c_{\alpha,v})\circ\delta_{f}. 
\tag{32}\] Proof.: As edge slides along \(\alpha\) are module and comodule isomorphisms by Proposition 5.7 and commute with the contraction of \(\alpha\) by Lemma 5.12, we can assume that \(v\) is univalent. With Lemma 5.11 we can assume that \(v=t(\alpha)\) and that all edge ends at \(w=s(\alpha)\) are incoming: \[\tikzfig{height=1.5} \tag{33}\] For the vertices \(v\) and \(w\) in (33) we compute \[c_{\alpha,v}\circ\rhd_{v}(h\otimes\alpha\otimes b\otimes c\otimes d) =c_{\alpha,v}(h\alpha\otimes b\otimes c\otimes d)=\epsilon(h\alpha)b \otimes c\otimes d=c_{\alpha,v}(\epsilon(h)\alpha\otimes b\otimes c\otimes d)\] \[=c_{\alpha,v}\circ(\epsilon\otimes 1_{H^{\otimes E}})(h\otimes\alpha \otimes b\otimes c\otimes d)\] \[c_{\alpha,v}\circ\rhd_{w}(h\otimes\alpha\otimes b\otimes c \otimes d) =c_{\alpha,v}(\alpha S(h_{(3)})\otimes h_{(4)}b\otimes h_{(1)}c \otimes h_{(2)}d)\] \[=\epsilon(\alpha)h_{(3)}b\otimes h_{(1)}c\otimes h_{(2)}d=\rhd_{w }^{\prime}\circ(1_{H}\otimes c_{\alpha,v})(h\otimes\alpha\otimes b\otimes c \otimes d).\] The computations for graphs with a different number of edge ends or loops incident at \(w\) are analogous. For vertices \(z\in V\setminus\{v,w\}\) the action \(\rhd_{z}\) does not affect the copy of \(H\) for \(\alpha\) and commutes with \(\rhd_{v,\alpha}\) and hence with \(c_{v,\alpha}\). This proves (30) and (31). If \(f\) is a face that contains \(\alpha\), but does not start at \(v\), then the associated coaction is of the form \[\delta_{f}(\alpha\otimes b\otimes c\otimes d\otimes\ldots)=(\cdots S(d_{(2)}) S(\alpha_{(3)})\alpha_{(1)}b_{(1)}\cdots)\otimes\alpha_{(2)}\otimes b_{(2)} \otimes c\otimes d_{(1)}\otimes\ldots,\] where the dots stand for contributions of parts of \(\Gamma\) that are not drawn in (33). This yields \[(1_{H}\otimes c_{\alpha,v})\circ\delta_{f}(\alpha\otimes b\otimes c \otimes d\otimes\ldots)=\epsilon(\alpha_{(2)})(\cdots S(d_{(2)})S(\alpha_{(3) })\alpha_{(1)}b_{(1)}\cdots)\otimes b_{(2)}\otimes c\otimes d_{(1)}\otimes\ldots\] \[\stackrel{{(\ref{eq:f})}}{{=}}\epsilon(\alpha)( \cdots S(d_{(2)})b_{(1)}\cdots)\otimes b_{(2)}\otimes c\otimes d_{(1)} \otimes\ldots=\delta^{\prime}_{f}\circ c_{\alpha,v}(\alpha\otimes b\otimes c \otimes d\otimes\ldots).\] If \(f\) does not contain \(\alpha\), the edge \(\alpha\) does not contribute to the coaction \(\delta_{f}\), which proves (32). With these results, we investigate how edge contractions interact with the (co)invariants of the \(H\)-(co)module structures at ciliated vertices and faces of \(\Gamma\). For subsets \(\emptyset\neq\mathcal{V}\subset V\) and \(\emptyset\neq\mathcal{F}\subset F\) we denote by \(\rhd_{\mathcal{V}}\) and \(\delta_{\mathcal{F}}\) the associated \(H^{\otimes\mathcal{V}}\)-module structure and \(H^{\otimes\mathcal{F}}\)-comodule structure from (23) and by \(\pi_{\mathcal{V}}\) and \(\iota_{\mathcal{F}}\) their invariants and coinvariants from Definition 2.11. We then find that edge contractions send coinvariants for \(\delta_{\mathcal{F}}\) to coinvariants for the corresponding face set in the contracted graph. The same holds for the invariants of the action \(\rhd_{\mathcal{V}}\), as long as \(\mathcal{V}\) contains the starting and target vertex of the contracted edge. The morphism \(\eta_{\alpha}\) that creates a copy of \(H\) assigned to \(\alpha\) by applying the unit of \(H\) is right inverse to the edge contraction \(c_{v,\alpha}\) and a left inverse on the coinvariants. This corresponds to the following technical lemma. 
**Lemma 5.14**.: _Let \(\Gamma^{\prime}\) be obtained from \(\Gamma\) by contracting an edge \(\alpha\) incident at \(v,w\in V\). Then \(\eta_{\alpha}:H^{\otimes(E-1)}\to H^{\otimes E}\) is right inverse to the edge contraction \(c_{\alpha,v}:H^{\otimes E}\to H^{\otimes(E-1)}\), and for all subsets \(\{v,w\}\subset\mathcal{V}\subset V\), \(\emptyset\neq\mathcal{F}\subset F\) one has_ \[\delta^{\prime}_{\mathcal{F}}\circ c_{\alpha,v}\circ\iota_{ \mathcal{F}} =(\eta^{\otimes|\mathcal{F}|}\otimes c_{\alpha,v})\circ\iota_{ \mathcal{F}} \tag{34}\] \[\delta_{\mathcal{F}}\circ\eta_{\alpha}\circ\iota^{\prime}_{ \mathcal{F}} =(\eta^{\otimes|\mathcal{F}|}\otimes\eta_{\alpha})\circ\iota^{ \prime}_{\mathcal{F}}\] (35) \[\pi_{\mathcal{V}}\circ\eta_{\alpha}\circ c_{\alpha,v} =\pi_{\mathcal{V}}\] (36) \[\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v}\circ\rhd_{\mathcal{V }} =\pi^{\prime}_{\mathcal{V}}\circ(\epsilon^{\otimes|\mathcal{V}|} \otimes c_{\alpha,v})\] (37) \[\pi_{\mathcal{V}}\circ\eta_{\alpha}\circ\rhd_{\mathcal{V}}^{\prime} =\pi_{\mathcal{V}}\circ(\epsilon^{\otimes|\mathcal{V}|-1}\otimes \eta_{\alpha}). \tag{38}\] Proof.: 1. It follows directly from Definition 5.9 that the morphism \(\eta_{\alpha}\) is a right inverse to \(c_{\alpha,v}\). From the formula for the (co)action in Definition 4.3 it is apparent that \(\eta_{\alpha}\) is a comodule morphism for the coactions \(\delta_{f}\) at all ciliated faces and a module morphism with respect to the actions \(\rhd_{z}\) at all vertices \(z\in V\setminus\{v,w\}\). Moreover, it is clear from Definition 5.5 that sliding edge ends over \(\alpha\) after applying \(\eta_{\alpha}\) yields a morphism \(\eta_{\alpha}^{\prime\prime}\) which splits the vertex \(w\) in a different way. Thus, we have \[c_{\alpha,v}\circ\eta_{\alpha}=1_{H^{\otimes E}},\quad\delta_{f}\circ\eta_{ \alpha}=\eta_{\alpha}\circ\delta^{\prime}_{f},\quad\eta_{\alpha}\circ\rhd_{z} ^{\prime}=\rhd_{z}\circ\eta_{\alpha},\quad S_{\alpha,\beta}\circ\eta_{ \alpha}=\eta_{\alpha}^{\prime\prime} \tag{39}\] for all vertices \(z\in V\setminus\{v,w\}\) and faces \(f\in F\) and edge slides \(S_{\alpha,\beta}\) along \(\alpha\). We can therefore assume that the vertex \(v=t(\alpha)\) is univalent, all edge ends at \(w=s(\alpha)\) are incoming, the graph \(\Gamma\) is locally given by (33) and the edge contraction by \(c_{\alpha,v}=\epsilon_{\alpha}\), as in the proof of Lemma 5.13. 2. We prove the auxiliary identities \[\pi_{v}\circ\eta_{\alpha}\circ c_{\alpha,v} =\pi_{v}, \tag{40}\] \[\pi_{\{v,w\}}\circ\eta_{\alpha}\circ(\epsilon\,\otimes\,1_{H^{ \otimes(E-1)}}) =\pi_{\{v,w\}}\circ\eta_{\alpha}\circ\rhd_{w}^{\prime},\] (41) \[\delta_{f}^{\prime}\circ c_{\alpha,v}\circ\iota_{f} =(\eta\,\otimes\,1_{H^{\otimes(E-1)}})\circ c_{\alpha,v}\circ \iota_{f}\qquad\forall f\in F. 
\tag{42}\] Omitting all copies of \(H\) in \(H^{\otimes E}\) except the one for \(\alpha\), we verify (40) \[\pi_{v}(\alpha\otimes\ldots)=\pi_{v}\circ(\alpha\rhd_{v})(1\otimes\ldots)=\pi _{v}(\epsilon(\alpha)\,1\otimes\ldots)=\pi_{v}\circ\eta_{\alpha}\circ\epsilon _{\alpha}(1\otimes\ldots)=\pi_{v}\circ\eta_{\alpha}\circ c_{\alpha,v}(1\otimes \ldots).\] To show (41), we consider the graph (33) and compute with Lemma 4.9 \[\pi_{\{v,w\}}\circ\eta_{\alpha}\circ\rhd_{w}^{\prime}(h\otimes b \otimes c\otimes d)=\pi_{\{v,w\}}\circ\eta_{\alpha}(h_{(3)}b\otimes h_{(1)}c \otimes h_{(2)}d)=\pi_{\{v,w\}}(1\otimes h_{(3)}b\otimes h_{(1)}c\otimes h_{( 2)}d)\] \[=\pi_{\{v,w\}}(h_{(3)}S(h_{(4)})\otimes h_{(5)}b\otimes h_{(1)}c \otimes h_{(2)}d)=\pi_{\{v,w\}}\circ h_{(3)}\rhd_{v}(S(h_{(4)})\otimes h_{(5) }b\otimes h_{(1)}c\otimes h_{(2)}d)\] \[=\pi_{\{v,w\}}(\epsilon(h_{(3)})\,S(h_{(4)})\otimes h_{(5)}b \otimes h_{(1)}c\otimes h_{(2)}d)=\pi_{\{v,w\}}(S(h_{(3)})\otimes h_{(4)}b \otimes h_{(1)}c\otimes h_{(2)}d)\] \[=\pi_{\{v,w\}}\circ h\rhd_{w}\,(1\otimes b\otimes c\otimes d)= \pi_{\{v,w\}}(\epsilon(h)\,1\otimes b\otimes c\otimes d)=\pi_{\{v,w\}}\circ \eta_{\alpha}\circ(\epsilon\otimes 1_{H^{\otimes(E-1)}})(b\otimes c\otimes d).\] Identity (42) follows from identity (32) in Lemma 5.13 for all faces \(f\in F\) that do not start at \(v\). If \(f\) starts at \(v\) one has for the graph in (33) \[\delta_{f}^{\prime}\circ c_{\alpha,v}(\alpha\otimes b\otimes c \otimes d)=\epsilon(\alpha)\delta_{f}^{\prime}(b\otimes c\otimes d)=\epsilon( \alpha)b_{(1)}\cdots S(d_{(2)})\otimes b_{(2)}\otimes c\otimes d_{(1)}\] \[=S(\alpha_{(2)})\alpha_{(1)}b_{(1)}\cdots S(d_{(2)})S(\alpha_{(4) })\alpha_{(3)}\otimes b_{(2)}\otimes c\otimes d_{(1)}\] \[=(\spherical_{ad}\otimes 1_{H^{\otimes(E-1)}})\circ(1_{H}\otimes\tau_{ \alpha})\circ\delta_{f}(\alpha\otimes b\otimes c\otimes d),\] where \(\spherical_{ad}:H\otimes H\to H\), \(h\otimes\alpha\mapsto S(\alpha_{(1)})h\alpha_{(2)}\). In this case, contracting \(\alpha\) deletes the cilium of \(f\), but Lemma 5.4 allows one to place a new cilium for \(f\) in any position. As \(\spherical_{ad}\circ(\eta\otimes 1_{H})=\eta\circ\epsilon:H\to H\) this yields \[\delta_{f}^{\prime}\circ c_{\alpha,v}\circ\iota_{f}=(\spherical_{ad }\otimes 1_{H^{\otimes(E-1)}})\circ(1_{H}\otimes\tau_{\alpha})\circ\delta_{f} \circ\iota_{f}=(\spherical_{ad}\otimes 1_{H^{\otimes(E-1)}})\circ(1_{H}\otimes\tau_{ \alpha})\circ(\eta\otimes 1_{H^{\otimes E}})\circ\iota_{f}\] \[=(\eta\circ\epsilon\otimes 1_{H^{\otimes(E-1)}})\circ\tau_{ \alpha}\circ\iota_{f}=(\eta\otimes 1_{H^{\otimes(E-1)}})\circ\epsilon_{\alpha} \circ\iota_{f}=(\eta\otimes 1_{H^{\otimes(E-1)}})\circ\epsilon_{\alpha,v}\circ\iota_{f}.\] 3. We prove the identities in the Lemma. Identity (34) follows by pre-composing (42) with the morphism \(\xi_{f,\mathcal{F}}:=\xi_{\{f\},\mathcal{F}}\) from Lemma 4.9 and inductively applying this equation for all \(f\in\mathcal{F}\). Likewise, identity (35) follows by applying the identity \(\delta_{f}\circ\eta_{\alpha}\circ\iota_{f}^{\prime}=(\eta\,\otimes\,1_{H^{ \otimes E}})\circ\eta_{\alpha}\circ\iota_{f}^{\prime}\) obtained from the second identity in (39) and pre-composing it with \(\xi_{f,\mathcal{F}}^{\prime}\). Post-composing (40) with the morphism \(\chi_{v,\mathcal{V}}:=\chi_{\{v\},\mathcal{V}}\) from Lemma 4.9 yields (36). 
From (31), we obtain for all \(z\in V\setminus\{v,w\}\) \[\pi_{\mathcal{V}}^{\prime}\circ c_{\alpha,v}\circ\rhd_{z}=\chi_{z,\mathcal{V}}^{\prime}\circ\pi_{z}^{\prime}\circ c_{\alpha,v}\circ\rhd_{z}=\chi_{z,\mathcal{V}}^{\prime}\circ\pi_{z}^{\prime}\circ\rhd_{z}^{\prime}\circ(1_{H}\otimes c_{\alpha,v})=\pi_{\mathcal{V}}^{\prime}\circ(\epsilon\otimes c_{\alpha,v}). \tag{43}\] Together with the identity \(\pi_{\mathcal{V}}^{\prime}\circ c_{\alpha,v}\circ\rhd_{w}\circ(1_{H}\otimes\rhd_{v})=\pi_{\mathcal{V}}^{\prime}\circ(\epsilon^{\otimes 2}\otimes c_{\alpha,v})\), which follows from (30) and (31) with \(z=w\) and the identity \(\pi_{\mathcal{V}}^{\prime}=\chi_{w,\mathcal{V}}^{\prime}\circ\pi_{w}^{\prime}\), this yields (37). Identity (38) follows by post-composing (41) with \(\chi_{\{v,w\},\mathcal{V}}\) and the third identity in (39) with \(\pi_{\mathcal{V}}=\chi_{z,\mathcal{V}}\circ\pi_{z}\). We now apply Lemma 5.14 to show that edge contractions induce morphisms between the coinvariants for \(\emptyset\neq\mathcal{F}\subset F\). If \(\mathcal{V}\) contains the starting and target vertex of the contracted edge, they also induce isomorphisms between the invariants and isomorphisms between the protected objects. For this, we consider a ciliated ribbon graph \(\Gamma\) and the graph \(\Gamma^{\prime}\) obtained by contracting an edge \(\alpha\) in \(\Gamma\). We denote by \(M^{coH}\), \(M^{H}\), \(M_{inv}\) the coinvariants, invariants and biinvariants of \(\delta_{\mathcal{F}}\), \(\rhd_{\mathcal{V}}\) for \(\Gamma\) and by \(M^{\prime coH}\), \(M^{\prime H}\), \(M^{\prime}_{inv}\) the corresponding quantities for \(\Gamma^{\prime}\). As in Lemma 4.9 we write \(\iota_{\mathcal{F}}\) and \(\pi_{\mathcal{V}}\) for the associated equaliser and coequaliser and \(I:M_{inv}\to M^{H}\) and \(P:M^{coH}\to M_{inv}\) for the monomorphism and epimorphism that characterise \(M_{inv}\) as the image of \(\pi_{\mathcal{V}}\circ\iota_{\mathcal{F}}\). The corresponding morphisms for \(\Gamma^{\prime}\) are denoted \(\iota^{\prime}_{\mathcal{F}}\), \(\pi^{\prime}_{\mathcal{V}}\), \(I^{\prime}\) and \(P^{\prime}\). **Proposition 5.15**.: _Let \(\Gamma^{\prime}\) be obtained from a ciliated ribbon graph \(\Gamma\) by contracting an edge \(\alpha\) incident at \(v,w\) towards \(v\). Then for all \(\{v,w\}\subset\mathcal{V}\subset V\), \(\emptyset\neq\mathcal{F}\subset F\) the contraction of \(\alpha\) induces_ * _a morphism_ \(u:M^{coH}_{\mathcal{F}}\to M^{\prime coH}_{\mathcal{F}}\) _with a right inverse that satisfies_ \(\iota^{\prime}_{\mathcal{F}}\circ u=c_{\alpha,v}\circ\iota_{\mathcal{F}}\)_,_ * _an isomorphism_ \(r:M^{H}_{\mathcal{V}}\to M^{\prime H}_{\mathcal{V}}\) _that satisfies_ \(r\circ\pi_{\mathcal{V}}=\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v}\)_,_ * _an isomorphism_ \(\phi_{inv}:M_{inv}\to M^{\prime}_{inv}\) _with_ \(I=r^{-1}\circ I^{\prime}\circ\phi_{inv}\)_._ Proof.: Using equation (34) together with the universal property of the equaliser \(\iota^{\prime}_{\mathcal{F}}\) yields a unique morphism \(u:M^{coH}_{\mathcal{F}}\to M^{\prime coH}_{\mathcal{F}}\) with \(\iota^{\prime}_{\mathcal{F}}\circ u=c_{\alpha,v}\circ\iota_{\mathcal{F}}\). Equation (35) and the equaliser \(\iota_{\mathcal{F}}\) yield a unique morphism \(u^{-1}:M^{\prime coH}_{\mathcal{F}}\to M^{coH}_{\mathcal{F}}\) with \(\iota_{\mathcal{F}}\circ u^{-1}=\eta_{\alpha}\circ\iota^{\prime}_{\mathcal{F}}\).
To show that \(u^{-1}\) is a right inverse of \(u\) note that \(\iota^{\prime}_{\mathcal{F}}\circ u\circ u^{-1}=c_{\alpha,v}\circ\iota_{ \mathcal{F}}\circ u^{-1}=c_{\alpha,v}\circ\eta_{\alpha}\circ\iota^{\prime}_{ \mathcal{F}}=\iota^{\prime}_{\mathcal{F}}\), since \(\eta_{\alpha}\) is right inverse to \(c_{\alpha,v}\). As \(\iota^{\prime}_{\mathcal{F}}\) is a monomorphism, this implies \(u\circ u^{-1}=1_{M^{\prime coH}_{\mathcal{F}}}\). Analogously, (37) and the universal property of the coequaliser \(\pi_{\mathcal{V}}\) define a unique morphism \(r:M^{H}_{\mathcal{V}}\to M^{\prime H}_{\mathcal{V}}\) with \(r\circ\pi_{\mathcal{V}}=\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v}\). The coequaliser \(\pi^{\prime}_{\mathcal{V}}\) together with (38) yields a unique morphism \(r^{-1}:M^{\prime H}_{\mathcal{V}}\to M^{H}_{\mathcal{V}}\) with \(r^{-1}\circ\pi^{\prime}_{\mathcal{V}}=\pi_{\mathcal{V}}\circ\eta_{\alpha}\). The morphisms \(r\) and \(r^{-1}\) are mutually inverse isomorphisms, since \(\pi^{\prime}_{\mathcal{V}}\), \(\pi_{\mathcal{V}}\) are epimorphisms with \[r\circ r^{-1}\circ\pi^{\prime}_{\mathcal{V}} =r\circ\pi_{\mathcal{V}}\circ\eta_{\alpha}=\pi^{\prime}_{\mathcal{ V}}\circ c_{\alpha,v}\circ\eta_{\alpha}=\pi^{\prime}_{\mathcal{V}},\] \[r^{-1}\circ r\circ\pi_{\mathcal{V}} =r^{-1}\circ\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v}=\pi_{ \mathcal{V}}\circ\eta_{\alpha}\circ c_{\alpha,v}\stackrel{{\eqref{eq:M }}}{{=}}\pi_{\mathcal{V}}.\] Hence, we constructed commuting diagrams To construct the isomorphism \(\phi_{inv}\), we set \(j:=r^{-1}\circ I^{\prime}:M^{\prime}_{inv}\to M^{H}_{\mathcal{V}}\) and \(q:=P^{\prime}\circ u:M^{coH}_{\mathcal{F}}\to M^{\prime}_{inv}\). As \(r^{-1}\) is an isomorphism and \(I^{\prime}\) a monomorphism, the morphism \(j\) is a monomorphism. The composite \(j\circ q\) satisfies \[j\circ q=r^{-1}\circ I^{\prime}\circ P^{\prime}\circ u=r^{-1}\circ\pi^{\prime} _{\mathcal{V}}\circ\iota^{\prime}_{\mathcal{F}}\circ u=\pi_{\mathcal{V}}\circ \eta_{\alpha}\circ\iota^{\prime}_{\mathcal{F}}\circ u=\pi_{\mathcal{V}}\circ \eta_{\alpha}\circ c_{\alpha,v}\circ\iota_{\mathcal{F}}\stackrel{{ \eqref{eq:M}}}{{=}}\pi_{\mathcal{V}}\circ\iota_{\mathcal{F}}.\] The universal property of the image \(M_{inv}\) then yields a unique morphism \(\phi_{inv}:M_{inv}\to M^{\prime}_{inv}\) with \(I=j\circ\phi_{inv}=r^{-1}\circ I^{\prime}\circ\phi_{inv}\). To construct its inverse we set \(j^{\prime}:=r\circ I:M_{inv}\to M^{\prime H}_{\mathcal{V}}\) and \(q^{\prime}:=P\circ u^{-1}:M^{\prime coH}_{\mathcal{F}}\to M_{inv}\). As \(r\) is an isomorphism and \(I\) a monomorphism, \(j^{\prime}\) is a monomorphism, and we have \[j^{\prime}\circ q^{\prime}=r\circ I\circ P\circ u^{-1}=r\circ\pi_{\mathcal{V}} \circ\iota_{\mathcal{F}}\circ u^{-1}=\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v} \circ\iota_{\mathcal{F}}\circ u^{-1}=\pi^{\prime}_{\mathcal{V}}\circ c_{\alpha,v} \circ\eta_{\alpha}\circ\iota^{\prime}_{\mathcal{F}}=\pi^{\prime}_{\mathcal{V}} \circ\iota^{\prime}_{\mathcal{F}},\] where we used that \(\eta_{\alpha}\) is right inverse to \(c_{\alpha,v}\) in the last step. 
By the universal property of the image \(M^{\prime}_{inv}\) there is a unique morphism \(\phi_{inv}^{-1}:M^{\prime}_{inv}\to M_{inv}\) with \(I^{\prime}=j^{\prime}\circ\phi_{inv}^{-1}=r\circ I\circ\phi_{inv}^{-1}\) and \[I\circ\phi_{inv}^{-1}\circ\phi_{inv} =r^{-1}\circ r\circ I\circ\phi_{inv}^{-1}\circ\phi_{inv}=r^{-1} \circ I^{\prime}\circ\phi_{inv}=I,\] \[I^{\prime}\circ\phi_{inv} \circ\phi_{inv}^{-1} =r\circ r^{-1}\circ I^{\prime}\circ\phi_{inv}\circ\phi_{inv}^{-1}=r \circ I\circ\phi_{inv}^{-1}=I^{\prime}.\] As \(I,I^{\prime}\) are monomorphisms, it follows that \(\phi_{inv}\) and \(\phi_{inv}^{-1}\) are mutually inverse isomorphisms. **Corollary 5.16**.: _Edge contractions induce isomorphisms between protected objects._ ### Deleting isolated loops We now consider the last graph transformation from Definition 3.4, the deletion of isolated loops. The morphism associated to the deletion of an isolated loop \(\alpha\) applies the counit to the corresponding copy of the Hopf monoid \(H\). Just as edge contractions, this is in general not an isomorphism in \(\mathcal{C}\). The morphism \(\eta_{\alpha}\) that creates a copy of \(H\) for \(\alpha\) by applying the unit is a right inverse and corresponds to inserting a loop. **Definition 5.17**.: _The morphism induced by_ **deleting an isolated loop**_\(\alpha\) is \(\epsilon_{\alpha}:H^{\otimes E}\to H^{\otimes E\setminus\{\alpha\}}\)._ As for edge contractions we investigate how these morphisms interact with the coinvariants for the \(H^{\otimes\mathcal{F}}\)-comodule structure \(\delta_{\mathcal{F}}\) and the \(H^{\otimes\mathcal{V}}\)-module structure \(\rhd_{\mathcal{V}}\) from Definition 4.7 for subsets \(\emptyset\neq\mathcal{F}\subset F\) and \(\emptyset\neq\mathcal{V}\subset V\). We find that loop deletions send the invariants for \(\rhd_{\mathcal{V}}\) to invariants for the corresponding vertex set of the graph with the loop removed. The same holds for coinvariants of \(\delta_{\mathcal{F}}\), as long as the two faces incident to the loop are contained in \(\mathcal{F}\). Analogous statements hold for the right inverse \(\eta_{\alpha}\), and on the coinvariants \(\eta_{\alpha}\) is also a left inverse. This is a consequence of the following technical lemma. **Lemma 5.18**.: _Let \(\Gamma^{+}\) be obtained from a ciliated ribbon graph \(\Gamma\) by removing an isolated loop \(\alpha\) with adjacent faces \(f_{1},f_{2}\) at a vertex \(v\). Then for all subsets \(\emptyset\neq\mathcal{V}\subset V\) and \(\{f_{1},f_{2}\}\subset\mathcal{F}\subset F\)_ \[\pi_{\mathcal{V}}^{+}\circ\epsilon_{\alpha}\circ\rhd_{\mathcal{V}} =\pi_{\mathcal{V}}^{+}\circ(\epsilon^{\otimes|\mathcal{V}|}\otimes \epsilon_{\alpha}) \tag{44}\] \[\delta_{\mathcal{F}}^{+}\circ\epsilon_{\alpha}\circ\iota_{ \mathcal{F}} =(\eta^{\otimes|\mathcal{F}|-1}\otimes\epsilon_{\alpha})\circ\iota_{ \mathcal{F}}\] (45) \[\eta_{\alpha}\circ\epsilon_{\alpha}\circ\iota_{\mathcal{F}} =\iota_{\mathcal{F}}\] (46) \[\pi_{\mathcal{V}}\circ\eta_{\alpha}\circ\rhd_{\mathcal{V}}^{+} =\pi_{\mathcal{V}}\circ(\epsilon^{\otimes|\mathcal{V}|}\otimes \eta_{\alpha})\] (47) \[\delta_{\mathcal{F}}\circ\eta_{\alpha}\circ\iota_{\mathcal{F}}^{+} =(\eta^{\otimes|\mathcal{F}|}\otimes\eta_{\alpha})\circ\iota_{ \mathcal{F}}^{+}. \tag{48}\] Proof.: 1. We first prove some auxiliary identities for the interaction of the morphisms \(\epsilon_{\alpha}\) and \(\eta_{\alpha}\) with the module and comodules structures at the vertices and faces. 
1.(a) As \(\eta_{\alpha}\) and \(\epsilon_{\alpha}\) affect only the copy of \(H\) for \(\alpha\), we have for any vertex \(z\neq v\) and any ciliated face \(f\) that does not contain \(\alpha\) \[\epsilon_{\alpha}\circ\rhd_{z}=\rhd_{z}^{+}\circ(1_{H}\otimes \epsilon_{\alpha}) \eta_{\alpha}\circ\rhd_{z}^{+}=\rhd_{z}\circ(1_{H}\otimes\eta_{ \alpha}) \tag{49}\] \[(1_{H}\otimes\epsilon_{\alpha})\circ\delta_{f}=\delta_{f}^{+} \circ\epsilon_{\alpha} (1_{H}\otimes\eta_{\alpha})\circ\delta_{f}^{+}=\delta_{f}\circ \eta_{\alpha}. \tag{50}\] 1.(b) For the \(H\)-module structure at the vertex \(v\), we show that \[\pi_{v}^{+}\circ\epsilon_{\alpha}\circ\rhd_{v}=\pi_{v}^{+}\circ( \epsilon\otimes\epsilon_{\alpha}) \tag{51}\] \[\pi_{v}\circ\eta_{\alpha}\circ\rhd_{v}^{+}=\pi_{v}\circ(\epsilon \otimes\eta_{\alpha}). \tag{52}\] As reversing edge orientations commutes with \(\epsilon_{\alpha},\eta_{\alpha}\) and \(\rhd_{v}\), we can assume that all edges \(\beta\neq\alpha\) at \(v\) are incoming and that \(s(\alpha)\) is directly before \(t(\alpha)\) with respect to the cyclic ordering at \(v\): For this graph we compute \[\pi_{v}^{+}\circ\epsilon_{\alpha}\circ\rhd_{v}(h\otimes\alpha \otimes b\otimes c\otimes d)=\pi_{v}^{+}\circ\epsilon_{\alpha}\left(h_{(4)} \alpha S(h_{(3)})\otimes h_{(5)}b\otimes h_{(1)}c\otimes h_{(2)}d\right)\] \[=\pi_{v}^{+}\left(\epsilon(\alpha)\otimes h_{(3)}b\otimes h_{(1) }c\otimes h_{(2)}d\right)=\pi_{v}^{+}\circ\left(\epsilon\otimes\rhd_{v}^{+} \right)(\alpha\otimes h\otimes b\otimes c\otimes d)\] \[=\pi_{v}^{+}\circ\left(\epsilon\otimes\epsilon\otimes 1_{H^{\otimes 3}} \right)(h\otimes\alpha\otimes b\otimes c\otimes d)=\pi_{v}^{+}\circ\left( \epsilon\otimes\epsilon_{\alpha}\right)(h\otimes\alpha\otimes b\otimes c \otimes d),\] \[\pi_{v}\circ\eta_{\alpha}\circ\rhd_{v}^{+}(h\otimes b\otimes c \otimes d)=\pi_{v}\circ\eta_{\alpha}\left(h_{(3)}b\otimes h_{(1)}c\otimes h_{( 2)}d\right)\] \[=\pi_{v}\left(h_{(4)}S(h_{(3)})\otimes h_{(5)}b\otimes h_{(1)}c \otimes h_{(2)}d\right)=\pi_{v}\circ\rhd_{v}(h\otimes 1\otimes b\otimes c \otimes d)\] \[=\pi_{v}\circ\left(\epsilon\otimes 1_{H^{\otimes 4}}\right)(h \otimes 1\otimes b\otimes c\otimes d)=\pi_{v}\circ\left(\epsilon\otimes\eta_{ \alpha}\right)(h\otimes b\otimes c\otimes d),\] which proves (51) and (52). The computation for graphs with a different number of edge ends at \(v\) are analogous. The claim for the case where the cilium is between the edge ends of \(\alpha\) follows, because the invariants of the \(H\)-module structure at \(v\) do not depend on the choice of the cilium by Lemma 5.4. It can also be verified directly by analogous computations. 1.(c) We consider the \(H\)-comodule structures at the faces \(f_{1},f_{2}\). Under the assumption that the starting end of \(\alpha\) comes directly before its target end with respect to the cyclic ordering at \(v\), one of these faces coincides with \(\alpha\), and we assume it is \(f_{1}=\alpha\). We then have \[\delta_{f_{1}}\circ\eta_{\alpha} =\eta\otimes\eta_{\alpha} \tag{53}\] \[\delta_{f_{2}}\circ\eta_{\alpha} =(1_{H}\otimes\eta_{\alpha})\circ\delta_{f_{2}}^{+}\qquad\delta_ {f_{2}}\circ\eta_{\alpha}\circ\iota_{f_{2}}^{+}=(\eta\otimes\eta_{\alpha}) \circ\iota_{f_{2}}^{+}. \tag{54}\] Equation (53) is obvious, and to prove (54), we can assume that \(\Gamma\) is locally given by \[\tikzfig{height=1.5} \tag{55}\] as edge reversals commute with the module and comodule structures at the vertices and faces and with the morphisms \(\eta_{\alpha}\) and \(\epsilon_{\alpha}\). 
We then compute \[\delta_{f_{2}}\circ\eta_{\alpha}(b\otimes c)=\delta_{f_{2}}(1\otimes b\otimes c)=b_{(1)}c_{(1)}\otimes 1\otimes b_{(2)}\otimes c_{(2)}=(1_{H}\otimes\eta_{\alpha})\circ\delta_{f_{2}}^{+}(b\otimes c).\] The computations for graphs with different numbers of edges in \(f_{2}\) are analogous, and with the identity \(\delta_{f_{2}}^{+}\circ\iota_{f_{2}}^{+}=(\eta\otimes 1_{H^{\otimes(E-1)}})\circ\iota_{f_{2}}^{+}\) we obtain the second identity in (54). 2. We prove the identities (44) to (48). To show (46) it is sufficient to consider the graph (55) with \[\delta_{f_{1}}(\alpha\otimes b\otimes c)=\alpha_{(1)}\otimes\alpha_{(2)}\otimes b\otimes c\] \[\delta_{f_{2}}(\alpha\otimes b\otimes c)=b_{(1)}S(\alpha_{(2)})c_{(1)}\otimes\alpha_{(1)}\otimes b_{(2)}\otimes c_{(2)}.\] As \(f_{1},f_{2}\in\mathcal{F}\), this yields \[\iota_{\mathcal{F}}=((\epsilon\circ\eta)\otimes 1_{H^{\otimes E}})\circ\iota_{\mathcal{F}}=(\epsilon\otimes 1_{H^{\otimes E}})\circ\delta_{f_{2}}\circ\iota_{\mathcal{F}}\stackrel{{(*)}}{{=}}\eta_{\alpha}\circ\epsilon_{\alpha}\circ\iota_{\mathcal{F}},\] where we apply in \((*)\) the coinvariance under \(\delta_{f_{1}}\). Identity (44) follows inductively from the identity \(\pi_{\mathcal{V}}^{+}\circ\epsilon_{\alpha}\circ\rhd_{z}=\pi_{\mathcal{V}}^{+}\circ(\epsilon\otimes\epsilon_{\alpha})\) for all vertices \(z\in V\), which is obtained for \(z\neq v\) by post-composing the first identity in (49) with \(\pi_{\mathcal{V}}^{+}=\chi_{z,\mathcal{V}}^{+}\circ\pi_{z}^{+}\) and for \(z=v\) by post-composing (51) with \(\chi_{v,\mathcal{V}}\). Identity (45) follows from (46) and the identities in (50), (54), which yield for all \(f\in\mathcal{F}\setminus\{f_{1}\}\) \[\delta_{f}^{+}\circ\epsilon_{\alpha}\circ\iota_{\mathcal{F}}=(1_{H}\otimes(\epsilon_{\alpha}\circ\eta_{\alpha}))\circ\delta_{f}^{+}\circ\epsilon_{\alpha}\circ\iota_{\mathcal{F}}=(1_{H}\otimes\epsilon_{\alpha})\circ\delta_{f}\circ\eta_{\alpha}\circ\epsilon_{\alpha}\circ\iota_{\mathcal{F}}\stackrel{{(46)}}{{=}}(1_{H}\otimes\epsilon_{\alpha})\circ\delta_{f}\circ\iota_{\mathcal{F}}=(\eta\otimes\epsilon_{\alpha})\circ\iota_{\mathcal{F}}.\]

### Protected objects

Combining the results from Sections 5.1 to 5.3 one has that ciliated ribbon graphs related by moving cilia, edge reversals, edge contractions and deletions of isolated loops have isomorphic protected objects. As these are sufficient to relate any connected ribbon graph to the standard graph from (15), the protected object of a ciliated ribbon graph is determined up to isomorphisms by the genera of the connected components of the associated surface. **Theorem 5.21**.: _The isomorphism class of the protected object for an involutive Hopf monoid \(H\) and a ciliated ribbon graph \(\Gamma\) depends only on \(H\) and the homeomorphism class of the surface for \(\Gamma\)._ Proof.: By Lemma 5.4 the invariants, coinvariants and hence the protected object of a ciliated ribbon graph are independent of the choice of the cilia. By Proposition 3.5 every ribbon graph can be transformed into a disjoint union of standard graphs by edge reversals, edge contractions, edge slides and removing isolated loops. In each step the cilia can be arranged in such a way that no edge ends slide over cilia. By Corollaries 5.3, 5.8, 5.16 and 5.20 these graph transformations induce isomorphisms between the protected objects. As the protected object is a topological invariant, one can use any embedded graph whose complement is a disjoint union of discs to compute the protected object. For a sphere, the simplest such graph consists of a single isolated vertex.
This is associated with the trivial \(H\)-(co)module structure on \(e\) given by the (co)unit of \(H\) and yields the tensor unit as protected object. **Example 5.22**.: _The protected object for a sphere \(S^{2}\) is the tensor unit of \(H\): \(\mathcal{M}_{inv}=e\)._ We now focus on oriented surfaces \(\Sigma\) of genus \(g\geq 1\) and use the standard graphs (15) to determine their protected objects. The associated module and comodule structures are given in Example 4.6 and form a Yetter-Drinfeld module. **Example 5.23**.: _For a group \(H\) as a Hopf monoid in \(\mathcal{C}=\mathrm{Set}\) the coinvariants are the set of group homomorphisms from \(\pi_{1}(\Sigma)\) to \(H\)_ \[M^{coH}=\{(a_{1},b_{1},\ldots,a_{g},b_{g})\in H^{\times 2g}:[b_{g}^{-1},a_{g}] \cdot\ldots\cdot[b_{1}^{-1},a_{1}]=1\}\cong\mathrm{Hom}(\pi_{1}(\Sigma),H). \tag{56}\] _The invariants are the set of orbits for the conjugation action \(\rhd\) from (22) on \(H^{\times 2g}\), and the protected object is the representation variety or moduli space of flat \(H\)-bundles \(M_{inv}\cong\mathrm{Hom}(\pi_{1}(\Sigma),H)/H\)._ **Example 5.24**.: _For a topological group \(H\) as a Hopf monoid in \(\mathcal{C}=\mathrm{Top}\) the protected object is \(M_{inv}\cong\mathrm{Hom}(\pi_{1}(\Sigma),H)/H\) as a set by Example 2.14. It is equipped with the quotient topology induced by the canonical surjection \(\pi:\mathrm{Hom}(\pi_{1}(\Sigma),H)\to\mathrm{Hom}(\pi_{1}(\Sigma),H)/H\) and the compact-open topology on \(\mathrm{Hom}_{\mathrm{Top}}(\pi_{1}(\Sigma),H)\) for the discrete topology on \(\pi_{1}(\Sigma)\)._ **Example 5.25**.: _For a Hopf monoid \(H\) in \(\mathcal{C}=G-\mathrm{Set}\,=\mathrm{Set}^{\mathrm{B}G}\) the coinvariants for the comodule structure \(\delta\) from Example 4.6 are the set (56) with the diagonal \(G\)-action. The invariants for the module structure \(\rhd\) are the associated orbit space. By Example 2.14, 2. the protected object is the representation variety \(M_{inv}\cong\mathrm{Hom}(\pi_{1}(\Sigma),H)/H\) with the induced \(G\)-set structure._ **Example 5.26**.: _Let \(k\) be a commutative ring, \(\mathcal{C}=k\)-Mod and \(G\) a finite group._ _For the group algebra \(H=k[G]\) as a Hopf monoid in \(\mathcal{C}\) and the standard graph in (15) one has \(M=k[G]^{2g}\cong k[G^{\times 2g}]\). The Yetter-Drinfeld module structure of \(M\) is given by (22) on a basis. The coinvariants and invariants are_ \[M^{coh} =\langle\{(a_{1},b_{1},\ldots,a_{g},b_{g})\mid[b_{g}^{-1},a_{g}] \cdots[b_{1}^{-1},a_{1}]=1\}\rangle_{k}\cong\langle\mathrm{Hom}(\pi_{1}( \Sigma),G)\rangle_{k}\] \[M^{H} =k[G^{\times 2g}]/\langle\{(a_{1},\ldots,b_{g})-(ha_{1}h^{-1}, \ldots,hb_{g}h^{-1})\mid a_{1},b_{1},\ldots,a_{g},b_{g},h\in G\}\rangle,\] _and the protected object is the free \(k\)-module generated by the representation variety \(\mathrm{Hom}(\pi_{1}(\Sigma),G)/G\)_ \[M_{inv}=\langle\mathrm{Hom}(\pi_{1}(\Sigma),G)/G\rangle_{k}. 
\tag{57}\] _For the dual Hopf monoid \(H=k[G]^{*}=\mathrm{Map}(G,k)\) of maps from \(G\) to \(k\) with Hopf monoid structure_ \[\delta_{g}\cdot\delta_{h}=\delta_{g}(h)\delta_{g},\quad 1=\sum_{g\in G} \delta_{g},\quad\Delta(\delta_{g})=\sum_{x,y\in G,xy=g}\delta_{x}\otimes\delta _{y},\quad\epsilon(\delta_{g})=\delta_{g}(e),\quad S(\delta_{g})=\delta_{g^{- 1}} \tag{58}\] _one has \(M=\mathrm{Map}(G,k)^{\otimes 2g}\cong\mathrm{Map}(G^{\times 2g},k)\) with the Yetter-Drinfeld module structure_ \[\delta_{h}\rhd(\delta_{a_{1}}\otimes\delta_{b_{1}}\otimes\ldots \otimes\delta_{a_{g}}\otimes\delta_{b_{g}})=\delta_{h}([b_{g}^{-1},a_{g}] \cdots[b_{1}^{-1},a_{1}])\,\delta_{a_{1}}\otimes\delta_{b_{1}}\otimes\ldots \otimes\delta_{a_{g}}\otimes\delta_{b_{g}} \tag{59}\] \[\delta(\delta_{a_{1}}\otimes\delta_{b_{1}}\otimes\ldots\otimes \delta_{a_{g}}\otimes\delta_{b_{g}})=\Sigma_{h\in G}\delta_{h^{-1}}\otimes \delta_{h{a_{1}}^{-1}}\otimes\delta_{h{b_{1}}^{-1}}\otimes\ldots\otimes \delta_{h{a_{g}}^{h^{-1}}}\otimes\delta_{h{b_{g}}^{h^{-1}}}\] _computed from (21) and (58). It follows that the coinvariants and invariants are given by_ \[M^{coH} =\mathrm{Map}(G^{\times 2g},k)^{G} \tag{60}\] \[M^{H} =\{f:G^{\times 2g}\to k\mid\mathrm{supp}(f)\subseteq\{(a_{1}, \ldots,b_{g})\mid[b_{g}^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]=1\}\},\] _and the protected object is the set of functions_ \[M_{inv}=\mathrm{Map}(\mathrm{Hom}(\pi_{1}(\Sigma),G)/G,k). \tag{61}\] Example 5.26 shows that the protected object in this article indeed generalises the protected space of Kitaev's quantum double models. If one sets \(k=\mathbb{C}\) in Example 5.26 one obtains precisely the protected space for Kitaev's quantum double model for the group algebra \(\mathbb{C}[G]\) and its dual, see [Ki, Sec. 4]. However, Example 5.26 also yields an analogous result for any commutative ring \(k\), for which the usual quantum double models are not defined. ## 6 Protected objects in SSet In this section, we investigate protected objects for group objects in the category SSet. We denote by \(\Delta\) the simplex category with finite ordinals \([n]=\{0,1,\ldots,n\}\) for \(n\in\mathbb{N}_{0}\) as objects and weakly monotonic maps \(\alpha:[m]\to[n]\) as morphisms from \([m]\) to \([n]\). Objects in \(\mathrm{SSet}=\mathrm{Set}^{\Delta^{op}}\) are simplicial sets, functors \(X:\Delta^{op}\to\mathrm{Set}\) that are specified by sets \(X_{n}\), face maps \(d_{i}:X_{n+1}\to X_{n}\) and degeneracies \(s_{i}:X_{n}\to X_{n+1}\) for \(n\in\mathbb{N}_{0}\) and \(i\in\{0,\ldots,n\}\) that satisfy the simplicial relations \[d_{j}\circ d_{i} =d_{i}\circ d_{j+1}\text{ if }i\leq j, s_{i}\circ s_{j}=s_{j+1}\circ s_{i}\text{ if }i\leq j, \tag{62}\] \[d_{i}\circ s_{j} =s_{j-1}\circ d_{i}\text{ if }i<j, d_{i}\circ s_{j}=\text{id if }i\in\{j,j+1\}, d_{i}\circ s_{j}=s_{j}\circ d_{i-1}\text{ if }i>j+1.\] Morphisms in SSet are simplicial maps, natural transformations \(f:X\to Y\) specified by component maps \(f_{n}:X_{n}\to Y_{n}\) satisfying \(f_{n-1}\circ d_{i}=d_{i}\circ f_{n}\) and \(f_{n+1}\circ s_{i}=s_{i}\circ f_{n}\) for \(n\in\mathbb{N}_{0}\) and admissible \(i\). The category SSet is cartesian monoidal with the objectwise product induced by the product in Set. Unpacking the definition of a group object in a cartesian monoidal category from Example 2.3 yields **Definition 6.1**.: 1. 
_A group object in_ SSet _is a_ **simplicial group**_: a simplicial set_ \(H:\Delta^{op}\to\mathrm{Set}\) _with group structures on the sets_ \(H_{n}\) _such that all face maps and degeneracies are group homomorphisms._ 2. _A morphism of group objects in_ SSet _is a_ **morphism of simplicial groups**_: a simplicial map_ \(f:H\to H^{\prime}\) _such that all maps_ \(f_{n}:H_{n}\to H_{n}^{\prime}\) _are group homomorphisms._ For examples of simplicial groups, see Section 7.2, in particular Corollary 7.9 and Example 7.10. Modules, comodules and Yetter-Drinfeld modules over simplicial groups are given by Example 2.9. **Lemma 6.2**.: _Let \(H:\Delta^{op}\to\operatorname{Set}\) be a simplicial group._ 1. \(A\) **module** _over_ \(H\) _is a simplicial set_ \(M:\Delta^{op}\to\operatorname{Set}\) _together with a collection of_ \(H_{n}\)_-actions_ \(\rhd_{n}:H_{n}\times M_{n}\to M_{n}\) _that define a simplicial map_ \(\rhd:H\times M\to M\)_._ 2. \(A\) **comodule** _over_ \(H\) _is a simplicial set_ \(M:\Delta^{op}\to\operatorname{Set}\) _with a simplicial map_ \(F:M\to H\)_._ 3. _If_ \((M,\rhd)\) _is a module and_ \((M,F)\) _a comodule over_ \(H\)_, then_ \((M,\rhd,F)\) _is a_ **Yetter-Drinfeld module** _over_ \(H\) _iff_ \(F_{n}(g\rhd_{n}m)=g\cdot F_{n}(m)\cdot g^{-1}\) _for all_ \(m\in M_{n}\)_,_ \(g\in H_{n}\) _and_ \(n\in\mathbb{N}_{0}\)_._ As (co)limits in SSet are objectwise, see for instance Riehl [R, Prop. 3.3.9] or Leinster [L, Th. 6.2.5], (co)invariants of a (co)module over a group object in SSet are obtained from (co)equalisers in Set. It is also straightforward to compute the biinvariants of a Yetter-Drinfeld module. **Proposition 6.3**.: _Let \(H\) be a simplicial group._ 1. _The coinvariants_ \(\mathcal{M}^{coH}\) _of a_ \(H\)_-comodule_ \(M\) _defined by a simplicial map_ \(F:M\to H\) _are given by the sets_ \(M_{n}^{coH}=\{m\in M_{n}\mid F_{n}(m)=e\}\) _and the induced face maps and degeneracies._ 2. _The invariants_ \(M^{H}\) _of a_ \(H\)_-module_ \((M,\rhd)\) _are given by the sets_ \(M_{n}^{H}=\{H_{n}\rhd_{n}m\mid m\in M_{n}\}\) _and the induced face maps and degeneracies._ 3. _The biinvariants_ \(M_{inv}\) _of a Yetter-Drinfeld module_ \((M,\rhd,F)\) _over_ \(H\) _are given by the sets_ \((M_{inv})_{n}=\{H_{n}\rhd_{n}m\mid m\in M_{n},F_{n}(m)=e\}\) _and the induced face maps and degeneracies._ Proof.: 1. The coinvariant object of a \(H\)-comodule \((M,F)\) is the equaliser of the simplicial maps \(\delta=F\times\operatorname{id}:M\to H\times M\) and \(\eta\times\operatorname{id}:M\to H\times M\). As limits in SSet are objectwise, this is the simplicial set \(M^{coH}:\Delta^{op}\to\operatorname{Set}\) that assigns to an ordinal \([n]\) the equaliser in Set of the maps \(F_{n}\times\operatorname{id}:M_{n}\to H_{n}\times M_{n}\) and \(\eta_{n}\times\operatorname{id}:M_{n}\to H_{n}\times M_{n}\), which is \(M_{n}^{coH}=\{m\in M_{n}\mid F_{n}(m)=e\}\). The face maps and degeneracies are induced by the ones of \(M\), and the simplicial map \(\iota:M^{coH}\to M\) is given by the maps \(\iota_{n}:M_{n}^{coH}\to M_{n}\), \(m\mapsto m\). 2. Analogously to 1., the invariant object of \((M,\rhd)\) is the simplicial set \(M^{H}:\Delta^{op}\to\operatorname{Set}\) that assigns to the ordinal \([n]\) the coequaliser in Set of the maps \(\rhd_{n}:H_{n}\times M_{n}\to M_{n}\) and \(\epsilon_{n}\times\operatorname{id}:H_{n}\times M_{n}\to M_{n}\). This is the set \(M_{n}^{H}=M_{n}/\sim_{n}\) with \(m\sim_{n}m^{\prime}\) iff there is a \(g\in H_{n}\) with \(m^{\prime}=g\rhd_{n}m\). 
The simplicial map \(\pi:M\to M^{H}\) is given by the maps \(\pi_{n}:M_{n}\to M_{n}^{H}\), \(m\mapsto H_{n}\rhd_{n}m\). 3. The simplicial maps \(I:M_{inv}\to M^{H}\) and \(P:M^{coH}\to M_{inv}\) with \(\pi\circ\iota=I\circ P\) that characterise \(M_{inv}\) with \((M_{inv})_{n}=\{H_{n}\rhd_{n}m\mid m\in M_{n}^{coH}\}\) as the image of \(\pi\circ\iota\) are given by \[I_{n}:(M_{inv})_{n}\to M_{n}^{H},H_{n}\rhd_{n}m\mapsto H_{n}\rhd_{n}m,\qquad P _{n}:M_{n}^{coH}\to(M_{inv})_{n},m\mapsto H_{n}\rhd_{n}m.\] As monomorphisms and epimorphisms in SSet are those simplicial maps whose component morphisms are injective and surjective, see for instance [L, Ex. 6.2.20], it follows directly that \(I\) is a monomorphism and \(P\) an epimorphism in SSet. Every pair \((J,Q)\) of a monomorphism \(J:X\to M^{H}\) and morphism \(Q:M^{coH}\to X\) in SSet with \(J\circ Q=\pi\circ\iota\) defines injective maps \(J_{n}:X_{n}\to M_{n}^{H}\) and thus identifies \(Q(M_{n}^{coH})\) with a subset of \(M_{n}^{H}\). As \(J_{n}\) is a monomorphism and due to the identity \(J_{n}\circ Q_{n}(g\rhd_{n}m)=\pi_{n}\circ\iota_{n}(g\rhd_{n}m)=\pi_{n}\circ\iota _{n}(m)=J_{n}\circ Q_{n}(m)\), we have \(Q_{n}(g\rhd_{n}m)=Q_{n}(m)\) for all \(m\in M_{n}^{coH}\) and \(g\in H_{n}\). The maps \(V_{n}:(M_{inv})_{n}\to X_{n}\), \(H_{n}\rhd_{n}m\mapsto Q_{n}(m)\) define a simplicial map \(V:M_{inv}\to X\) with \(I=J\circ V\). We now determine the coinvariants, invariants and the protected objects for Kitaev models on oriented surfaces \(\Sigma\) of genus \(g\geq 1\) and for a simplicial group \(H\) as a Hopf monoid in SSet. **Proposition 6.4**.: _Let \(H\) be a simplicial group and \(\Sigma\) an oriented surface of genus \(g\geq 1\). The associated protected object is the simplicial set \(X:\Delta^{op}\to\mathrm{Set}\) with \(X_{n}=\mathrm{Hom}(\pi_{1}(\Sigma),H_{n})/H_{n}\), where the quotient is with respect to conjugation by \(H_{n}\), and face maps and degeneracies given by_ \[d_{i}:X_{n}\to X_{n-1},\;[\rho]\mapsto[d_{i}\circ\rho],\qquad s_{i}:X_{n}\to X _{n+1},\;[\rho]\mapsto[s_{i}\circ\rho].\] Proof.: By Theorem 5.21 the protected object of \(\Sigma\) can be computed from the standard graph in (15). This yields a Yetter-Drinfeld module \((M,\rhd,F)\) over \(H\) given by formula (22) in Example 4.6. Hence, we have \(M_{n}=H_{n}^{\times 2g}\) for all \(n\in\mathbb{N}_{0}\) with the face maps and degeneracies of \(H\) applied to each component simultaneously. The Yetter-Drinfeld module structure is given by \[F_{n}:H_{n}^{\times 2g}\to H_{n},\;(a_{1},b_{1},\ldots,a_{g},b_{g}) \mapsto[b_{g}^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]\] \[\rhd_{n}:H_{n}\times H_{n}^{\times 2g}\to H_{n}^{\times 2g},\;(h,a_{1},b _{1},\ldots,a_{g},b_{g})\mapsto(ha_{1}h^{-1},hb_{1}h^{-1},\ldots,ha_{g}h^{-1},hb _{g}h^{-1}).\] By Proposition 6.3 the associated protected object is the simplicial set \(\mathcal{M}_{inv}\) with \[(M_{inv})_{n}=\{H_{n}\rhd_{n}(a_{1},b_{1},\ldots,a_{g},b_{g})\in H_{n}^{\times 2 g}\mid[b_{g}^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]=e\}\cong\mathrm{Hom}(\pi_{1}( \Sigma),H_{n})/H_{n}.\] Face maps and degeneracies are given by post-composing group homomorphisms \(\rho:\pi_{1}(\Sigma)\to H_{n}\) with the face maps \(d_{i}:H_{n}\to H_{n-1}\) and degeneracies \(s_{i}:H_{n}\to H_{n+1}\). ## 7 Protected objects in Cat ### Crossed modules as group objects in Cat We consider the category Cat of small categories and functors between them as a cartesian monoidal category with terminal object \(\{\cdot\}\). 
For a finite product \(\mathcal{C}_{1}\times\ldots\times\mathcal{C}_{n}\) of small categories, we denote by \(\pi_{i}:\mathcal{C}_{1}\times\ldots\times\mathcal{C}_{n}\to\mathcal{C}_{i}\) the associated projection functors. For a small category \(\mathcal{C}\) we denote by \(\mathrm{Ob}(\mathcal{C})\) the set of objects and by \(\mathcal{C}^{(1)}=\bigcup_{X,Y\in\mathrm{Ob}(\mathcal{C})}\mathrm{Hom}_{\mathcal{C}}(X,Y)\) the set of all morphisms in \(\mathcal{C}\).

**Definition 7.1**.: 1. \(A\) **group object** _in Cat is a small category_ \(H\) _together with functors_ \(m:H\times H\to H\)_,_ \(\eta:\{\cdot\}\to H\) _and_ \(I:H\to H\) _such that the diagrams_ (7) _commute._ 2. \(A\) _morphism_ \(F:(H,m,\eta,I)\to(H^{\prime},m^{\prime},\eta^{\prime},I^{\prime})\) _of group objects is a functor_ \(F:H\to H^{\prime}\) _that satisfies_ (8)_._ _We denote by \(\mathcal{G}(\mathrm{Cat})\) the category of group objects and morphisms of group objects in Cat and write \(e:=\eta(\cdot)\), \(f^{-1}=I(f)\), \(g\cdot f=m(g,f)\) and likewise for multiple products._

Brown and Spencer [BS] showed that group objects in Cat correspond to crossed modules. We summarise this correspondence for the convenience of the reader.

**Definition 7.2**.: \(A\) **crossed module** _is a quadruple_ \((B,A,\blacktriangleright,\partial)\) _of groups_ \(A\) _and_ \(B\)_, a group homomorphism_ \(\partial:A\to B\) _and a group action_ \(\blacktriangleright\colon B\times A\to A\) _by automorphisms that satisfy the Peiffer identities_ \[\partial(b\blacktriangleright a)=b\partial(a)b^{-1},\qquad\partial(a)\blacktriangleright a^{\prime}=aa^{\prime}a^{-1}\qquad\forall a,a^{\prime}\in A,b\in B. \tag{63}\] \(A\) _morphism of crossed modules_ \(f=(f_{1},f_{2}):(B,A,\blacktriangleright,\partial)\to(B^{\prime},A^{\prime},\blacktriangleright^{\prime},\partial^{\prime})\) _is a pair of group homomorphisms_ \(f_{1}:B\to B^{\prime}\)_,_ \(f_{2}:A\to A^{\prime}\) _such that_ \[\partial^{\prime}\circ f_{2}=f_{1}\circ\partial,\qquad\blacktriangleright^{\prime}\circ(f_{1}\times f_{2})=f_{2}\circ\blacktriangleright.\] _We denote by \(\mathcal{CM}\) the category of crossed modules and morphisms between them._

**Example 7.3**.: 1. _A normal subgroup_ \(A\subset B\) _defines a crossed module with the inclusion_ \(\partial:A\to B\)_,_ \(a\mapsto a\) _and the conjugation action_ \(\blacktriangleright\colon B\times A\to A\)_,_ \(b\blacktriangleright a=bab^{-1}\)_._ 2. _Any crossed module_ \((B,A,\blacktriangleright,\partial)\) _yields a crossed module_ \((B,A/\ker\partial,\blacktriangleright^{\prime},\partial^{\prime})\) _with injective_ \(\partial^{\prime}\)_. This identifies_ \(A/\ker\partial\) _with a normal subgroup of_ \(B\) _and hence yields 1._ 3. _Any group action_ \(\blacktriangleright\colon B\times A\to A\) _by automorphisms of an abelian group_ \(A\) _yields a crossed module with_ \(\partial\equiv e_{B}\)_._ 4. _Any group_ \(A\) _defines a crossed module with_ \(B=\operatorname{Aut}(A)\)_,_ \(\blacktriangleright\colon\operatorname{Aut}(A)\times A\to A\)_,_ \(\phi\blacktriangleright a=\phi(a)\) _and_ \(\partial:A\to\operatorname{Aut}(A)\)_,_ \(g\mapsto C_{g}\)_, where_ \(C_{g}(x)=gxg^{-1}\)_._ 5.
_Every extension of a group_ \(G\) _by a group_ \(X\)_, i.e. a short exact sequence of groups_ \(1\to X\to E\to G\to 1\)_, defines a crossed module as in 1., with the normal subgroup_ \(X\subset E\)_, the inclusion_ \(\partial:X\to E\) _and the conjugation action._

The correspondence of Brown and Spencer [BS] takes the following explicit form.

**Theorem 7.4**.: _The categories \(\mathcal{G}(\mathrm{Cat})\) and \(\mathcal{CM}\) are equivalent. A crossed module \((B,A,\blacktriangleright,\partial)\) corresponds to the group object \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) in \(\mathrm{Cat}\) with object set \(\mathrm{Ob}(H)=B\), morphism set \(H^{(1)}=A\rtimes B\), source and target maps \(s(a,b)=b\) and \(t(a,b)=\partial(a)b\), identity morphisms \(1_{b}=(e,b)\), composition \((a^{\prime},\partial(a)b)\circ(a,b)=(a^{\prime}a,b)\), and the group structure of \(B\) on objects and of the semidirect product \(A\rtimes B\) on morphisms. Conversely, every group object in \(\mathrm{Cat}\) is isomorphic to one of this form._

### Equalisers and coequalisers in \(\operatorname{Cat}\)

To determine the coinvariants, invariants and the protected object for a group object in \(\operatorname{Cat}\), we require equalisers, coequalisers and images in \(\operatorname{Cat}\). It is well-known that \(\operatorname{Cat}\) is complete and cocomplete, see for instance [R, Prop. 3.5.6, Cor. 4.5.16].
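The following minimal Python sketch illustrates the correspondence of Theorem 7.4 for the crossed module \((S_{3},A_{3},\blacktriangleright,\partial)\) from Example 7.3, with \(\partial\) the inclusion and \(\blacktriangleright\) the conjugation action: it checks the Peiffer identities (63) and the compatibility of the semidirect product multiplication on morphisms with the source, target and composition maps of \(\bigtriangledown(S_{3},A_{3},\blacktriangleright,\partial)\). The permutation encoding and all helper names are illustrative choices, not notation used elsewhere in the text.

```python
from itertools import product

# The crossed module (S_3, A_3, conjugation, inclusion) of Example 7.3
# and the group object it defines: objects B = S_3, morphisms A_3 x| S_3
# with s(a,b) = b, t(a,b) = d(a)b and (a', d(a)b) o (a,b) = (a'a, b).
S3 = [(0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1), (1, 2, 0), (2, 0, 1)]
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def m(p, q):            # product in S_3: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def act(b, a):          # the action of B on A by conjugation
    return m(m(b, a), inv(b))

def d(a):               # the homomorphism d: A_3 -> S_3, here the inclusion
    return a

# Peiffer identities (63).
assert all(d(act(b, a)) == m(m(b, d(a)), inv(b)) for b in S3 for a in A3)
assert all(act(d(a1), a2) == m(m(a1, a2), inv(a1)) for a1 in A3 for a2 in A3)

# Morphisms form the semidirect product A_3 x| S_3.
def sd_mult(g, h):
    (a1, b1), (a2, b2) = g, h
    return (m(a1, act(b1, a2)), m(b1, b2))

def source(g): return g[1]
def target(g): return m(d(g[0]), g[1])
def compose(g2, g1):    # g2 o g1, assuming source(g2) == target(g1)
    return (m(g2[0], g1[0]), g1[1])

mor = list(product(A3, S3))

# The multiplication is a functor: it preserves sources and targets ...
assert all(source(sd_mult(g, h)) == m(source(g), source(h)) and
           target(sd_mult(g, h)) == m(target(g), target(h))
           for g in mor for h in mor)
# ... and satisfies the interchange law with composition.
assert all(sd_mult(compose(g2, g1), compose(h2, h1)) ==
           compose(sd_mult(g2, h2), sd_mult(g1, h1))
           for g1 in mor for h1 in mor
           for g2 in ((a, target(g1)) for a in A3)
           for h2 in ((a, target(h1)) for a in A3))
print("crossed module and group object checks passed")
```

The same finite checks can be run for any finite crossed module by replacing the two groups, the action and the homomorphism.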
The following result on equalisers is standard, see for example Schubert [Sch, Sec. 7.2]. **Lemma 7.5**.: _The equaliser of two functors \(F,K:\mathcal{C}\to\mathcal{D}\) between small categories is the subcategory \(\mathcal{E}\subset\mathcal{C}\) with_ * \(\operatorname{Ob}(\mathcal{E})=\{C\in\operatorname{Ob}(\mathcal{C})\mid F( C)=K(C)\}\)_,_ * \(\operatorname{Hom}_{\mathcal{E}}(C,C^{\prime})=\{f\in\operatorname{Hom}_{ \mathcal{C}}(C,C^{\prime})\mid F(f)=K(f)\}\)_._ To describe coequalisers in \(\operatorname{Cat}\) we use that \(\operatorname{Cat}\) is a reflective subcategory of \(\operatorname{SSet}\) with the inclusion given by the nerve functor \(N:\operatorname{Cat}\to\operatorname{SSet}\), which is full and faithful. Its left adjoint is the homotopy functor \(h:\operatorname{SSet}\to\operatorname{Cat}\), and the composite \(hN:\operatorname{Cat}\to\operatorname{Cat}\) is naturally isomorphic to the identity functor via the counit of the adjunction, see Riehl [R, Ex. 4.5.14 (vi)] or Lurie [Lu, Sec. 1.2]. As a left adjoint, \(h\) preserves colimits. This allows one to compute colimits in \(\operatorname{Cat}\) by applying the homotopy functor \(h\) to the associated colimits in \(\operatorname{SSet}\), see for instance [R, Prop. 4.5.15]. **Lemma 7.6**.: _The coequaliser of two functors \(F,K:\mathcal{C}\to\mathcal{D}\) between small categories is the functor \(h(\pi):hN(\mathcal{D})\to h(X)\), where \(\pi:N(\mathcal{D})\to X\) is the coequaliser of \(N(F),N(K)\) in \(\operatorname{SSet}\)._ To compute such coequalisers, we require an explicit description of the nerve and the homotopy functor. We summarise the details from [R, Ex. 4.5.14 (vi)] and [Lu, Sec. 1.2]. For \(n\in\mathbb{N}_{0}\) we denote by \([n]\) the ordinals in \(\Delta\) as well as the associated categories with objects \(0,1,\ldots,n\) and a single morphism from \(i\) to \(j\) if \(i\leq j\). As every weakly monotonic map \(\alpha:[m]\to[n]\) defines a functor \(\alpha:[m]\to[n]\), this defines an embedding \(\iota:\Delta\to\operatorname{Cat}\). **Definition 7.7**.: _The_ **nerve**_\(N:\operatorname{Cat}\to\operatorname{SSet}\) is the functor that sends a small category \(\mathcal{C}\) to the simplicial set \(N(\mathcal{C}):\Delta^{op}\to\operatorname{Set}\) with_ * \(N(\mathcal{C})_{n}=\operatorname{Hom}_{\operatorname{Cat}}([n],\mathcal{C})\)_,_ * \(N(\mathcal{C})(\alpha):N(\mathcal{C})_{n}\to N(\mathcal{C})_{m}\)_,_ \(F\mapsto F\circ\alpha\) _for every weakly monotonic_ \(\alpha:[m]\to[n]\)_,_ _and a functor \(F:\mathcal{C}\to\mathcal{D}\) to the simplicial map \(N(F):N(\mathcal{C})\to N(\mathcal{D})\) that post-composes with \(F\)._ By definition, \(N(\mathcal{C})_{0}=\operatorname{Ob}\mathcal{C}\) and \(N(\mathcal{C})_{n}\) is the set of sequences \((f_{1},\ldots,f_{n}):C_{0}\xrightarrow{f_{1}}\ldots\xrightarrow{f_{n}}C_{n}\) of composable morphisms in \(\mathcal{C}\) for \(n\in\mathbb{N}\). The simplicial set structure is given by the face maps \(d_{i}:N(\mathcal{C})_{n}\to N(\mathcal{C})_{n-1}\) and degeneracies \(s_{i}:N(\mathcal{C})_{n}\to N(\mathcal{C})_{n+1}\) for \(i\in\{0,...,n\}\). The face maps act on a sequence \((f_{1},\ldots,f_{n})\) by removing \(f_{1}\) and \(f_{n}\) for \(i=0\) and \(i=n\), respectively, and by replacing \((\ldots,f_{i},f_{i+1},\ldots)\) with \((\ldots,f_{i+1}\circ f_{i},\ldots)\) for \(1\leq i\leq n-1\). For \(n=1\) and \(f_{1}:C_{0}\to C_{1}\) one has \(d_{0}(f_{1})=C_{1}\) and \(d_{1}(f_{1})=C_{0}\). 
The degeneracies act on \((f_{1},\ldots,f_{n})\) by inserting the identity morphism \(1_{C_{i}}\). In particular, for \(n=0\) one has \(s_{0}(C)=1_{C}\) for every \(C\in\operatorname{Ob}\mathcal{C}\). The simplicial map \(N(F)\) for a functor \(F:\mathcal{C}\to\mathcal{D}\) applies \(F\) to all morphisms in \((f_{1},\ldots,f_{n})\). The left adjoint of the nerve \(N:\operatorname{Cat}\to\operatorname{SSet}\) is the homotopy functor \(h:\operatorname{SSet}\to\operatorname{Cat}\). It is the left Kan extension along the Yoneda embedding \(y:\Delta\to\operatorname{SSet}\) of the embedding functor \(\iota:\Delta\to\operatorname{Cat}\). Concretely, it is given as follows. **Definition 7.8**.: _The_ **homotopy functor**_\(h:\operatorname{SSet}\to\operatorname{Cat}\) sends a simplicial set \(X\) to the category \(hX\) with \(\operatorname{Ob}hX=X_{0}\), generating morphisms \(\sigma:d_{1}(\sigma)\to d_{0}(\sigma)\) for \(\sigma\in X_{1}\) and relations_ \[s_{0}(x)=1_{x}\text{ for }x\in X_{0},\qquad\qquad\qquad d_{1}(\sigma)=d_{0}( \sigma)\circ d_{2}(\sigma)\text{ for }\sigma\in X_{2}. \tag{64}\] _It sends a simplicial map \(f:X\to Y\) to the functor \(hf:hX\to hY\) given by \(f\) on the generators._ The simplicial relations imply that for elements of \(X_{2}\) that are in the image of a degeneracy map, the second relation in (64) is satisfied trivially. In this case one of the two morphisms on the right is an identity and the other coincides with the morphism on the left. Only non-degenerate elements of \(X_{2}\) give rise to non-trivial relations in \(hX\). In general, morphisms in the homotopy category of a simplicial set \(X\) are finite sequences of composable elements of \(X_{1}\). However, if the simplicial set \(X\) is an \(\infty\)-category, which is always the case if \(X=N(\mathcal{C})\) for some category \(\mathcal{C}\), every morphism in \(hX\) is represented by a single element in \(X_{1}\), see for instance [Lu, Sec. 1.2.5]. Most of the simplicial sets considered in the following are even Kan complexes, as they are nerves of groupoids. As a right adjoint, the nerve preserves limits, and as a left adjoint, the homotopy functor preserves colimits. It follows directly from its definition that the nerve also preserves coproducts, and the homotopy functor preserves finite products, see for instance Joyal [Jo, Prop. 1.3]. This implies with Examples 2.6 and 2.10 **Corollary 7.9**.: _The nerve \(N:\mathrm{Cat}\to\mathrm{SSet}\) and the homotopy functor \(h:\mathrm{SSet}\to\mathrm{Cat}\) are symmetric monoidal with respect to the cartesian monoidal category structures of \(\mathrm{Cat}\) and \(\mathrm{SSet}\). In particular:_ _1. The nerve of a crossed module is a simplicial group._ _2. The homotopy category of a simplicial group is a crossed module._ _3. The nerve of a (co)module over a crossed module is a (co)module over its nerve._ _4. 
The homotopy category of a (co)module over a simplicial group is a (co)module over its homotopy category._

Concretely, the nerve of a crossed module \((B,A,\blacktriangleright,\partial)\) is the simplicial group \(H\) with \(H_{n}=A^{\times n}\times B\) for \(n\in\mathbb{N}_{0}\) with group multiplication \[(a_{1},...,a_{n},b)\cdot(a^{\prime}_{1},...,a^{\prime}_{n},b^{\prime})=(a_{1}(b\blacktriangleright a^{\prime}_{1}),a_{2}(\partial(a_{1})b\blacktriangleright a^{\prime}_{2}),...,a_{n}(\partial(a_{n-1}\cdots a_{1})b\blacktriangleright a^{\prime}_{n}),bb^{\prime}) \tag{65}\] face maps \[d_{i}:H_{n}\to H_{n-1},\quad(a_{1},...,a_{n},b)\mapsto\begin{cases}(a_{2},...,a_{n},\partial(a_{1})b)&i=0\\ (a_{1},...,a_{i+1}a_{i},...,a_{n},b)&1\leq i\leq n-1\\ (a_{1},...,a_{n-1},b)&i=n\\ \end{cases} \tag{66}\] and degeneracies \[s_{i}:H_{n}\to H_{n+1},\quad(a_{1},...,a_{n},b)\mapsto(a_{1},...,a_{i},e,a_{i+1},...,a_{n},b)\qquad 0\leq i\leq n.\]

**Example 7.10**.: 1. _A group action_ \(\blacktriangleright\colon B\times A\to A\) _by automorphisms on an abelian group_ \(A\) _yields a simplicial group with_ \(H_{n}=A^{\times n}\rtimes^{\prime}B\)_, where_ \(B\) _acts diagonally via_ \(\blacktriangleright\)_, and with the face maps and degeneracies (66) for_ \(\partial\equiv 1\)_._ 2. _Every injective group homomorphism_ \(\partial:A\to B\) _from an abelian group_ \(A\) _into the centre of a group_ \(B\) _yields a simplicial group, where_ \(H_{n}=A^{\times n}\times B\) _is the direct product, and the face maps and degeneracies are given by (66)._ 3. _Every abelian group_ \(A\) _is a simplicial group with_ \(H_{n}=A^{\times n}\)_, the group multiplication of_ \(A^{\times n}\)_, the face maps and degeneracies (66) for_ \(B=\{e\}\) _and_ \(\partial\equiv 1\)_._ 4. _Any normal subgroup_ \(A\subset B\) _determines a simplicial group with_ \(H_{n}=A^{\times n}\times B\) _and group multiplication (65), face maps and degeneracies (66), where_ \(\partial:A\to B\) _is the inclusion and_ \(\blacktriangleright\colon B\times A\to A\) _the conjugation action._

### (Co)invariants of (co)modules over group objects in \(\mathrm{Cat}\)

The coinvariants of a comodule \((\mathcal{M},\delta)\) over a group object \((H,m,\eta,I)\) in \(\mathrm{Cat}\) are given as the equaliser of \(\delta=(F\times 1_{\mathcal{M}})\circ\Delta:\mathcal{M}\to H\times\mathcal{M}\) and \(\eta\times 1_{\mathcal{M}}:\mathcal{M}\to H\times\mathcal{M}\). This is the subcategory on which \(\delta\) and \(\eta\times 1_{\mathcal{M}}\) coincide, together with its inclusion functor, see Lemma 7.5. In terms of the associated functor \(F:\mathcal{M}\to H\) from Example 2.9 we have

**Lemma 7.11**.: _Let \((\mathcal{M},\delta)\) be a comodule over a group object \((H,m,\eta,I)\) in \(\operatorname{Cat}\). Then the coinvariants are given by the subcategory \(\mathcal{M}^{coH}\subset\mathcal{M}\) with_ * \(\operatorname{Ob}(\mathcal{M}^{coH})=\{A\in\operatorname{Ob}(\mathcal{M})\mid F(A)=e\}\)_,_ * \(\operatorname{Hom}_{\mathcal{M}^{coH}}(A,A^{\prime})=\{f\in\operatorname{Hom}_{\mathcal{M}}(A,A^{\prime})\mid F(f)=1_{e}\}\)_,_ _and the inclusion functor \(\iota:\mathcal{M}^{coH}\to\mathcal{M}\)._

The invariants of a module \((\mathcal{M},\rhd)\) over a group object \(H\) in \(\operatorname{Cat}\) are the coequaliser of the functors \(\rhd,\pi_{2}:H\times\mathcal{M}\to\mathcal{M}\). They are computed with Lemma 7.6.

**Proposition 7.12**.: _Let \((\mathcal{M},\rhd)\) be a module over a group object \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) in \(\operatorname{Cat}\).
Then its invariants are the category \(\mathcal{M}^{H}\), whose_ * _objects are orbits of the_ \(B\)_-action on_ \(\operatorname{Ob}(\mathcal{M})\)_,_ * _morphisms are generated by orbits of the_ \(A\rtimes B\)_-action on_ \(\mathcal{M}^{(1)}\) _subject to the relations_ \([f_{2}]\circ[f_{1}]=[f_{2}\circ f_{1}]\) _for all_ \(A\rtimes B\)_-orbits_ \([f_{1}]\)_,_ \([f_{2}]\) _of composable morphisms_ \(f_{1},f_{2}\) _in_ \(\mathcal{M}\)_._ _We denote by \(\pi:\mathcal{M}\to\mathcal{M}^{H}\) the projection functor that sends each object of \(\mathcal{M}\) to its \(B\)-orbit and each morphism in \(\mathcal{M}\) to the equivalence class of its \(A\rtimes B\)-orbit._ Proof.: By Corollary 7.9, applying the nerve to the group object \(H\) in \(\operatorname{Cat}\) and to a module \((\mathcal{M},\rhd)\) over \(H\) yields a simplicial group \(N(H)\) and a module \(N(\mathcal{M})\) over \(N(H)\) in \(\operatorname{SSet}\). By Lemma 7.6 the coequaliser of the morphisms \(\rhd,\pi_{2}:H\times\mathcal{M}\to\mathcal{M}\) in \(\operatorname{Cat}\) is obtained by applying the homotopy functor to the coequaliser of \(\rhd^{\prime}=N(\rhd),\pi_{2}^{\prime}=N(\pi_{2}):N(H)\times N(\mathcal{M}) \to N(\mathcal{M})\) in \(\operatorname{SSet}\). As colimits in \(\operatorname{SSet}\) are computed objectwise, see for instance [R, Prop. 3.3.9], the coequaliser of \(\rhd^{\prime},\pi_{2}^{\prime}\) is the simplicial set \(N(\mathcal{M})^{H}\) with \(N(\mathcal{M})_{n}^{H}=N(\mathcal{M})_{n}/\sim_{n}\), where \(\sim_{n}\) is the equivalence relation on \(N(\mathcal{M})_{n}\) defined by the \(N(H)\)-action: \(m\sim_{n}m^{\prime}\) iff there is a \(g\in N(H)_{n}\) with \(m^{\prime}=g\rhd^{\prime}m\). The face maps and degeneracies of \(N(\mathcal{M})^{H}\) are induced by the ones of \(N(\mathcal{M})\). As \(N(H)_{0}=\operatorname{Ob}H=B\) and \(N(\mathcal{M})_{0}=\operatorname{Ob}\mathcal{M}\), the elements of \(N(\mathcal{M})_{0}^{H}\) are the orbits of the \(B\)-action on \(\operatorname{Ob}\mathcal{M}\). As \(N(H)_{1}=H^{(1)}=A\rtimes B\), the set \(N(\mathcal{M})_{1}^{H}\) contains the orbits of the \(A\rtimes B\)-action on \(\mathcal{M}^{(1)}\). Elements of \(N(\mathcal{M})_{2}\) and \(N(H)_{2}\) are pairs of composable morphisms in \(\mathcal{M}\) and \(H\). Thus, the set \(N(\mathcal{M})_{2}^{H}\) consists of equivalence classes of pairs \((f_{1},f_{2})\) of composable morphisms in \(\mathcal{M}\) with \((f_{1},f_{2})\sim(f_{1}^{\prime},f_{2}^{\prime})\) if there are \((a_{1},b_{1}),(a_{2},b_{2})\in A\rtimes B\) with \(\partial(a_{1})b_{1}=b_{2}\) such that \(f_{1}^{\prime}=(a_{1},b_{1})\rhd^{\prime}f_{1}\) and \(f_{2}^{\prime}=(a_{2},b_{2})\rhd^{\prime}f_{2}\). For any composable pair \((f_{1},f_{2})\in N(\mathcal{M})_{2}\), one has \(d_{0}(f_{1},f_{2})=f_{2}\), \(d_{1}(f_{1},f_{2})=f_{2}\circ f_{1}\) and \(d_{2}(f_{1},f_{2})=f_{1}\). This implies \(d_{0}[(f_{1},f_{2})]=[f_{2}]\), \(d_{1}[(f_{1},f_{2})]=[f_{2}\circ f_{1}]\) and \(d_{2}[(f_{1},f_{2})]=[f_{1}]\) for their equivalence classes in \(N(\mathcal{M})_{2}^{H}\) and \(N(\mathcal{M})_{1}^{H}\). Applying the homotopy functor from Definition 7.8 thus yields a category \(\mathcal{M}^{H}\) with objects \(\operatorname{Ob}\mathcal{M}^{H}=N(\mathcal{M})_{0}^{H}=\operatorname{Ob} \mathcal{M}/B\). 
Its generating morphisms are \(A\rtimes B\)-orbits of morphisms in \(\mathcal{M}\), and the second relation in (64) translates into the relation \([f_{2}]\circ[f_{1}]=[f_{2}\circ f_{1}]\) for the \(A\rtimes B\)-orbits of composable pairs \((f_{1},f_{2})\) of morphisms in \(\mathcal{M}\). We now restrict attention to Yetter-Drinfeld modules \((\mathcal{M},\rhd,\delta)\) over group objects \(H\) in \(\operatorname{Cat}\) and determine their binvariants. We denote again by \(F:\mathcal{M}\to H\) the functor defined by \(\delta\) from Example 2.9, by \(\iota:\mathcal{M}^{coH}\to\mathcal{M}\) the inclusion functor from Lemma 7.11 and by \(\pi:\mathcal{M}\to\mathcal{M}^{H}\) the projection functor from Proposition 7.12. **Proposition 7.13**.: _Let \((\mathcal{M},\rhd,F)\) be a Yetter-Drinfeld module over a group object \(H\) in \(\mathrm{Cat}\). Then \(\mathcal{M}_{inv}\) is given by_ \[\mathrm{Ob}\mathcal{M}_{inv}=\{\pi(M)\mid M\in\mathrm{Ob}\mathcal{M}\text{ with }F(M)=e\},\] \[\mathrm{Hom}_{\mathcal{M}_{inv}}(\pi(M_{1}),\pi(M_{2}))=\{\pi(f)\mid f\in \mathcal{M}^{(1)}\text{ with }\pi(s(f))=\pi(M_{1}),\pi(t(f))=\pi(M_{2}),F(f)=1_{e}\}.\] Proof.: 1. We verify that \(\mathcal{M}_{inv}\) is a category. If \(F(M)=e\) for an object \(M\) in \(\mathcal{M}\), then \(F(g\rhd M)=g\cdot F(M)\cdot g^{-1}=e\) for all objects \(g\) in \(H\) by the Yetter-Drinfeld module condition in Example 2.9. Likewise, if \(f\) is a morphism in \(\mathcal{M}\) with \(F(f)=1_{e}\), then \(F(g\rhd f)=g\cdot F(f)\cdot g^{-1}=1_{e}\) for all \(g\in H^{(1)}\). This shows that for every object \(M\) and morphism \(f\) of \(\mathcal{M}^{coH}\) the entire \(\mathrm{Ob}\,H\)-orbit of \(M\) and \(H^{(1)}\)-orbit of \(f\) is contained in \(\mathcal{M}^{coH}\). Any identity morphism on an object \(M\in\mathrm{Ob}\,\mathcal{M}^{coH}\) satisfies \(F(1_{M})=1_{e}\) and hence is contained in \(\mathcal{M}^{coH}\). If \((f_{1},f_{2})\) is a pair of composable morphisms in \(\mathcal{M}^{coH}\), then \(F(f_{2}\circ f_{1})=F(f_{2})\circ F(f_{1})=1_{e}\) and hence \(f_{2}\circ f_{1}\in\mathcal{M}^{coH}\) as well. Suppose now that \(f_{1}:M_{0}\to M_{1}\) and \(f_{2}:M_{1}^{\prime}\to M_{2}\) are morphisms in \(\mathcal{M}^{coH}\) such that \(\pi(f_{1})\) and \(\pi(f_{2})\) are composable in \(\mathcal{M}^{H}\). Then there is a \(g\in\mathrm{Ob}\,H\) with \(M_{1}^{\prime}=g\rhd M_{1}\), and the morphisms \(f_{1}\) and \(g^{-1}\rhd f_{2}\) are composable in \(\mathcal{M}^{coH}\). With the relations of \(\mathcal{M}_{inv}\) one obtains \(\pi(f_{2})\circ\pi(f_{1})=\pi(g^{-1}\rhd f_{2})\circ\pi(f_{1})=\pi((g^{-1} \rhd f_{2})\circ f_{1})\) with \((g^{-1}\rhd f_{2})\circ f_{1}\in\mathcal{M}^{coH}\). 2. We show that \(\mathcal{M}_{inv}\) has the universal property of the image in \(\mathrm{Cat}\). The inclusion functor \(I:\mathcal{M}_{inv}\to\mathcal{M}^{H}\) is a monomorphism in \(\mathrm{Cat}\) and satisfies \(IP=\pi_{t}\), where \(P:\mathcal{M}^{coH}\to\mathcal{M}_{inv}\) is the functor that sends an object \(M\) in \(\mathcal{M}^{coH}\) to \(\pi(M)\) and a morphism \(f\) in \(\mathcal{M}^{coH}\) to \(\pi(f)\). If \((J,Q)\) is a pair of a monomorphism \(J:\mathcal{C}\to\mathcal{M}^{H}\) and a functor \(Q:\mathcal{M}^{coH}\to\mathcal{C}\) with \(JQ=\pi_{t}\), then \(J\) is a monomorphism in \(\mathrm{Cat}\), which allows one to identify \(\mathcal{C}\) with a subcategory of \(\mathcal{M}^{H}\) and \(J\) with its inclusion functor. 
As \(JQ=\pi_{t}\), the subcategory \(\mathcal{C}\subset\mathcal{M}^{H}\) contains \(\mathcal{M}_{inv}\) as a subcategory \(\mathcal{M}_{inv}\subset\mathcal{C}\) and hence there is a unique functor, the inclusion \(V:\mathcal{M}_{inv}\to\mathcal{C}\), with \(I=JV\). **Remark 7.14**.: _Coequalisers in \(\mathrm{Cat}\) can also be determined via the construction of Bednarczyk, Borzyszkowski and Pawlowski [BBP], using generalised congruences and associated quotient categories. For a summary of this construction, see also Bruckner [Br] and Haucourt [Ha]. For a module \((\mathcal{M},\rhd)\) over a group object in \(\mathrm{Cat}\) the associated quotient category gives the invariants of \((\mathcal{M},\rhd)\) as in Proposition 7.12. For a triple \((\mathcal{M},\rhd,\delta)\) the generalised congruence \((\sim_{0},\sim_{m})\) restricts to a generalised congruence on \(\mathcal{M}^{coH}\) whose quotient category are the biinvariants of \((\mathcal{M},\rhd,\delta)\)._ ### Protected objects for group objects in \(\mathrm{Cat}\) We now give a concrete description of the coinvariants and the protected objects for oriented surfaces \(\Sigma\) of genus \(g\geq 1\) and group objects \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) in \(\mathrm{Cat}\). We start by considering the Yetter-Drinfeld module and the coinvariants for the standard graph from (15) and show that they are given by group homomorphisms \(\rho:F_{2g}\to A\rtimes B\) and \(\rho:\pi_{1}(\Sigma)\to A\rtimes B\), respectively. To describe their category structure, we consider group-valued \(1\)-cocycles. **Definition 7.15**.: _Let \(K,A\) be groups and \(\blacktriangleright\colon K\times A\to A\) a group action of \(K\) on \(A\) by automorphisms._ _1. A_ \(1\)_-cocycle is a map_ \(\phi:K\to A\) _with_ \(\phi(\lambda\mu)=\phi(\lambda)\cdot(\lambda\blacktriangleright\phi(\mu))\) _for all_ \(\lambda,\mu\in K\)_._ _2. A_ \(1\)_-coboundary is a map_ \(\eta_{a}:K\to A\)_,_ \(\lambda\mapsto a(\lambda\blacktriangleright a^{-1})\) _for some_ \(a\in A\)_._ _3. \(\phi,\psi:K\to A\) are **related by a coboundary** if \(\psi(\lambda)=a\cdot\phi(\lambda)\cdot(\lambda\blacktriangleright a^{-1})\) _for some_ \(a\in A\)_._ If \(A\) is abelian, \(1\)-cocycles form a group \(Z^{1}(K,A,\blacktriangleright)\) with pointwise multiplication and coboundaries a subgroup \(B^{1}(K,A,\blacktriangleright)\). The factor group is the first cohomology group \(H^{1}(K,A,\blacktriangleright)\). More generally, \(1\)-cocycles with values in a (not necessarily abelian) group \(A\) arise from group homomorphisms into a semidirect product \(A\rtimes B\). **Lemma 7.16**.: _Let \(\blacktriangleright\colon B\times A\to A\) a group action by automorphisms and \(A\rtimes B\) the associated semidirect product._ 1. _Group homomorphisms_ \(\sigma:K\to A\rtimes B\) _correspond to pairs_ \((\phi,\rho)\) _of a group homomorphism_ \(\rho:K\to B\) _and a_ \(1\)_-cocycle_ \(\phi:K\to A\) _for the action_ \(\rho^{*}\blacktriangleright\colon K\times A\to A\)_,_ \((\lambda,a)\mapsto\rho(\lambda)\blacktriangleright a\)_._ 2. 
_Two 1-cocycles_ \(\phi,\phi^{\prime}:K\to A\) _for_ \(\rho^{*}\blacktriangleright\) _are related by a coboundary iff the group homomorphisms_ \((\phi,\rho),(\phi^{\prime},\rho):K\to A\rtimes B\) _are related by conjugation with_ \(A\subset A\rtimes B\)_._ If the semidirect product in Lemma 7.16 arises from a crossed module \((B,A,\blacktriangleright,\partial)\), the group homomorphism \(\partial:A\to B\), allows one to organise the group homomorphisms \(\rho:K\to B\) and \(1\)-cocycles \(\phi:K\to A\) from Lemma 7.16 into a groupoid. Denoting by \(\phi\cdot\psi\) and \(\phi^{-1}\) the pointwise product and inverse of maps \(\phi,\psi:K\to A\) we have **Lemma 7.17**.: _Any group \(K\) and crossed module \((B,A,\blacktriangleright,\partial)\) defines a groupoid \(\operatorname{Hom}(K,B\blacktriangleright A)\) with_ * _group homomorphisms_ \(\rho:K\to B\) _as objects,_ * \(\operatorname{Hom}(\rho,\rho^{\prime})=\{(\phi,\rho)\mid\phi:K\to A\) _1-cocycle for_ \(\rho^{*}\blacktriangleright\) _with_ \((\partial\circ\phi)\cdot\rho=\rho^{\prime}\}\)_,_ * _composition of morphisms:_ \((\psi,(\partial\circ\phi)\cdot\rho)\circ(\phi,\rho)=(\psi\cdot\phi,\rho)\)_,_ * _inverse morphisms:_ \((\phi,\rho)^{-1}=(\phi^{-1},(\partial\circ\phi)\cdot\rho)\)_._ Proof.: A direct computation using (63) shows that for any pair \((\phi,\rho)\) of a group homomorphism \(\rho:K\to B\) and a \(1\)-cocycle \(\phi:K\to A\) for \(\rho^{*}\blacktriangleright\), the map \((\partial\circ\phi)\cdot\rho:K\to B\) is another group homomorphism. Similarly, if \(\phi\) is a \(1\)-cocycle for \(\rho^{*}\blacktriangleright\) and \(\psi\) a \(1\)-cocycle for \(((\partial\circ\phi)\cdot\rho)^{*}\blacktriangleright\), then \(\psi\cdot\phi\) is another \(1\)-cocycle for \(\rho^{*}\blacktriangleright\). The formula for the inverse morphism follows directly. By applying this lemma to Example 4.6, we obtain a groupoid that describes the Yetter-Drinfeld module for a group object \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) in \(\operatorname{Cat}\) and the standard graph (15), if we set \(K=F_{2g}\) and identify the generators of \(F_{2g}\) with the edges of the graph. An analogous result holds for the associated coinvariants for \(K=\pi_{1}(\Sigma)\) and any properly embedded graph with a single vertex. **Proposition 7.18**.: _Let \(\Gamma\) be a properly embedded graph with a single vertex on a surface \(\Sigma\) of genus \(g\geq 1\) and \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) a group object in \(\operatorname{Cat}\). Then the associated coinvariants are the groupoid from Lemma 7.17 for \(K=\pi_{1}(\Sigma)\)._ Proof.: By Theorem 5.21 it suffices to consider the graph in (15). By Example 4.6 the coinvariants are the equaliser of the morphisms \(\eta\,\epsilon,F:H^{\times 2g}\to H\) in \(\operatorname{Cat}\), where \(\epsilon:H^{\times 2g}\to\{\cdot\}\) is the terminal morphism, \(\eta:\{\cdot\}\to H\) is as in Definition 7.1 and \(F:H^{\times 2g}\to H\) is given by \(F(a_{1},b_{1},\dots,a_{g},b_{g})=[b_{g}^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]\). By Lemma 7.5, this equaliser is the subcategory \(\mathcal{E}\subset H^{\times 2g}\) consisting of objects \(C\) and morphisms \(f\) with \(F(C)=e\) and \(F(f)=1_{e}\). 
For \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\), this yields with Theorem 7.4 \[\operatorname{Ob}(\mathcal{E}) =\{(a_{1},b_{1},\dots,a_{g},b_{g})\in B^{\times 2g}\mid[b_{g}^{-1},a_ {g}]\cdots[b_{1}^{-1},a_{1}]=1\}\] \[\mathcal{E}^{(1)} =\{(a_{1},b_{1},\dots,a_{g},b_{g})\in(A\rtimes B)^{2g}\mid[b_{g} ^{-1},a_{g}]\cdots[b_{1}^{-1},a_{1}]=1\}.\] Thus, every object \(\rho\in\operatorname{Ob}(\mathcal{E})\) corresponds to a group homomorphism \(\rho:\pi_{1}(\Sigma)\to B\) and every morphism \(\sigma\in\mathcal{E}^{(1)}\) to a group homomorphism \(\sigma:\pi_{1}(\Sigma)\to A\rtimes B\). By Lemma 7.16 the latter defines a pair \(\sigma=(\phi,\rho)\) of a group homomorphism \(\rho:\pi_{1}(\Sigma)\to B\) and a \(1\)-cocycle \(\phi\) for \(\rho^{*}\blacktriangleright\). We now use the description of the coinvariants in Proposition 7.18 and the description of the image object in \(\operatorname{Cat}\) from Proposition 7.13 to compute the protected object for a surface \(\Sigma\) of genus \(g\geq 1\) and a crossed module \((B,A,\blacktriangleright,\partial)\). **Theorem 7.19**.: _The protected object for a group object \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) in \(\mathrm{Cat}\) and a surface \(\Sigma\) of genus \(g\geq 1\) is a groupoid \(\mathcal{M}_{H,\Sigma}\) with_ * _conjugacy classes of group homomorphisms_ \(\rho:\pi_{1}(\Sigma)\to B\) _as objects,_ * _equivalence classes of group homomorphisms_ \(\tau=(\phi,\rho):\pi_{1}(\Sigma)\to A\rtimes B\) _as morphisms from_ \([\rho]\) _to_ \([(\partial\circ\phi)\cdot\rho]\)_._ _The equivalence relation is given by \(\tau_{2}\circ\tau_{1}\sim\tau_{2}^{\prime}\circ\tau_{1}^{\prime}\) for all composable pairs \((\tau_{1},\tau_{2})\) and \((\tau_{1}^{\prime},\tau_{2}^{\prime})\) of group homomorphisms \(\tau_{i},\tau_{i}^{\prime}:F_{2g}\to A\rtimes B\) such that \(\tau_{i},\tau_{i}^{\prime}\) are conjugate and \(\tau_{2}\circ\tau_{1},\tau_{2}^{\prime}\circ\tau_{1}^{\prime}\) define group homomorphisms \(\pi_{1}(\Sigma)\to A\rtimes B\)._ Proof.: By Theorem 5.21 the protected object of \(\Sigma\) is a topological invariant and can be computed from the standard graph in (15). This yields a Yetter-Drinfeld module \((\mathcal{M},\rhd,\delta)\) over \(\bigtriangledown(B,A,\blacktriangleright,\partial)\) given by formula (22). Hence, we have \(\mathcal{M}^{(1)}=(A\rtimes B)^{2g}\cong\mathrm{Hom}(F_{2g},A\rtimes B)\) with the module structure given by conjugation and the comodule structure by the defining relation of \(\pi_{1}(\Sigma)\). By Proposition 7.18 the associated coinvariants form a groupoid \(\mathcal{M}^{coH}\) with group homomorphisms \(\rho:\pi_{1}(\Sigma)\to B\) as objects and group homomorphisms \(\tau=(\phi,\rho):\pi_{1}(\Sigma)\to A\rtimes B\) as morphisms from \(\rho\) to \((\partial\circ\phi)\cdot\rho\). By Propositions 7.12 and 7.13 the associated image object is the groupoid, whose objects are orbits of group homomorphisms \(\rho:\pi_{1}(\Sigma)\to B\) under the conjugation action of \(B\) and whose morphisms are the images of group homomorphisms \(\tau:\pi_{1}(\Sigma)\to A\rtimes B\) under the projection functor \(\pi:\mathcal{M}\to\mathcal{M}^{H}\). The latter is given by the equivalence relation in the theorem. There are a number of cases in which the protected object has a particularly simple form. They correspond to crossed modules in which part of the data is trivial. 
The first corresponds to the case, where the Moore complex of the crossed module has trivial non-abelian homologies, namely \(\ker(\partial)=\{1\}\) and \(B/\partial(A)=1\). The second is the case where the action of \(B\) on \(A\) is trivial. **Example 7.20**.: _Let \(\Sigma\) be a surface of genus \(g\geq 1\) and \((B,A,\blacktriangleright,\partial)\) a crossed module, where \(\partial\) is an isomorphism. Then the protected object has_ * _conjugacy classes of group homomorphisms_ \(\rho:\pi_{1}(\Sigma)\to B\) _as objects,_ * _exactly one morphism between any two objects._ Proof.: All morphism sets in the groupoid from Lemma 7.17 contain exactly one morphism, since \(\mathrm{Hom}(\rho,\sigma)=\{(\partial^{-1}(\sigma\cdot\rho^{-1}),\rho)\}\) for all group homomorphisms \(\rho,\sigma:\pi_{1}(\Sigma)\to B\). Conjugating a morphism in \(\mathrm{Hom}(\rho,\sigma)\) with an element of \((a,b)\in A\rtimes B\) yields the unique morphism from \(b\rho b^{-1}\) to \((\partial(a)b)\,\sigma\,(\partial(a)b)^{-1}\). This shows that all morphisms from conjugates of a group homomorphism \(\rho:\pi_{1}(\Sigma)\to B\) to a conjugate of a group homomorphism \(\sigma:\pi_{1}(\Sigma)\to B\) are conjugated and hence identified in \(\mathcal{M}^{H}\) and in \(\mathcal{M}_{inv}\). **Example 7.21**.: _Let \(\Sigma\) be a surface of genus \(g\geq 1\) and \((B,A,\blacktriangleright,\partial)\) a crossed module with a trivial group action \(\blacktriangleright\). Then the protected object is \(\mathrm{Hom}(\pi_{1}(\Sigma),A\times B)/A\times B\) with_ * _conjugacy classes of group homomorphisms_ \(\rho:\pi_{1}(\Sigma)\to B\) _as objects,_ * _group homomorphisms_ \(\phi:\pi_{1}(\Sigma)\to A\) _as morphisms from_ \([\rho]\) _to_ \([(\partial\circ\phi)\cdot\rho]\)_._ Proof.: If \(\blacktriangleright\colon B\times A\to A\) is trivial, then conditions (63) imply that \(A\) is abelian with \(\partial(A)\subset Z(B)\). As \(A\) is abelian and \(\blacktriangleright\) trivial, the \(1\)-cocycles from Definition 7.15 are simply group homomorphisms \(\phi:\pi_{1}(\Sigma)\to A\) and any \(1\)-coboundary is trivial. The groupoid \(\mathcal{M}^{coH}\) from Lemma 7.17 thus has as objects group homomorphisms \(\rho:\pi_{1}(\Sigma)\to B\) and as morphisms \(\tau=(\phi,\rho):\rho\to(\partial\circ\phi)\cdot\rho\) group homomorphisms \(\tau=(\phi,\rho):\pi_{1}(\Sigma)\to A\times B\). As \(A\) is abelian and \(\blacktriangleright\) trivial, two group homomorphisms \(\tau=(\phi,\rho),\tau^{\prime}=(\phi^{\prime},\rho^{\prime}):\pi_{1}(\Sigma) \to A\rtimes B\) are conjugate iff \(\phi^{\prime}=\phi\) and \(\rho^{\prime}=b\rho b^{-1}\) for some \(b\in B\). Thus, the relation on morphisms in Theorem 7.19 identifies \(\tau\) and \(\tau^{\prime}\) iff \(\phi=\phi^{\prime}\) and \([\rho]=[\rho^{\prime}]\). In the case of a trivial group homomorphism \(\partial:A\to B\) all morphisms in \(\mathcal{M}\), \(\mathcal{M}^{\mathrm{co}H}\), \(\mathcal{M}^{H}\) and \(\mathcal{M}_{inv}\) are automorphisms. This yields **Example 7.22**.: _Let \(\Sigma\) be a surface of genus \(g\geq 1\) and \(H=\bigtriangledown(B,A,\blacktriangleright,\partial)\) with \(A\) abelian and a trivial group homomorphism \(\partial\equiv 1\). 
Then the associated protected object is_ \[\mathcal{M}_{H,\Sigma}=\amalg_{[\rho]\in\mathrm{Hom}(\pi_{1}(\Sigma),B)/B}G_{ [\rho]},\] _where \(G_{[\rho]}\) is a factor group of \(H^{1}(\pi_{1}(\Sigma),A,\rho^{*}\blacktriangleright)\)._ Proof.: If \(\partial\) is trivial and \(A\) abelian, then every 1-cocycle \(\phi:\pi_{1}(\Sigma)\to A\) for \(\rho^{*}\blacktriangleright\) defines an automorphism of \(\rho\) in \(\mathcal{M}^{\mathrm{co}H}\), which implies \(\mathcal{M}^{\mathrm{co}H}=\amalg_{\rho\in\mathrm{Hom}(\pi_{1}(\Sigma),B)}Z^ {1}(\pi_{1}(\Sigma),A,\rho^{*}\blacktriangleright)\). As all morphisms in \(\mathcal{M}^{\mathrm{co}H}\) are automorphisms, two morphisms given by group homomorphisms \(\tau=(\phi,\rho):\pi_{1}(\Sigma)\to A\rtimes B\) and \(\tau^{\prime}=(\phi^{\prime},\rho^{\prime}):\pi_{1}(\Sigma)\to A\rtimes B\) are composable iff \(\rho=\rho^{\prime}\). By Lemma 7.16, 2. two group homomorphisms \((\phi,\rho):\pi_{1}(\Sigma)\to A\rtimes B\) and \((\phi^{\prime},\rho):\pi_{1}(\Sigma)\to A\rtimes B\) are related by conjugation with \(A\subset A\rtimes B\) iff \(\phi,\phi^{\prime}\) are related by a 1-coboundary. Thus for a group homomorphism \(\rho:\pi_{1}(\Sigma)\to B\), the automorphism group of \([\rho]\) in \(\mathcal{M}_{inv}\) is a factor group of \(H^{1}(\pi_{1}(\Sigma),A,\rho^{*}\blacktriangleright)\). By Theorem 7.19 group homomorphisms \(\tau,\tau^{\prime}:\pi_{1}(\Sigma)\to A\rtimes B\) that are conjugated define the same morphism in \(\mathcal{M}_{inv}\). This implies in particular that the morphism in \(\mathcal{M}_{H,\Sigma}\) defined by a group homomorphism \(\sigma=(\phi,\rho):\pi_{1}(\Sigma)\to A\rtimes B\) depends on \(\phi\) only up to coboundaries. Modifying \(\phi\) with a coboundary yields a group homomorphism \(\sigma^{\prime}=(\phi^{\prime},\rho)\) conjugated to \(\sigma\) by Lemma 7.16, 2. However, except for the situation in Examples 7.20 and 7.21, it is difficult to describe the category \(\mathcal{M}_{H,\Sigma}\) explicitly, even for genus \(g=1\) and crossed modules given by normal subgroups. This is due to the fact that the equivalence relation in Theorem 7.19 also identifies morphisms in \(\mathcal{M}^{\mathrm{co}H}\) in different \(A\rtimes B\)-orbits. This is illustrated by the following two examples. **Example 7.23**.: _Let \(\Sigma\) be the torus with \(\pi_{1}(\Sigma)=\mathbb{Z}\times\mathbb{Z}\) and consider the crossed module \((S_{3},A_{3},\blacktriangleright,\iota)\), where \(\iota:A_{3}\to S_{3}\) is the inclusion and \(\blacktriangleright\colon S_{3}\times A_{3}\to A_{3}\), \(b\blacktriangleright a=bab^{-1}\) the conjugation action._ _We specify group homomorphisms \(\rho:\mathbb{Z}\times\mathbb{Z}\to S_{3}\) and 1-cocycles \(\phi:\mathbb{Z}\times\mathbb{Z}\to A_{3}\) by the images of \((1,0)\) and \((0,1)\) and write \(\rho=(\rho(1,0),\rho(0,1))\) for the former and \(\phi=\langle\phi(1,0),\phi(0,1)\rangle\) for the latter. 
Then the conjugacy classes of group homomorphisms \(\rho:\mathbb{Z}\times\mathbb{Z}\to S_{3}\) are given by_ \begin{tabular}{|l|l|} \hline \(C_{1}=\{(\mathrm{id},\mathrm{id})\}\) & \\ \hline \(C_{2}=\{(\mathrm{id},c)\mid c\in A_{3}\setminus\{\mathrm{id}\}\}\) & \(C_{2}^{\prime}=\{(c,\mathrm{id})\mid c\in A_{3}\setminus\{\mathrm{id}\}\}\) \\ \hline \(C_{3}=\{(c,c)\mid c\in A_{3}\setminus\{\mathrm{id}\}\}\) & \\ \hline \(C_{4}=\{(c,c^{\prime})\mid c\neq c^{\prime}\in A_{3}\setminus\{\mathrm{id}\}\}\) & \\ \hline \(C_{5}=\{(\mathrm{id},\sigma)\mid\sigma\in S_{3}\setminus A_{3}\}\) & \(C_{5}^{\prime}=\{(\sigma,\mathrm{id})\mid\sigma\in S_{3}\setminus A_{3}\}\) \\ \hline \(C_{6}=\{(\sigma,\sigma)\mid\sigma\in S_{3}\setminus A_{3}\}\) & \\ \hline \end{tabular} _If \(\rho(\mathbb{Z}\times\mathbb{Z})\subset A_{3}\), then 1-cocycles for \(\rho^{*}\blacktriangleright\) are simply group homomorphisms \(\phi:\mathbb{Z}\times\mathbb{Z}\to A_{3}\). If \(\rho\in C_{5}\), \(\rho\in C_{5}^{\prime}\) or \(\rho\in C_{6}\) then the 1-cocycles for \(\rho^{*}\blacktriangleright\) are given by_ \[\langle\mathrm{id},c\rangle(k,l)=\begin{cases}c&l\ odd\\ \mathrm{id}&l\ even\end{cases},\ \langle c,\mathrm{id}\rangle(k,l)=\begin{cases}c&k\ odd\\ \mathrm{id}&k\ even\end{cases},\ \langle c,c\rangle(k,l)=\begin{cases}c&k+l\ odd\\ \mathrm{id}&k+l\ even\end{cases},\] _The protected object \(\mathcal{M}_{inv}\) has the objects \(C_{1}\), \(C_{2}\), \(C_{2}^{\prime}\), \(C_{3}\), \(C_{4}\), \(C_{5}\), \(C_{5}^{\prime}\), \(C_{6}\). Its morphisms are equivalence classes of morphisms in \(\mathcal{M}^{coH}\), that is, of 1-cocycles. The morphisms in \(\mathcal{M}^{coH}\) starting in \(\rho\in C_{1}\) are the trivial 1-cocycle \(\phi\equiv\mathrm{id}\) as identity morphism and the conjugate pairs_ \[\langle\mathrm{id},(123)\rangle:(\mathrm{id},\mathrm{id})\to( \mathrm{id},(123)) \sim \langle\mathrm{id},(132)\rangle:(\mathrm{id},\mathrm{id})\to( \mathrm{id},(132))\] \[\langle(123),\mathrm{id}\rangle:(\mathrm{id},\mathrm{id})\to((12 3),\mathrm{id}) \sim \langle(132),\mathrm{id}\rangle:(\mathrm{id},\mathrm{id})\to((132 ),\mathrm{id})\] \[\langle(123),(123)\rangle:(\mathrm{id},\mathrm{id})\to((123),(12 3)) \sim \langle(132),(132)\rangle:(\mathrm{id},\mathrm{id})\to((132),(132 ))\] \[\langle(123),(132)\rangle:(\mathrm{id},\mathrm{id})\to((123),(132 )) \sim \langle(132),(123)\rangle:(\mathrm{id},\mathrm{id})\to((132),(123 )),\] _where we use cycle notation for elements of \(S_{3}\). As each of these pairs defines a single morphism in \(\mathcal{M}_{inv}\), there is exactly one morphism from \(C_{1}\) to each of the conjugacy classes \(C_{2}\), \(C_{2}^{\prime}\), \(C_{3}\), \(C_{4}\). As \(\mathcal{M}_{inv}\) is a groupoid, there is exactly one morphism between any two of these conjugacy classes. Each of the 1-cocycles \(\langle\mathrm{id},c\rangle\), \(\langle c,\mathrm{id}\rangle\), \(\langle c,c\rangle\) with \(c\in A_{3}\) defines a morphism in \(\mathcal{M}^{coH}\) within the conjugacy classes \(C_{5}\), \(C_{5}^{\prime}\), \(C_{6}\). 
The morphisms between objects in \(C_{5}\) in \(\mathcal{M}^{coH}\) are_ \[\langle\mathrm{id},\mathrm{id}\rangle:(\mathrm{id},(12))\to(\mathrm{id},(12))\qquad\langle\mathrm{id},\mathrm{id}\rangle:(\mathrm{id},(13))\to(\mathrm{id},(13))\qquad\langle\mathrm{id},\mathrm{id}\rangle:(\mathrm{id},(23))\to(\mathrm{id},(23))\] \[\langle\mathrm{id},(123)\rangle:(\mathrm{id},(12))\to(\mathrm{id},(13))\qquad\langle\mathrm{id},(123)\rangle:(\mathrm{id},(13))\to(\mathrm{id},(23))\qquad\langle\mathrm{id},(123)\rangle:(\mathrm{id},(23))\to(\mathrm{id},(12))\] \[\langle\mathrm{id},(132)\rangle:(\mathrm{id},(12))\to(\mathrm{id},(23))\qquad\langle\mathrm{id},(132)\rangle:(\mathrm{id},(13))\to(\mathrm{id},(12))\qquad\langle\mathrm{id},(132)\rangle:(\mathrm{id},(23))\to(\mathrm{id},(13)).\] _All morphisms in the first line are conjugate. The first morphism in the second line is conjugate to the morphisms in the second and third line via cyclic permutations and transpositions. As_ \[\langle\mathrm{id},\mathrm{id}\rangle=\langle\mathrm{id},(123)\rangle\circ\langle\mathrm{id},(132)\rangle:(\mathrm{id},(12))\to(\mathrm{id},(12))\] \[\langle\mathrm{id},(123)\rangle=\langle\mathrm{id},(132)\rangle\circ\langle\mathrm{id},(132)\rangle:(\mathrm{id},(12))\to(\mathrm{id},(13))\] _with \(\langle\mathrm{id},(132)\rangle\sim\langle\mathrm{id},(123)\rangle\), all morphisms are identified by the relation in Theorem 7.19 and define a single morphism in \(\mathcal{M}_{inv}\). Hence, the identity morphism is the only automorphism of \(C_{5}\) in \(\mathcal{M}_{inv}\) and likewise for \(C_{5}^{\prime}\) and \(C_{6}\). Thus \(\mathcal{M}_{inv}\) is the groupoid in Figure 3._

Figure 3: The groupoid \(\mathcal{M}_{inv}\) from Example 7.23

**Example 7.24**.: _Let \(\Sigma\) be the torus and consider the crossed module \((S_{3},A_{3},\blacktriangleright,\partial)\) with the trivial group homomorphism \(\partial:A_{3}\to S_{3}\), \(a\mapsto\mathrm{id}\) and \(\blacktriangleright\colon S_{3}\times A_{3}\to A_{3}\), \(b\blacktriangleright a=bab^{-1}\)._ _Then the protected object \(\mathcal{M}_{inv}\) has the same objects as in Example 7.23. As \(\partial\) is trivial, all morphisms in \(\mathcal{M}^{coH}\) and \(\mathcal{M}_{inv}\) are automorphisms. The object \((\mathrm{id},\mathrm{id})\) in \(\mathcal{M}^{coH}\) has the identity morphism \(\langle\mathrm{id},\mathrm{id}\rangle\) and the following four conjugate pairs of automorphisms_ \[\langle\mathrm{id},(123)\rangle\sim\langle\mathrm{id},(132)\rangle,\qquad\langle(123),\mathrm{id}\rangle\sim\langle(132),\mathrm{id}\rangle,\qquad\langle(123),(123)\rangle\sim\langle(132),(132)\rangle,\qquad\langle(123),(132)\rangle\sim\langle(132),(123)\rangle.\] _The relation for morphisms in Theorem 7.19 then implies_ \[\langle\mathrm{id},\mathrm{id}\rangle=\langle\mathrm{id},(123)\rangle\circ\langle\mathrm{id},(132)\rangle\sim\langle\mathrm{id},(132)\rangle\circ\langle\mathrm{id},(132)\rangle=\langle\mathrm{id},(123)\rangle\] \[\langle\mathrm{id},\mathrm{id}\rangle=\langle(123),\mathrm{id}\rangle\circ\langle(132),\mathrm{id}\rangle\sim\langle(132),\mathrm{id}\rangle\circ\langle(132),\mathrm{id}\rangle=\langle(123),\mathrm{id}\rangle\] \[\langle\mathrm{id},\mathrm{id}\rangle=\langle(123),(123)\rangle\circ\langle(132),(132)\rangle\sim\langle(132),(132)\rangle\circ\langle(132),(132)\rangle=\langle(123),(123)\rangle\] \[\langle\mathrm{id},\mathrm{id}\rangle=\langle(123),(132)\rangle\circ\langle(132),(123)\rangle\sim\langle(132),(123)\rangle\circ\langle(132),(123)\rangle=\langle(123),(132)\rangle.\] _As all automorphisms of \((\mathrm{id},\mathrm{id})\) in \(\mathcal{M}^{coH}\) are identified, \(C_{1}\) has a single automorphism in \(\mathcal{M}_{inv}\).
As in Example 7.23, all morphisms between objects in \(C_{5}\) are identified and likewise for \(C_{5}^{\prime}\), \(C_{6}\)._ _In contrast, the automorphism group of each element of \(C_{2}\), \(C_{2}^{\prime}\), \(C_{3}\), \(C_{4}\) in \(\mathcal{M}\) and \(\mathcal{M}^{coH}\) is \(A_{3}\times A_{3}\). Automorphisms of these objects in \(\mathcal{M}\) coincide with their automorphisms in \(\mathcal{M}^{coH}\). Each automorphism of an object in one of these conjugacy classes is conjugate only to itself and to automorphisms of different objects in the same conjugacy class. As any composable sequence of morphisms in \(\mathcal{M}\) involves only automorphisms of the same object, the automorphism groups of these conjugacy classes in \(\mathcal{M}_{inv}\) are given by \(A_{3}\times A_{3}\). Thus, the groupoid \(\mathcal{M}_{inv}\) is as in Figure 4._

Figure 4: The groupoid \(\mathcal{M}_{inv}\) from Example 7.24

## 8 Mapping class group actions

In this section, we describe the mapping class group actions on the protected objects for connected closed surfaces. In the following \(\Sigma\) is a surface of genus \(g\geq 1\) and \(\Sigma\setminus D\) the associated surface with a disc removed and fundamental group \(\pi_{1}(\Sigma\setminus D)=F_{2g}\). The mapping class group of \(\Sigma\) is the quotient of the group \(\mathrm{Homeo}_{+}(\Sigma)\) of orientation preserving homeomorphisms of \(\Sigma\) by the normal subgroup \(\mathrm{Homeo}_{0}(\Sigma)\) of homeomorphisms homotopic to the identity. It is isomorphic to the group of outer automorphisms of the fundamental group \(\pi_{1}(\Sigma)\) \[\mathrm{Map}(\Sigma)=\mathrm{Homeo}_{+}(\Sigma)/\mathrm{Homeo}_{0}(\Sigma)\cong\mathrm{Out}(\pi_{1}(\Sigma))=\mathrm{Aut}(\pi_{1}(\Sigma))/\mathrm{Inn}(\pi_{1}(\Sigma)).\] The mapping class group of \(\Sigma\setminus D\) is defined analogously with the additional condition that all homeomorphisms fix the boundary of \(D\) pointwise, see Farb and Margalit [FM, Sec. 2.1]. The mapping class groups \(\mathrm{Map}(\Sigma)\) and \(\mathrm{Map}(\Sigma\setminus D)\) can be presented with the same generators but with additional relations for \(\mathrm{Map}(\Sigma)\), see for instance the presentation by Gervais [Ge]. Mapping class group actions associated with protected objects for involutive and, more generally, pivotal Hopf monoids in a finitely complete and cocomplete symmetric monoidal category are constructed in [MV]. It is shown in [MV, Th. 9.2] that the mapping class group \(\operatorname{Map}(\Sigma\setminus D)\) acts on the Yetter-Drinfeld module in Example 4.6 by automorphisms. By [MV, Th. 9.5] this induces an action of \(\operatorname{Map}(\Sigma)\) by automorphisms of its biinvariants. The mapping class group actions in [MV] are obtained from a concrete presentation of the mapping class groups \(\operatorname{Map}(\Sigma)\) and \(\operatorname{Map}(\Sigma\setminus D)\) in terms of generating Dehn twists and relations. They associate to each generating Dehn twist a finite sequence of edge slides and prove that the resulting automorphisms of \(H^{\otimes E}\) from Definition 5.5 satisfy the relations for \(\operatorname{Map}(\Sigma\setminus D)\) in [Ge]. The induced automorphisms of the protected object then satisfy the additional relations of \(\operatorname{Map}(\Sigma)\) in [Ge]. As we established in Theorem 5.21 that the protected object is independent of the choice of the underlying graph, we can reformulate [MV, Th. 9.5] as follows.
**Theorem 8.1**.: _Let \(H\) be an involutive Hopf monoid in \(\mathcal{C}\) and \(\Sigma\) an oriented surface of genus \(g\geq 1\). Then the edge slides from Definition 5.5 induce an action of the mapping class group \(\operatorname{Map}(\Sigma)\) by automorphisms of the protected object._ For group objects in cartesian monoidal categories such as simplicial groups and crossed modules this mapping class group action admits a concrete description in terms of mapping class group actions on representation varieties. For this, recall that for any group \(G\) the group \(\operatorname{Aut}(\pi_{1}(\Sigma))\) acts on the set of group homomorphisms \(\rho:\pi_{1}(\Sigma)\to G\) via \((\phi\rhd\rho)(\lambda)=\rho(\phi^{-1}(\lambda))\) for all \(\lambda\in\pi_{1}(\Sigma)\) and \(\phi\in\operatorname{Aut}(\pi_{1}(\Sigma))\). This induces an action of \(\operatorname{Map}(\Sigma)=\operatorname{Out}(\pi_{1}(\Sigma))=\operatorname{ Aut}(\pi_{1}(\Sigma))/\text{Inn}(\pi_{1}(\Sigma))\) on the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),G)/G\). To relate this to the mapping class group actions from [MV, Th. 9.5] note that for a group object \(H\) in a cartesian monoidal category the formulas for the edge slides in Definition 5.5 and Example 5.6 reduce to left and right multiplication with \(H\), sometimes composed with inversions. It follows that any finite sequence of edge slides from the standard graph to itself induces an automorphism of \(H^{\otimes 2g}\) that arises from an automorphism of \(F_{2g}=\pi_{1}(\Sigma\setminus D)\). As it preserves the Yetter-Drinfeld module structure in Example 4.6, it induces automorphisms of \(\mathcal{M}^{coH}\), \(\mathcal{M}^{H}\) and \(\mathcal{M}_{inv}\). Inner automorphism of \(\pi_{1}(\Sigma)\) induce trivial automorphisms of \(\mathcal{M}_{inv}\). For a group \(H\) as a group object in Set it is then directly apparent that the induced action of \(\operatorname{Map}(\Sigma)\) on \(\mathcal{M}_{inv}\) is the one on the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),H)/H\), see also Examples 9.6 and 9.7 in [MV]. This result can be applied to determine the mapping class group action for a simplicial group. **Corollary 8.2**.: _Let \(H=(H_{n})_{n\in\mathbb{N}_{0}}\) be a simplicial group as a Hopf monoid in \(\operatorname{SSet}\). Then the action of \(\operatorname{Map}(\Sigma)\) on the representation varieties \(\operatorname{Hom}(\pi_{1}(\Sigma),H_{n})/H_{n}\) induces an action of \(\operatorname{Map}(\Sigma)\) on \(\mathcal{M}_{inv}\) by simplicial maps, and this coincides with the action in [MV, Th. 9.5]._ Proof.: The induced \(\operatorname{Map}(\Sigma)\)-action on \(\mathcal{M}_{inv}\) is by simplicial maps, because the face maps and degeneracies of \(\mathcal{M}_{inv}\) act elements of the representation varieties \(\operatorname{Hom}(\pi_{1}(\Sigma),H_{n})/H_{n}\) by post-composition with the face maps and degeneracies \(d_{i}:H_{n}\to H_{n-1}\) and \(s_{i}:H_{n}\to H_{n+1}\), whereas \(\operatorname{Map}(\Sigma)\) acts by pre-composition. This coincides with the action from [MV, Th. 9.5], because the latter reduces to the \(\operatorname{Map}(\Sigma)\)-action on \(\operatorname{Hom}(\pi_{1}(\Sigma),H_{n})/H_{n}\) for the group \(H_{n}\) as an involutive Hopf monoid in Set, and all (co)limits, images and (co)actions in \(\operatorname{SSet}\) are degreewise. 
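For a finite group as a group object in Set, this action on the representation variety can be computed directly on small examples. The following Python sketch computes \(\operatorname{Hom}(\mathbb{Z}\times\mathbb{Z},S_{3})/S_{3}\) for the torus and the orbits of the action generated by the Dehn twists along the \(a\)- and \(b\)-cycles; it recovers the eight conjugacy classes of Example 7.23 and the three mapping class group orbits described in Example 8.4 below. The permutation encoding, the twist formulas on pairs \((\rho(a),\rho(b))\) and all names in the snippet are illustrative choices.

```python
from itertools import product

# Elements of S_3 as permutation tuples: p[i] is the image of i.
S3 = [(0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1), (1, 2, 0), (2, 0, 1)]

def mult(p, q):
    """Product p*q in S_3: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def conj_class(pair):
    """Class of (x, y) under simultaneous conjugation by S_3."""
    x, y = pair
    return frozenset((mult(mult(g, x), inv(g)), mult(mult(g, y), inv(g)))
                     for g in S3)

# Hom(Z x Z, S_3) = commuting pairs (rho(a), rho(b));
# Hom(Z x Z, S_3)/S_3 = their conjugacy classes.
commuting = [(x, y) for x, y in product(S3, S3) if mult(x, y) == mult(y, x)]
classes = {conj_class(p) for p in commuting}
print(len(classes))  # 8: the classes C_1, C_2, C_2', C_3, C_4, C_5, C_5', C_6

# The Dehn twists along the a- and b-cycle act on (x, y) = (rho(a), rho(b))
# by (x, y) -> (x, y * x^{-1}) and (x, y) -> (x * y, y); the orbits are the
# same if the inverse twists are used instead.
def twist_a(pair):
    x, y = pair
    return (x, mult(y, inv(x)))

def twist_b(pair):
    x, y = pair
    return (mult(x, y), y)

# Orbits of the induced mapping class group action on Hom(Z x Z, S_3)/S_3.
orbits, remaining = [], set(classes)
while remaining:
    seed = next(iter(remaining))
    orbit, frontier = {seed}, [seed]
    while frontier:
        cls = frontier.pop()
        for pair in cls:
            for moved in (twist_a(pair), twist_b(pair)):
                c = conj_class(moved)
                if c not in orbit:
                    orbit.add(c)
                    frontier.append(c)
    orbits.append(orbit)
    remaining -= orbit
print(sorted(len(o) for o in orbits))  # [1, 3, 4]: the orbits of Example 8.4
```

The twist moves are equivariant under simultaneous conjugation, so the orbit search over conjugacy classes is well defined; the same computation applies to any finite group in place of \(S_{3}\).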
In the case of a crossed module as a Hopf monoid in Cat, the mapping class group action on the protected object is induced by the mapping class group action on the representation variety for the associated semidirect product group. **Corollary 8.3**.: _Let \(H=(B,A,\blacktriangleright,\partial)\) be a crossed module. Then the \(\operatorname{Map}(\Sigma)\)-action on \(\mathcal{M}_{inv}\) from Theorem 8.1 is induced by the \(\operatorname{Map}(\Sigma)\)-action on \(\operatorname{Hom}(\pi_{1}(\Sigma),A\rtimes B)/A\rtimes B\)._ Proof.: As the group structure of \(H\) as a group object in \(\operatorname{Cat}\) is the one of the semidirect product \(A\rtimes B\), the \(\operatorname{Map}(\Sigma\setminus D)\)-action on \(\mathcal{M}=H^{\times 2g}\) for the standard graph (15) can be identified with the \(\operatorname{Map}(\Sigma\setminus D)\)-action on \(\mathcal{M}=(A\rtimes B)^{\times 2g}\) one for the group \(A\rtimes B\) as a group object in \(\operatorname{Set}\). The crossed module structure ensures that this \(\operatorname{Map}(\Sigma\setminus D)\)-action respects the category structure of \((A\rtimes B)^{\times 2g}\) and defines a \(\operatorname{Map}(\Sigma\setminus D)\)-action by invertible endofunctors. The \(\operatorname{Map}(\Sigma\setminus D)\)-action on \(\mathcal{M}\) induces the \(\operatorname{Map}(\Sigma)\)-action on the protected object \(\mathcal{M}_{inv}\) for both, the group \(A\rtimes B\) as a group object in \(\operatorname{Set}\) and for \(H\) as a group object in \(\operatorname{Cat}\). The former is the action on the representation variety \(\operatorname{Hom}(\pi_{1}(\Sigma),A\rtimes B)/A\rtimes B\). As the protected object \(\mathcal{M}_{inv}\) is a quotient of this representation variety by Theorem 7.19, its \(\operatorname{Map}(\Sigma)\)-action is induced by the \(\operatorname{Map}(\Sigma)\)-action on the representation variety. **Example 8.4**.: _We consider the mapping class group action on the groupoids \(\mathcal{M}_{inv}\) from Example 7.23, 7.24 for the crossed module \((S_{3},A_{3},\operatorname{\blacktriangleright},\partial)\) and the torus._ _The mapping class group of the torus \(T\) is the group_ \[\operatorname{Map}(T)=\operatorname{SL}(2,\mathbb{Z})=\langle D_{a},D_{b}\mid D _{a}D_{b}D_{a}=D_{b}D_{a}D_{b},(D_{a}D_{b}D_{a})^{4}=1\rangle. \tag{67}\] _It is generated by the Dehn twists \(D_{a},D_{b}\) along the \(a\)- and \(b\)-cycle, which act on \(\pi_{1}(T)=\mathbb{Z}\times\mathbb{Z}\) by_ \[D_{a}:a\mapsto a,b\mapsto b-a D_{b}:a\mapsto a+b,b\mapsto b. \tag{68}\] _In both, Example 7.23 and 7.24, the \(\operatorname{SL}(2,\mathbb{Z})\)-action on the objects of \(\mathcal{M}_{inv}\) is the \(\operatorname{SL}(2,\mathbb{Z})\)-action on the representation variety \(\operatorname{Hom}(\mathbb{Z}\times\mathbb{Z},S_{3})/S_{3}\) with orbits \(\{C_{1}\}\), \(\{C_{2},C_{2}^{\prime},C_{3},C_{4}\}\) and \(\{C_{5},C_{5}^{\prime},C_{6}\}\)._ _In Example 7.23 the \(\operatorname{SL}(2,\mathbb{Z})\)-action on \(\mathcal{M}_{inv}\) is determined uniquely by the action on the objects. This follows, because for all choices of objects \(s,t\in\operatorname{Ob}\mathcal{M}_{inv}\) the groupoid \(\mathcal{M}_{inv}\) has at most one morphism \(f:s\to t\). 
In Example 7.24 an analogous statement holds for morphisms between the objects \(C_{1},C_{5},C_{5}^{\prime},C_{6}\), since all of them are identity morphisms._ _In contrast, the \(\operatorname{SL}(2,\mathbb{Z})\)-action on the automorphisms of \(C_{2},C_{2}^{\prime},C_{3},C_{4}\) in Example 7.24 is non-trivial and can be identified with an orbit of the \(\operatorname{SL}(2,\mathbb{Z})\)-action on \(\operatorname{Mat}(2\times 2,\mathbb{Z}_{3})\) by left multiplication. In this action, \(D_{a}\) and \(D_{b}\) correspond to left-multiplication with the generators_ \[A=\begin{pmatrix}1&0\\ 1&1\end{pmatrix},\qquad B=\begin{pmatrix}1&-1\\ 0&1\end{pmatrix}.\] _Automorphisms of \(C_{2},C_{2}^{\prime},C_{3},C_{4}\) in \(\mathcal{M}_{inv}\) are given by group homomorphisms \(\tau:\mathbb{Z}\times\mathbb{Z}\to A_{3}^{\times 2}\cong\mathbb{Z}_{3}^{ \times 2}\), which are determined by the images \(\tau(1,0),\tau(0,1)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}\). Interpreting an element \((c,d)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}\) as an automorphism \(c:d\to d\) and taking \(\tau(0,1)\) as the first and \(\tau(1,0)\) as the second row of a matrix, we find that the \(\operatorname{SL}(2,\mathbb{Z})\)-action induced by (68) coincides with the \(\operatorname{SL}(2,\mathbb{Z})\)-orbit containing those matrices whose second column is non-trivial._ As our construction yields objects equipped with mapping class group actions and assigns the tensor unit to the sphere \(S^{2}\), it is natural to ask if the protected objects satisfy the axioms of a modular functor from [BK, Def 5.1.1]. Although the latter are formulated for categories of vector spaces, they have obvious generalisations to other symmetric monoidal categories. However, the assignment of protected objects to surfaces can in general not be expected to satisfy these axioms. The problem is axiom (iii) in [BK, Def 5.1.1], which requires that the object assigned to a disjoint union \(\Sigma_{1}\amalg\Sigma_{2}\) of surfaces is the tensor product of the objects assigned to \(\Sigma_{1},\Sigma_{2}\). This does in general not hold for the protected objects from Definition 4.8, as they are constructed by taking equalisers and coequalisers. The tensor product of two (co)equalisers in a symmetric monoidal category \(\mathcal{C}\) is in general not a (co)equaliser of their tensor product. This is already apparent in the symmetric monoidal category \(\mathrm{Ab}=\mathbb{Z}-\mathrm{Mod}\) with the usual tensor product that does not preserve equalisers. Nevertheless, the construction satisfies this axiom, if the underlying symmetric monoidal category is Set, SSet or Cat. **Proposition 8.5**.: _Let \(H\) be a group object in \(\mathcal{C}=\mathrm{Set}\), SSet or Cat and \(\Sigma\) an oriented surface with connected components \(\Sigma_{1},\dots,\Sigma_{k}\). Then the protected object for \(\Sigma\) is the product of the protected objects for \(\Sigma_{1},\dots,\Sigma_{k}\)._ Proof.: The claim follows by induction over \(k\), and it is sufficient to consider \(k=2\). By Theorem 5.21 we can compute the protected object for \(\Sigma\) by choosing a standard graph \(\Gamma_{i}\) from (15) on each connected component \(\Sigma_{i}\). We denote by \(E_{i}\) the edge set of \(\Gamma_{i}\) and by \(E=E_{1}\cup E_{2}\) the edge set of \(\Gamma\). The (co)actions \(\rhd\) and \(\delta\) for \(\Gamma\) are then given by formula (23), and it follows directly that they are the products of the (co)actions \(\rhd_{i}\), \(\delta_{i}\) for \(\Gamma_{i}\), up to braidings. 
Lemma 4.5 and some simple computations imply that \((H^{\times E},\rhd,\delta)\) is a Yetter-Drinfeld module over \(H\times H\). We denote by \(F_{\Gamma}:H^{\times E}\to H\times H\) the morphism from Example 2.9 for \(\Gamma\) and by \(F_{\Gamma_{i}}:H^{\times E_{i}}\to H\) the corresponding morphisms for \(\Gamma_{i}\). \(\bullet\quad\mathcal{C}=\mathrm{Set}\): As in Example 2.14 we obtain \[M^{coH} =F_{\Gamma}^{-1}(1)=\{(x,y)\in H^{\times E_{1}}\times H^{\times E _{2}}:(F_{\Gamma_{1}}(x),F_{\Gamma_{2}}(y))=(1,1)\}\,=\,M_{1}^{coH}\times M_{2 }^{coH},\] \[M^{H} =\{H^{\times 2}\rhd(m_{1},m_{2}):m_{1}\in H^{\times E_{1}},m_{2}\in H ^{\times E_{2}}\}=M_{1}^{H}\times M_{2}^{H}\] with inclusions \(\iota=(\iota_{1},\iota_{2}):M^{coH}\to H^{\times E}\) and canonical surjections \(\pi=(\pi_{1},\pi_{2}):H^{\times E}\to M^{H}\). As the image of a morphism \(f:A\to B\) in Set is the usual image of a map, we have \(\mathrm{im}((f_{1},f_{2}))=(\mathrm{im}(f_{1}),\mathrm{im}(f_{2}))\) and \(M_{inv}=M_{inv,1}\times M_{inv,2}\). \(\bullet\quad\mathcal{C}=\mathrm{SSet}\): By Proposition 6.3 the coinvariants are given by the sets \[M_{n}^{coH}=\{(m_{1},m_{2})\in(H^{\times E_{1}}\times H^{\times E_{2}})_{n}:(F _{\Gamma_{1},n}(m_{1}),F_{\Gamma_{2},n}(m_{2}))=(1,1)\}\,=\,M_{1,n}^{coH} \times M_{2,n}^{coH}.\] Face maps and degeneracies are induced by the ones of \(H^{\times E_{1}}\times H^{\times E_{2}}\) and the simplicial map \(\iota:M^{coH}\to H^{\times E}\) is given by the maps \(\iota_{n}=(\iota_{1,n},\iota_{2,n})\). As the product in SSet is objectwise, this yields \(M^{coH}=M_{1}^{coH}\times M_{2}^{coH}\). An analogous argument shows that the sets \(M_{n}^{H}\), \((M_{inv})_{n}\) from Proposition 6.3 are given by \(M_{n}^{H}=M_{1,n}^{H}\times M_{2,n}^{H}\) and \((M_{inv})_{n}=(M_{inv,1})_{n}\times(M_{inv,2})_{n}\). \(\bullet\quad\mathcal{C}=\mathrm{Cat}\): By Lemma 7.11 the coinvariants \(M^{coH}\) for the comodule \(M=H^{\times E}\cong H^{E_{1}}\times H^{\times E_{2}}\) are the subcategory with objects \[\mathrm{Ob}(M^{coH})=\{(A_{1},A_{2})\mid A_{i}\in\mathrm{Ob}(H^{ \times E_{i}}),F_{\Gamma_{i}}(A_{i})=e\}\] \[\mathrm{Hom}_{M^{coH}}((A_{1},A_{2}),(A_{1}^{\prime},A_{2}^{ \prime}))=\{(f_{1},f_{2})\mid f_{i}\in\mathrm{Hom}_{M}(A_{i},A_{i}^{\prime}),F _{\Gamma_{i}}(f_{i})=1_{e}\}\] and hence \(M^{coH}\) is the product category \(M_{1}^{coH}\times M_{2}^{coH}\). For the invariants we can apply Lemma 7.6 and the results from SSet. As a right adjoint, the nerve \(N\) preserves products, and hence \(N(\rhd)=N(\rhd_{1})\times N(\rhd_{2})\), up to braidings, and the same holds for the trivial actions \(\epsilon\otimes 1_{H^{\otimes E}}\) and \(\epsilon\otimes 1_{H^{\otimes E_{i}}}\). It follows that the coequaliser of \(N(\rhd)\) and \(N(\epsilon^{\times 2}\times 1_{H^{\times E}})\) is the product of the coequalisers of \(N(\rhd_{i})\) and \(N(\epsilon\times 1_{H^{\times E_{i}}})\). As the homotopy functor preserves finite products, see for instance [Jo, Prop. 1.3], this yields \(M^{H}=M_{1}^{H}\times M_{2}^{H}\) and \(M_{inv}=M_{inv,1}\times M_{inv,2}\). ### Acknowledgements A.-K. Hirmer gratefully acknowledges a PhD fellowship of the Erika Giehrl foundation, Friedrich-Alexander-Universitat Erlangen-Nurnberg.
2303.07741
Introduction to Faraday tomography and its future prospects
Faraday tomography is a new method of the study of cosmic magnetic fields enabled by broadband low-frequency radio observations. By Faraday tomography, it is possible to obtain the Faraday dispersion function which contains information on the line-of-sight distributions of magnetic fields, thermal electron density, and cosmic-ray electron density by measuring the polarization spectrum from a source of synchrotron radiation over a wide band. Furthermore, by combining it with 2-dimensional imaging, Faraday tomography allows us to explore the 3-dimensional structure of polarization sources. The application of Faraday tomography has been active in the last 20 years, when broadband observation has become technically feasible. However, the Faraday dispersion function is mathematically the Fourier transform of the polarization spectrum, and since the observable band is finite, it is impossible to obtain a complete Faraday dispersion function by performing Fourier transform. In addition, the Faraday dispersion function does not directly reflect the distribution of magnetic field, thermal-electron density, and cosmic-ray electron density in the physical space, and its physical interpretation is not straightforward. Despite these two difficult problems, Faraday tomography is attracting much attention because it has great potential as a new method for studying cosmic magnetic fields and magnetized plasmas. In particular, the next-generation radio telescope SKA (Square Kilometre Array) is capable of polarization observation with unprecedented sensitivity and broad bands, and the application of Faraday tomography is expected to make dramatic progress in the field of cosmic magnetic fields. In this review, we explain the basics of Faraday tomography with simple and instructive examples. Then representative algorithms to realize Faraday tomography are introduced and finally some applications are shown.
Keitaro Takahashi
2023-03-14T09:39:23Z
http://arxiv.org/abs/2303.07741v1
# Introduction to Faraday tomography and its future prospects ###### Abstract Faraday tomography is a new method of the study of cosmic magnetic fields enabled by broadband low-frequency radio observations. By Faraday tomography, it is possible to obtain the Faraday dispersion function, which contains information on the line-of-sight distributions of magnetic fields, thermal electron density, and cosmic-ray electron density, by measuring the polarization spectrum from a source of synchrotron radiation over a wide band. Furthermore, by combining it with 2-dimensional imaging, Faraday tomography allows us to explore the 3-dimensional structure of polarization sources. The application of Faraday tomography has been active in the last 20 years, when broadband observation has become technically feasible, and polarization sources such as interstellar space, supernova remnants and galaxies have been investigated. However, the Faraday dispersion function is mathematically the Fourier transform of the polarization spectrum, and since the observable band is finite, it is impossible to obtain a complete Faraday dispersion function by performing the Fourier transform. For this reason, various methods have been developed to accurately estimate the Faraday dispersion function from the observed polarization spectrum. In addition, the Faraday dispersion function does not directly reflect the distribution of magnetic field, thermal-electron density, and cosmic-ray electron density in the physical space, and its physical interpretation is not straightforward. Despite these two difficult problems, Faraday tomography is attracting much attention because it has great potential as a new method for studying cosmic magnetic fields and magnetized plasmas. In particular, the next-generation radio telescope SKA (Square Kilometre Array) is capable of polarization observation with unprecedented sensitivity and broad bands, and the application of Faraday tomography is expected to make dramatic progress in the field of cosmic magnetic fields. In this review, we explain the basics of Faraday tomography with simple and instructive examples. Then representative algorithms to realize Faraday tomography are introduced and finally some applications are shown.
Keywords: magnetic fields, radio astronomy, polarization, galaxies: general

PASJ 2018, 00(0), 1-35, doi: 10.1093/pasj/xxx000 (Review)

[1] Kumamoto University, Graduate School of Science and Technology, 2-39-1 Kurokami, Chuo-ku, Kumamoto 860-8555, Japan
[2] Kumamoto University, International Research Organization for Advanced Science and Technology, 2-39-1 Kurokami, Chuo-ku, Kumamoto 860-8555, Japan
[3] E-mail: [email protected]

Received ; Accepted

###### Contents

* 1 Introduction
* 2 Basics
* 2.1 Stokes parameters
* 2.2 Synchrotron radiation
* 2.3 Faraday rotation
* 2.4 Depolarization
* 3 Principle of Faraday Tomography
* 3.1 Faraday tomography
* 3.2 Dirty FDF and RMSF
* 3.3 Interference in Faraday depth space
* 3.4 Gaussian FDF
* 3.5 Top-hat FDF
* 3.6 Indices for Faraday tomography
* 4 Models and interpretation of Faraday dispersion function
* 4.1 Coherent magnetic field
* 4.2 Faraday caustics
* 4.3 Helical magnetic field
* 4.4 Turbulent magnetic fields
* 4.5 Coherent and turbulent magnetic fields
* 4.6 Galactic models
* 4.7 Intergalactic magnetic field
* 5 Algorithms of Faraday tomography
* 5.1 RM CLEAN
* 5.2 QU fit
* 5.3 Sparse modelling
* 5.4 CRAFT
* 6 Application of Faraday tomography
* 6.1 Resolution in Faraday-depth space
* 6.2 Correspondence between physical space and Faraday-depth space
* 7 Conclusion

## 1 Introduction

Radio observations, especially polarization observations, have played an extremely important role in the study of cosmic magnetic fields. This is because it is possible to obtain information on the magnetic fields of radio sources and interstellar space through synchrotron radiation, which is caused by interactions between high-energy electrons and magnetic fields, and through Faraday rotation in magnetized plasma. In recent years, the developments in broadband observation technology for radio waves have made it possible to expect revolutionary advances in the research methods of cosmic magnetic fields. The key technique is called Faraday tomography, and the idea itself dates back to Burn (1966). In astronomy, it is generally difficult to obtain information on the distribution of physical quantities along the line of sight. Contrastingly, as we will see later, Faraday tomography allows us to probe the line-of-sight distribution of magnetic fields, thermal-electron density and cosmic-ray electron density. Thus, combining it with 2-dimensional imaging in the sky, we can study the 3-dimensional structure of sources. This will be a milestone for astronomy as a whole. As mentioned above, the idea of Faraday tomography itself dates back to 1966, but it was not actually implemented until recently because it requires wideband polarization observations. As will be explained later, Faraday tomography is mathematically a Fourier transform, and without broadband observation data, the Fourier transform for obtaining line-of-sight information cannot be performed effectively. Therefore, Faraday tomography has received wide attention and has been actively applied since the 2000s, when broadband radio observation became practical (Ideguchi et al. 2018). In particular, the Square Kilometre Array (SKA), which will appear in the latter half of the 2020s, will enable unprecedented broadband and high-sensitivity observations, and is expected to further advance the study of cosmic magnetic fields through Faraday tomography (Akahori et al. 2018).
The targets of Faraday tomography are polarization sources with magnetic fields, and one of the most important is galaxies, especially spiral galaxies. Spiral galaxies have both turbulent magnetic fields and global magnetic fields with correlation lengths comparable to their size, and these magnetic fields have a significant influence on the dynamics of the galactic gas. In addition, the magnetic fields are considered to be maintained and amplified by the dynamo mechanism in galaxies, and both the global and turbulent magnetic fields play important roles there. However, many things about galactic dynamo and dynamics are still not well understood, both qualitatively and quantitatively (Widrow 2002). If Faraday tomography can provide information on the 3-dimensional structure of galaxies, such as the magnetic fields, thermal electrons, and cosmic-ray electrons, it will clarify the detailed mechanism of dynamos, the properties of turbulence, and the dynamics of gas. One of the biggest problems about the cosmic magnetic fields is their origin (Kulsrud & Zweibel 2008; Widrow et al. 2012). As mentioned above, galactic magnetic fields are maintained and amplified by the dynamo, but the dynamo cannot create the magnetic fields from zero. Therefore, it needs the seed fields at the galaxy formation. Concerning the seed fields, a variety of hypotheses has been proposed such as cosmological generation in early universe (Takahashi et al. 2005; Ichiki et al. 2006) and magnetogenesis associated with structure formation and cosmic reionization (Ryu et al. 2012), but none of them have been observationally confirmed yet. In any case, the seed magnetic fields are amplified by nonlinear processes in galaxies, and most of the information about the initial condition would have been lost. Nevertheless, it has been pointed out that traces of the seed fields may remain in the shape of large-scale magnetic fields in galaxies. Also, if seed fields are cosmologically generated, they should exist also in intergalactic space, and the detection of weak intergalactic magnetic fields has been reported by some authors (Neronov & Vovk 2010; Takahashi et al. 2012; Takahashi et al. 2013). Faraday tomography can be expected to play an important role here as well. As mentioned above, Faraday tomography can provide information on the distribution of the magnetic field in the line-of-sight direction, which means that, in principle, it is possible to measure the galactic and intergalactic magnetic fields separately. Therefore, Faraday tomography offers a new approach to the origin of the cosmic magnetic fields. In this review, the basic principle of Faraday tomography will be explained with instructive examples, and then algorithms to realize it and some applications are given. This review is organized as follows. In section 2, basic knowledge necessary to understand Faraday tomography, such as Stokes parameters, synchrotron radiation and Faraday rotation, are summarized. The principle of Faraday tomography is explained in section 3. The Faraday dispersion function is the most important quantity in Faraday tomography, but its physical meaning and interpretation are not straightforward. Then, in section 4, we will give simple examples to deepen our understanding of the Faraday dispersion function. In section 5, some algorithms to realize Faraday tomography are introduced. We will see some application of Faraday tomography in section 6. Finally, section 7 is devoted to the conclusion. 
## 2 Basics

In this section, we briefly summarize the basic knowledge necessary to understand and formulate Faraday tomography. First, we introduce Stokes parameters, which are basic quantities to describe polarization states of radio waves and appear throughout this article. Synchrotron radiation is a major mechanism of polarized emission, which is caused by magnetic fields and high-energy charged particles. The targets of Faraday tomography are basically astronomical objects with synchrotron radiation. Polarized emission is not directly observed in general but is affected by Faraday rotation, where the polarization angle is rotated by magnetized media during the propagation of the radio waves. This is a fundamental ingredient of Faraday tomography, and Faraday rotation itself has been utilized for the study of cosmic magnetic fields. Further, polarized waves can be affected and attenuated for various reasons, and these effects should be taken into account properly to interpret observation data. More detailed explanations on these matters can be found, for example, in Rybicki & Lightman (1986).

### Stokes parameters

Here, we introduce Stokes parameters, which are convenient to express the polarization state of electromagnetic waves. We consider a plane wave propagating along the \(z\) axis and take the (\(x\),\(y\)) plane perpendicular to it. The complex electric field at a certain spatial point can be written as follows. \[E_{x}(t)=\epsilon_{x}e^{-i\omega t+i\delta_{1}}, \tag{1}\] \[E_{y}(t)=\epsilon_{y}e^{-i\omega t+i\delta_{2}}, \tag{2}\] where \(\epsilon\), \(\omega\), \(\delta\) are the amplitude, frequency and phase. Here, let us consider a quasi-monochromatic wave, which is a superposition of plane waves having slightly different frequencies. If the amplitude and phase are totally random with respect to the frequency, it is not polarized as a whole. However, if they are correlated at different frequencies, polarization remains to some extent. The state of polarization is described by the following four Stokes parameters. \[I=\langle|E_{x}|^{2}+|E_{y}|^{2}\rangle=\langle\epsilon_{x}^{2}\rangle+\langle\epsilon_{y}^{2}\rangle \tag{3}\] \[Q=\langle|E_{x}|^{2}-|E_{y}|^{2}\rangle=\langle\epsilon_{x}^{2}\rangle-\langle\epsilon_{y}^{2}\rangle \tag{4}\] \[U=\langle 2\mathrm{Re}(E_{x}^{*}E_{y})\rangle=2\langle\epsilon_{x}\epsilon_{y}\cos{(\delta_{1}-\delta_{2})}\rangle \tag{5}\] \[V=-\langle 2\mathrm{Im}(E_{x}^{*}E_{y})\rangle=2\langle\epsilon_{x}\epsilon_{y}\sin{(\delta_{1}-\delta_{2})}\rangle \tag{6}\] Here, \(\langle\cdots\rangle\) is an average with respect to frequency (Footnote 1: we can also take a time average for a timescale much longer than the period of the radio waves to define the Stokes parameters for a specific frequency). \(I\) represents the total intensity, \(Q\) and \(U\) are linear polarization and \(V\) is circular polarization. While \(I\) is non-negative, \(Q\), \(U\) and \(V\) can have negative values. In general, we have, \[I^{2}\geq Q^{2}+U^{2}+V^{2}. \tag{7}\] The equality holds for purely polarized waves but partially polarized waves lead to the inequality. Then, we define the total polarization fraction, linear polarization fraction and circular polarization fraction as, \[p\equiv\frac{\sqrt{Q^{2}+U^{2}+V^{2}}}{I} \tag{8}\] \[p_{L}\equiv\frac{\sqrt{Q^{2}+U^{2}}}{I} \tag{9}\] \[p_{V}\equiv\frac{|V|}{I}. \tag{10}\] For linearly polarized waves, the polarization angle is defined as \[\chi\equiv\frac{1}{2}\arctan\frac{U}{Q}.
\tag{11}\] Here, we take a range of \(-\pi/2\leq\chi\leq\pi/2\), where it is understood that \(-\pi/2\leq\chi\leq-\pi/4\) for \(Q\leq 0\) and \(U\leq 0\), \(-\pi/4\leq\chi\leq\pi/4\) for \(Q\geq 0\), and \(\pi/4<\chi<\pi/2\) for \(Q\leq 0\) and \(U\geq 0\). For a propagation direction along \(z\) axis, there is a rotational freedom for the choice of (\(x\),\(y\)) plane. For a rotation in (\(x\),\(y\)) plane by an angle \(\theta\), electric field transforms as, \[\left(\begin{array}{c}E_{x}^{\prime}\\ E_{y}^{\prime}\end{array}\right)=\left(\begin{array}{cc}\cos\theta&-\sin \theta\\ \sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}E_{x}\\ E_{y}\end{array}\right). \tag{12}\] The, Stokes parameters transform as, \[\left(\begin{array}{c}I^{\prime}\\ Q^{\prime}\\ U^{\prime}\\ V^{\prime}\end{array}\right)=\left(\begin{array}{cccc}1&0&0&0\\ 0&\cos 2\theta&-\sin 2\theta&0\\ 0&\sin 2\theta&\cos 2\theta&0\\ 0&0&0&1\end{array}\right)\left(\begin{array}{c}I\\ Q\\ U\\ V\end{array}\right). \tag{13}\] Thus, while \(I\) and \(V\) are invariant under a rotation, \(Q\) and \(U\) are not invariant but a combination \(Q^{2}+U^{2}\) is still invariant. Therefore, \(p\), \(p_{L}\) and \(p_{V}\) are also invariant. Further, it is seen that a rotation in (\(x\),\(y\)) plane by an angle \(\theta\) corresponds to a rotation in (\(Q\),\(U\)) plane by an angle \(2\theta\). The polarization angle is transformed as \(\chi^{\prime}=\chi+\theta\), which shows that it has a fixed direction in the sky irrespective of the choice of (\(x\),\(y\)) plane. Thus, if we define \[\mathbf{P}\equiv\sqrt{Q^{2}+U^{2}}(\cos\chi,\sin\chi) \tag{14}\] in (\(x\),\(y\)) plane, this behaves as a vector under rotation. This vector is called the polarization vector. On the other hand, we define the complex polarization intensity as, \[P=Q+iU=|P|e^{2i\chi}. \tag{15}\] It should be noted that the phase is \(2\chi\), not \(\chi\). Hereafter, we will consider only linear polarization with \(\delta_{1}-\delta_{2}=0,\pi\) and circular polarization is omitted. ### Synchrotron radiation Synchrotron radiation is emitted by relativistic charged particles accelerated by magnetic fields. High energy cosmic-ray electrons are widely distributed in galaxies and they emit synchrotron radiation through the interaction with galactic magnetic fields. If magnetic fields are coherent, the radiation has a high linear-polarization fraction with a polarization plane perpendicular to magnetic fields. Thus, synchrotron radiation is useful for the study of cosmic magnetism. First, the intensity of synchrotron radiation per unit frequency by a single charged particle is expressed as a function of frequency \(\nu\), \[P(\nu)=\frac{\sqrt{3}q^{3}B\sin\varphi}{mc^{2}}F\left(\frac{\nu}{\nu_{c}} \right), \tag{16}\] where \(q\) and \(m\) are electric charge and mass of the particle. Here, \(\varphi\) is the angle between the particle velocity and magnetic field, and there is no emission if the particle moves along the magnetic field line (\(\varphi\!=\!0\)). The critical frequency is denoted as \(\nu_{c}\) and expressed, using the Lorentz factor of the particle \(\gamma\!=\!E/mc^{2}\), as \[\nu_{c}=\frac{3qB\sin\varphi}{4\pi mc}\gamma^{2}. \tag{17}\] Here, \(F(x)\) is a function which represents the spectral shape and has a peak at \(x\!\sim\!0.29\). Further, it behaves as \(F(x)\!\propto\!x^{1/3}\) and \(F(x)\!\propto\!x^{1/2}e^{-x}\) for \(x\!\ll\!1\) and \(x\!\gg\!1\), respectively. 
Thus, most of the radiation energy is emitted at around the critical frequency \(\nu_{c}\). In the case of an electron, we have, \[\nu_{c}\!=\!16\ {\rm MHz}\left(\frac{B\sin\varphi}{1\ \mu{\rm G}}\right)\left( \frac{E}{1\ {\rm GeV}}\right)^{2}. \tag{18}\] Next, we consider synchrotron radiation from a ensemble of particles. Galactic cosmic rays often have a power-law energy spectrum and we assume the number density of the form, \[N(\gamma)d\gamma\!=\!N_{0}\gamma^{-\alpha}d\gamma \tag{19}\] for a range of \(\gamma_{1}\!\leq\!\gamma\!\leq\!\gamma_{2}\). Here the spectral index \(\alpha\) takes a value around \(\sim\!2.6\!-\!3.0\). The intensity spectrum of synchrotron radiation from such particles is as follows. \[J_{\nu}=\int_{\gamma_{1}}^{\gamma_{2}}P(\nu)N(\gamma)d\gamma\!\propto\!\nu^{- (\alpha-1)/2}B^{(\alpha+1)/2} \tag{20}\] Therefore, the radiation spectrum is also power-law and the spectral index and dependence on the magnetic field are determined by \(\alpha\). For a nominal value of \(\alpha\!=\!3.0\), we have \(J_{\nu}\!\propto\!\nu^{-1}B^{2}\). The value of \(\alpha\) cannot be measured directly but can be estimated from observation of synchrotron radiation. As we saw above, the intensity of synchrotron radiation is determined by the magnetic field strength and energy density of charged particles. We cannot separate them only from observation of synchrotron radiation. However, it is possible if we assume the equipartition of energy between magnetic fields and charged particles or minimize the total energy density. While synchrotron radiation from each particle is elliptically polarized, the ensemble average leaves only linear polarization and the polarization fraction is given by, \[p=\frac{\alpha+1}{\alpha+7/3}. \tag{21}\] It should be noted that this is independent of the wavelength. A nominal value of \(\alpha\!=\!3.0\) gives \(p\!=\!0.75\). Such a high polarization fraction cannot be seen except for pulsars. In the discussion so far, it was assumed that the magnetic field is uniform, but in many systems, there is a turbulence in magnetized media and the global magnetic field is also curved. In such cases, the polarization is canceled (depolarization) and the polarization fraction decreases substantially. For example, when magnetic fields have a coherent component with strength \(B_{c}\) and a turbulent component with mean of zero and variance of \(\sigma_{B}^{2}\), the polarization fraction is given by (Burn 1966), \[p^{\prime}=p\frac{B_{c}^{2}}{B_{c}^{2}+\sigma_{B}^{2}}. \tag{22}\] As we will see in section 2.4, there are many mechanisms which reduce the polarization fraction. The degree of depolarzation is dependent on wavelength for some mechanisms and independent for others. There have been many studies to explore the structure of the galactic magnetic fields by observing synchrotron radiation. Fig. 1 is a radio image of M51 in Fletcher et al. (2011). This galaxy is face-on so that it is suitable for studying the structure of magnetic field along the galactic plane. The top panel of Fig. 1 shows contours of polarization intensity at 3 cm overlaid on an optical image. Magnetic field lines estimated from the polarization angle are also shown. The angular resolution of the contours is \(15^{\prime\prime}\), where \(1^{\prime\prime}\) corresponds to 37 pc at the distance of M51 (7.6 Mpc). Therefore, the contours has a spatial resolution of about 560 pc. 
The spread of polarization intensity is wider than that of the arm structure of the optical image, suggesting that the magnetic field and cosmic-ray electrons also exist in the space between the arms. Further, magnetic field lines are along the arms in the arm regions. The bottom panel of Fig. 1 represents a map of the spectral index of total intensity obtained from observations at 3 cm and 20 cm. The contours in this panel show the total intensity, and it is seen that the contours have a similar structure to that of the optical image, compared with the polarization map. The spectral index is different between arm and inter-arm regions, and \(-0.9\lesssim\beta\lesssim-0.6\) for arms and \(-1.2\lesssim\beta\lesssim-0.9\) for inter-arm. The index in arms is larger than expected from synchrotron radiation, which suggests that thermal bremsstrahlung is contributing significantly in these regions. Contrastingly, synchrotron radiation is dominant in the inter-arm region. By assuming energy equipartition, Fletcher et al. (2011) estimated the magnetic field strength in the galactic center, arm and inter-arm regions to be 30 \(\mu\)G, \(20-25\)\(\mu\)G and \(15-20\)\(\mu\)G, respectively. As mentioned above, because incoherence of the magnetic field below the scale of the spatial resolution (570 pc in this case) causes depolarization, the coherent component of the magnetic field can be estimated from the degree of depolarization. As a result, strengths of \(11-13\)\(\mu\)G, \(8-10\)\(\mu\)G and \(10-12\)\(\mu\)G were obtained in the inner arm region (\(1-2\) kpc from the center), outer arm region and inter-arm region, respectively. Thus, they estimated that turbulent magnetic fields are stronger than these coherent fields by a factor of about 1.5 in the inner and outer arm regions and about 1.2 in the inter-arm region.

### Faraday rotation

Faraday rotation is a phenomenon in which the plane of polarization of an electromagnetic wave rotates in a magnetized plasma, and occurs because the dispersion relation between right- and left-circular polarizations differs in the magnetized plasma. Denoting the polarization angle at the emission as \(\chi_{0}\), the polarization angle after the propagation through magnetized media is dependent on wavelength \(\lambda\) and expressed as, \[\chi=\chi_{0}+RM~\lambda^{2}, \tag{23}\] which is linear with respect to \(\lambda^{2}\). Here the coefficient is called the rotation measure and has the following value, \[RM=k\int n_{e}B_{||}dx\approx 811.9~[\mathrm{rad~m}^{-2}]\int\left(\frac{n_{e}}{\mathrm{cm}^{-3}}\right)\left(\frac{B_{||}}{\mu\mathrm{G}}\right)\left(\frac{dx}{\mathrm{kpc}}\right), \tag{24}\] \[k\equiv\frac{e^{3}}{8\pi^{2}\epsilon_{0}m_{e}^{2}c^{3}}, \tag{25}\] where \(\epsilon_{0}\) is the vacuum permittivity and \(m_{e}\) is the electron mass. Here, \(x\) is the spatial coordinate along the line of sight and \(n_{e}\) is the density of thermal electrons. \(B_{||}\) is the line-of-sight component of the magnetic field and is defined to be positive for a magnetic field in the direction from the source to the observer. This simple linear relation between \(\lambda^{2}\) and \(\chi\), Eq. (23), holds if only a single source exists along the line of sight. To be more precise, it holds if only a single Faraday-thin source, which will be defined later, exists along the line of sight. If the slope of the relation is observationally measured, we obtain the integration of the product, \(n_{e}B_{||}\), from the observer to the source.
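As a quick numerical illustration of Eq. (24), the short sketch below (added here; the parameter values are illustrative assumptions for a uniform medium, not results from any cited work) evaluates the rotation measure of a homogeneous slab of path length \(L\):

```python
# Illustrative evaluation of Eq. (24) for a uniform medium of path length L:
# RM ~ 811.9 [rad/m^2] x (n_e / cm^-3) x (B_par / uG) x (L / kpc).
def rotation_measure(n_e_cm3, B_par_uG, L_kpc):
    return 811.9 * n_e_cm3 * B_par_uG * L_kpc  # rad/m^2

print(rotation_measure(0.1, 1.0, 1.0))    # Galactic-disc-like values -> ~81 rad/m^2
print(rotation_measure(1e-6, 1e-3, 1e6))  # 1 nG over 1 Gpc of intergalactic medium -> ~0.8 rad/m^2
```

These orders of magnitude anticipate the typical Galactic and intergalactic values quoted later in this section.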
Further, by assuming the electron density distribution along the line of sight, the magnetic field strength can be estimated. Faraday rotation, along with synchrotron radiation, has also been used extensively as a means of studying cosmic magnetic fields.

Figure 1: Radio images of M51 (Fletcher et al., 2011). Top: contours of polarization intensity at \(3\) cm are overlaid on an optical image by Hubble Space Telescope. Magnetic field lines estimated from the polarization angle are also shown. Bottom: map of spectral index of total intensity obtained from observations at \(3\) cm and \(20\) cm. Contours of total intensity are also shown.

In Taylor et al. (2009), the large-scale structure of Galactic magnetic fields was studied using a large data set of NVSS (NRAO VLA Sky Survey). They derived rotation measures of 37,543 polarized sources observed at 1.4 GHz with a typical error of \(1-2\) rad/m\({}^{2}\). Fig. 2 is the obtained rotation measure map, which covers about 82% of the sky area north of declination \(-40^{\circ}\). The source density is about one per square degree. The bottom panel is a smoothed image, taking the median value of the rotation measure of sources within a circle with a radius of \(4^{\circ}\). It is seen that the rotation measure reaches as high as 200 rad/m\({}^{2}\) at the Galactic plane, while it is \(O(10)\) rad/m\({}^{2}\) in the polar directions. Fluctuations of rotation measure over various scales are also seen. Extension and further detailed analyses of this map have been done in Oppermann et al. (2012); Oppermann et al. (2015). As we saw in Eq. (24), the rotation measure is essentially an integration of magnetic field strength and thermal electron density from the source to the observer, and it is in principle impossible to know the distribution along the line of sight. The values of rotation measure obtained in Taylor et al. (2009) therefore receive contributions not just from our Galaxy but also from the medium around the source and the intergalactic medium. Further, there can be intrinsic rotation measure inside the source. Thus, we need to be careful in interpreting the map of rotation measure. Typical values of rotation measure of our galaxy and the intergalactic medium are as follows.

* our galaxy: \(n_{e}\sim\) 0.1 cm\({}^{-3}\), \(B\sim\) 1 \(\mu\)G, \(x\sim\) 1 kpc \(\rightarrow\) RM \(\sim\) 100 rad/m\({}^{2}\)
* intergalactic medium: \(n_{e}\sim\) 10\({}^{-6}\) cm\({}^{-3}\), \(B\sim\) 1 nG, \(x\sim\) 1 Gpc \(\rightarrow\) RM \(\sim\) 1 rad/m\({}^{2}\)

In reality, both the magnetic field and the thermal electron density have spatial fluctuations, so that the rotation measure also has a large dispersion depending on the direction in the sky. In addition, the contributions inside sources are not correlated between different sources, and the contributions from the intergalactic medium to different sources are not correlated unless they are very close to each other. Thus, these contributions are considered to have little effect on the large-scale pattern of the rotation measure map. Because most of the polarized sources in Taylor et al. (2009) are located outside the Galaxy, it is expected that their rotation measures include the contribution of the Galaxy to a considerable extent and, therefore, the nature of the Galactic magnetic field can be investigated from this rotation measure map.
In particular, the pattern at low Galactic latitudes reflects the shape of the global magnetic fields in the Galactic disk, and that at high Galactic latitudes reflects magnetic fields and interstellar gas in the cylindrical region extending perpendicular to the Galactic plane from the vicinity of the solar system, especially the halo region. In the successive paper (Stil et al., 2011), they analyzed the spatial fluctuations of rotation measure with the structure function and discussed the structure of magnetic fields in the vicinity of the solar system and the entire Galaxy.

Figure 2: Top: rotation measures of 37,543 polarized sources of NVSS (Taylor et al., 2009). Red and blue points indicate positive and negative values. Bottom: map of the median value of the rotation measure of sources within a circle with a radius of \(4^{\circ}\). © AAS. Reproduced with permission.

Gaensler et al. (2005) studied the strength and structure of magnetic fields in the Large Magellanic Cloud (LMC) with rotation measures of polarized sources behind it. They used rotation measures of 291 sources in a sky area of 130 deg\({}^{2}\) around the LMC observed by ATCA (Australia Telescope Compact Array). 140 of the 291 objects lie outside the LMC, and their average rotation measure can be used to estimate the contribution from the Galaxy. Then the Faraday rotation due to the LMC was estimated by subtracting the contribution of the Galaxy from the observed rotation measures of the sources behind the LMC. Fig. 3 is the map of rotation measures due to the LMC. The distribution implies that the LMC has a coherent axisymmetric spiral magnetic field with a strength of about 1 \(\mu\)G. As described above, Faraday rotation has been used to investigate the structure of the magnetic fields of galaxies. However, if there are few polarized objects behind the target object, much information cannot be obtained, so the targets have been limited to the Galaxy and nearby galaxies. In the future, as the sensitivity of radio telescopes increases, the number of polarized objects that can be observed will increase substantially. Thus, it will be possible to understand in more detail the structure of the magnetic field of not only the Galaxy and the LMC but also distant galaxies.

### Depolarization

As we saw in section 2.2, synchrotron radiation due to a coherent magnetic field has a large linear polarization fraction. However, various processes can cause depolarization and reduce the polarization fraction of observed radio waves. Below, let us summarize some important processes of depolarization following Burn (1966). **beam depolarization 1**: Spatial variation of the polarization angle within a beam leads to depolarization. For example, we consider a case where there is a turbulent component in the magnetic field which causes synchrotron radiation and assume that the polarization angle at a position \(\mathbf{x}\), \(\chi(\mathbf{x})\), follows a Gaussian distribution with mean \(\chi_{0}\) and variance \(\sigma_{\chi}^{2}\). \[\chi(\mathbf{x})=\chi_{0}+\delta\chi(\mathbf{x}), \tag{26}\] \[\langle\delta\chi(\mathbf{x})\rangle=0, \tag{27}\] \[\langle(\delta\chi(\mathbf{x}))^{2}\rangle=\sigma_{\chi}^{2}. \tag{28}\] Here \(\langle\cdots\rangle\) represents an average within a beam.
Using the following formulas for a Gaussian random variable, \[\langle(\delta\chi(\mathbf{x}))^{2n}\rangle=(2n-1)!!\ \sigma_{\chi}^{2n}, \tag{29}\] \[\langle(\delta\chi(\mathbf{x}))^{2n+1}\rangle=0, \tag{30}\] the complex polarization intensity integrated within a beam is given by, \[\langle P(\mathbf{x})\rangle=\langle P_{0}e^{2i\chi(\mathbf{x})}\rangle=P_{0}e^{2i\chi_{0}}\left\langle 1+2i\delta\chi(\mathbf{x})-\frac{2^{2}}{2!}(\delta\chi(\mathbf{x}))^{2}-i\frac{2^{3}}{3!}(\delta\chi(\mathbf{x}))^{3}+\frac{2^{4}}{4!}(\delta\chi(\mathbf{x}))^{4}+\cdots\right\rangle=P_{0}e^{2i\chi_{0}}\left(1-2\sigma_{\chi}^{2}+\frac{2^{2}}{2!}\sigma_{\chi}^{4}+\cdots\right)=P_{0}e^{2i\chi_{0}}e^{-2\sigma_{\chi}^{2}}. \tag{31}\] Here, \(P_{0}\) is the intrinsic polarization fraction and we see that depolarization occurs by a factor of \(e^{-2\sigma_{\chi}^{2}}\). It should be noted that this effect is independent of wavelength. **beam depolarization 2**: Let us consider a case where polarized radiation experiences Faraday rotation in a magnetized medium somewhere between the source and observer. Even if the initial polarization angle is uniform within a beam, the observed polarization angle fluctuates within the beam if the rotation measure varies depending on the location. As an example, we assume that the rotation measure at a position \(\mathbf{x}\), \(RM(\mathbf{x})\), follows a Gaussian distribution with mean \(RM_{0}\) and variance \(\sigma_{RM}^{2}\). \[RM(\mathbf{x})=RM_{0}+\delta RM(\mathbf{x}) \tag{32}\] \[\langle\delta RM(\mathbf{x})\rangle=0 \tag{33}\] \[\langle(\delta RM(\mathbf{x}))^{2}\rangle=\sigma_{RM}^{2} \tag{34}\] Then, in a similar way as beam depolarization 1, the complex polarization intensity integrated within a beam can be calculated as, \[\langle P(\mathbf{x})\rangle=\langle P_{0}e^{2i(RM(\mathbf{x})\lambda^{2}+\chi_{0})}\rangle=P_{0}e^{2i(RM_{0}\lambda^{2}+\chi_{0})}e^{-2\sigma_{RM}^{2}\lambda^{4}} \tag{35}\] While the depolarization factor is again exponential with respect to the variance, it depends on wavelength in contrast to Eq. (31).

Figure 3: Rotation measures of polarized sources behind the Large Magellanic Cloud (Gaensler et al., 2005). Filled and open green circles represent objects with positive and negative rotation measures and the size is proportional to the magnitude. Purple asterisks indicate rotation measures that are consistent with zero within their errors. The image shows the distribution of emission measure in units of \(\mathrm{pc}\ \mathrm{cm}^{-6}\).

**band-width depolarization**: Even if the intrinsic polarization angle is independent of wavelength, Faraday rotation induces the wavelength dependence. When the rotation measure is relatively large and the frequency channel is wide, the polarization angle varies significantly within a channel, which leads to depolarization. Within a channel with \(\nu\sim\nu+\delta\nu\), which corresponds to a squared wavelength difference of \(\delta\lambda^{2}\), the variation of the polarization angle is given by, \[RM\times\delta\lambda^{2}=1.8\times 10^{-2}\ \mathrm{rad}\left(\frac{RM}{100\ \mathrm{rad}/\mathrm{m}^{2}}\right)\left(\frac{\nu}{1\ \mathrm{GHz}}\right)^{-2}\left(\frac{\delta\nu/\nu}{10^{-3}}\right) \tag{36}\]
**Faraday-depth depolarization**: When the line-of-sight component of magnetic fields is nonzero in a radiation region, radio waves experience different amount of Faraday rotation depending on the position along the line of sight. Therefore, even if perpendicular component of magnetic fields is coherent within an observation beam, superposition of polarized emission with different amount of Faraday rotation leads to depolarization. As described above, depolarization occurs due to various processes and the observable polarization intensity is reduced. However, when the degree of depolarization depends on the frequency, the depolarization reflects the physical quantity related to that process. Therefore, it should be possible to obtain information by utilizing depolarization. Faraday tomography, the main theme of this paper, attempts to reproduce the distribution of various physical quantities related to Faraday rotation and synchrotron radiation along the line of sight by using the Faraday-depth depolarization. ## 3 Principle of Faraday Tomography ### Faraday tomography Let us consider a general case with multiple polarized sources with spatial thickness along the line of sight. Observed polarization spectrum, \(P(\lambda^{2})\), is the integration of polarization emissivity, \(\varepsilon(x)\), along the line of sight and, noting that the amount of Faraday rotation depends on the position, it is given by, \[P(\lambda^{2})=\int\varepsilon(x)e^{2i(\chi_{0}(x)+\phi(x)\lambda^{2})}dx. \tag{37}\] Here \(x\) is a coordinate along the line of sight and the fact that radiation with polarization angle \(\chi_{0}(x)\) emitted at \(x\) experiences Faraday rotation of \(\phi(x)\lambda^{2}\) is taken into account in this equation. Further, \(\phi(x)\) is an important quantity called Faraday depth and is defined as, \[\phi(x)\equiv k\int n_{e}B_{||}dx. \tag{38}\] This is the same expression as Eq. (24), but the rotation measure actually represents the dependence of observed polarization angle on the wavelength and is defined as, \[RM(\lambda^{2})\equiv\frac{1}{2}\frac{d}{d\lambda^{2}}\text{arg}[P(\lambda^{2 })]=\frac{d\chi}{d\lambda^{2}}. \tag{39}\] In fact, observed polarization angle is not linear with respect to \(\lambda^{2}\) in general and, thereore, rotation measure itself is a function of \(\lambda^{2}\) as we will see later. Thus, Faraday depth and rotation measure are totally independent quantities. They have the same value only in simple cases where there is only one Farady-thin source, which will be defined later, along the line of sight. From Eq. (38), we can regard \(x\) as a function of \(\phi\) and change the integration variable of Eq. (37) to \(\phi\). \[P(\lambda^{2})=\int_{-\infty}^{\infty}F(\phi)e^{2i\phi\lambda^{2}}d\phi \tag{40}\] Here, \[F(\phi)\equiv\int\varepsilon(x)e^{2i\chi_{0}(x)}\delta(\phi-\phi(x))dx \tag{41}\] is called Faraday dispersion function (FDF) or Faraday spectrum and represents polarization intensity per unit Faraday depth at Faraday depth \(\phi\). Eq. (40) shows that the relation between \(P(\lambda^{2})\) and \(F(\phi)\) is mathematically Fourier transform. Then, we can formally write the inverse Fourier transform as follows. \[F(\phi)=\frac{1}{\pi}\int_{-\infty}^{\infty}P(\lambda^{2})e^{-2i\phi\lambda^{ 2}}d\lambda^{2} \tag{42}\] It should be noted that the integration variable is not \(\lambda\) but \(\lambda^{2}\). 
FDF \(F(\phi)\) gives us information on the distribution of polarized intensity along the line of sight, because \(\phi\) is a function of \(x\). More specifically, Faraday depth \(\phi\) is determined by the distribution of the line-of-sight component of magnetic field (\(B_{||}\)) and thermal electron density (\(n_{e}\)), while the polarized intensity is determined by the perpendicular component of magnetic field (\(B_{\perp}\)) and the density of high-energy charged particles (\(n_{\text{cr}}\)). These are the physical quantities which we can probe with FDF. In the conventional Faraday rotation method explained in the previous section, the information on the magnetic field and thermal electron density is confined to a single value of rotation measure, and only the integrated value from the observer to the polarization source can be probed. Therefore, if the FDF can be obtained, information along the line of sight, which has not been obtained so far, can be studied. Furthermore, if the image of the source is resolved and the FDF is obtained at each point, we can investigate the three-dimensional structure of the source. This methodology is called Faraday tomography and it can have a great impact on astronomy because it is generally difficult to obtain information in the line-of-sight direction in astronomy. However, there are two major problems concerned with the use of the FDF. 1. As mentioned before, the polarization spectrum and the FDF are mathematically Fourier transform of each other and we need to perform integration of the range \(-\infty<\lambda^{2}<\infty\) in order to obtain the FDF from Eq. (42). However, negative values of \(\lambda^{2}\) are not physical and we can obtain observation data only for a limited range of positive \(\lambda^{2}\) depending on the frequency coverage of telescopes. Therefore, in practice, we cannot perform the integration perfectly. This is the problem of reconstruction of the FDF. We will discuss strategies to reconstruct the FDF as precisely as possible from finite data in section 5. 2. As we saw in Eq. (38), Faraday depth is an integration of magnetic field and thermal electron density and there is no one-to-one correspondence between Faraday depth \(\phi\) and physical distance \(x\) if there is a field reversal along the line of sight. Then, in general, different multiple spatial positions can have the same value of \(\phi\) and the distribution of physical quantities in physical space cannot be determined uniquely from the FDF. Thus, even if we could succeed in the precise reconstruction of the FDF from polarization observation, its physical interpretation is not straightforward. This is the problem of physical interpretation of the FDF. We will discuss how the distribution of magnetic fields, thermal electrons and high-energy particles in physical space is reflected in the FDF in section 4. These two problems are essential to Faraday tomography and it cannot be used effectively unless these are overcome. ### Dirty FDF and RMSF As stated before, Fourier transform in Eq. (42) is impossible in principle, but if we put zeros outside observation band, which is called zero padding, the integration can be performed. Then, let us define a window function \(W(\lambda^{2})\), which is unity in observation band and zero outside, and we denote \(\tilde{P}(\lambda^{2})=W(\lambda^{2})P(\lambda^{2})\). 
We denote the FDF obtained by zero padding as \(\tilde{F}(\phi)\): \[\tilde{F}(\phi) = \frac{1}{\pi}\int_{-\infty}^{\infty}\tilde{P}(\lambda^{2})e^{-2i \phi\lambda^{2}}d\lambda^{2} \tag{43}\] \[= \frac{1}{\pi}\int_{-\infty}^{\infty}W(\lambda^{2})P(\lambda^{2}) e^{-2i\phi\lambda^{2}}d\lambda^{2}\] This is called the dirty Faraday dispersion function (dirty FDF) and is generally a different function from the true FDF. To see the difference between \(F(\phi)\) and \(\tilde{F}(\phi)\), let us consider a simple but important example where the FDF is a delta function: \[F(\phi)=fe^{2i\chi_{0}}\delta(\phi-\phi_{0}). \tag{44}\] This represents a source with polarization amplitude \(f\) and polarization angle \(\chi_{0}\) at Faraday depth \(\phi=\phi_{0}\). It should be noted that \(\chi_{0}\) is the intrinsic polarization angle before experiencing Faraday rotation. In this case, polarization spectrum is given by, \[P(\lambda^{2}) = \int_{-\infty}^{\infty}fe^{2i\chi_{0}}\delta(\phi-\phi_{0})e^{2i \phi\lambda^{2}}d\phi \tag{45}\] \[= fe^{2i(\phi_{0}\lambda^{2}+\chi_{0})}.\] Thus, the polarization spectrum is constant with respect to wavelength and the polarization angle varies linearly with \(\lambda^{2}\). This is the expected spectrum when we consider the conventional Faraday rotation. The generalized rotation measure is given by, \[RM=\frac{1}{2}\frac{d}{d\lambda^{2}}\arg[P(\lambda^{2})]=\phi_{0} \tag{46}\] which is the same as the conventional Faraday rotation. Let us assume we observe this source with a wavelength range of \(\lambda^{2}_{\rm min}\leq\lambda^{2}\leq\lambda^{2}_{\rm max}\). In this case, the dirty FDF can be calculated as, \[\tilde{F}(\phi) = \frac{1}{\pi}\int_{-\infty}^{\infty}W(\lambda^{2})fe^{2i(\phi_{0 }\lambda^{2}+\chi_{0})}e^{-2i\phi\lambda^{2}}d\lambda^{2} \tag{47}\] \[= \frac{1}{\pi}\int_{\lambda^{2}_{\rm min}}^{\lambda^{2}_{\rm max} }fe^{2i(\phi_{0}\lambda^{2}+\chi_{0})}e^{-2i\phi\lambda^{2}}d\lambda^{2}\] \[= \frac{ife^{2i\chi_{0}}}{2\pi(\phi-\phi_{0})}(e^{-2i(\phi-\phi_{0} )\lambda^{2}_{\rm max}}-e^{-2i(\phi-\phi_{0})\lambda^{2}_{\rm min}}).\] The absolute value and polarization angle can be written as, \[\left|\tilde{F}(\phi)\right|=\frac{f}{\pi}\left|\frac{\sin\left\{(\phi-\phi_{ 0})(\lambda^{2}_{\rm max}-\lambda^{2}_{\rm min})\right\}}{\phi-\phi_{0}} \right|, \tag{48}\] \[\tilde{\chi}(\phi)=-\frac{1}{2}(\phi-\phi_{0})(\lambda^{2}_{\rm max}+\lambda^ {2}_{\rm min})+\chi_{0}. \tag{49}\] First of all, at \(\phi=\phi_{0}\), where the delta function is located, we have, \[\tilde{F}(\phi_{0})=\frac{fe^{2i\chi_{0}}(\lambda^{2}_{\rm max}-\lambda^{2}_{ \rm min})}{\pi}. \tag{50}\] Thus, the polarization angle at \(\phi=\phi_{0}\) coincides with the intrinsic polarization angle and the absolute value is proportional to \((\lambda^{2}_{\rm max}-\lambda^{2}_{\rm min})\). In Fig. 4, the RMSF, which is defined later and proportional to the dirty FDF, is plotted. Here, we set \(f=1,\phi_{0}=0,\chi_{0}=0\). The real and imaginary parts and absolute value are plotted in top and middle panels, respectively, setting \(\lambda^{2}_{\rm max}-\lambda^{2}_{\rm min}=1\) m\({}^{2}\). As we can see, the dirty FDF has a peak at \(\phi=\phi_{0}\) and decays slowly with oscillation in proportion to \(1/(\phi-\phi_{0})\). Aside from the main peak at \(\phi=\phi_{0}\), there are multiple peaks on both sides, which are called sidelobes. 
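The finite-band behaviour described above is easy to reproduce numerically. The following added sketch (the band and source parameters are arbitrary illustrations) evaluates the dirty FDF of a delta-function source directly from Eq. (43) and checks the result against the analytic modulus in Eq. (48):

```python
# Numerical dirty FDF of a delta-function source, cf. Eqs. (43), (45), (47) and (48).
import numpy as np

f, phi0, chi0 = 1.0, 0.0, 0.0                     # source amplitude, Faraday depth, intrinsic angle
l2 = np.linspace(0.0, 1.0, 2001)                  # observed band in lambda^2 [m^2]
P = f * np.exp(2j * (phi0 * l2 + chi0))           # Eq. (45): constant-amplitude spectrum

phi = np.linspace(-40.0, 40.0, 401)               # Faraday depth grid [rad/m^2]
kernel = np.exp(-2j * np.outer(phi, l2))          # e^{-2 i phi lambda^2}
F_dirty = np.trapz(P[None, :] * kernel, l2, axis=1) / np.pi   # Eq. (43) with zero padding

dl2 = l2[-1] - l2[0]
analytic = (f / np.pi) * dl2 * np.abs(np.sinc((phi - phi0) * dl2 / np.pi))   # Eq. (48)
print(np.max(np.abs(np.abs(F_dirty) - analytic)))  # small (~1e-4 or less): numerics reproduce Eq. (48)
```

The main peak height \(f(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})/\pi\) and the slowly decaying sidelobes of Eq. (48) appear directly in the computed \(|\tilde{F}(\phi)|\).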
It is understood that the delta function of the original FDF is broadened due to incomplete observation and the peak width is determined by the band width \(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2}\). The bottom panel of Fig. 4 shows a comparison of the absolute value of the dirty FDFs with different values of \(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2}\). It is seen that the main peak is narrower and higher for wider observation band. Thus, the effect of incompleteness of observation band is the broadening of the main peak and the existence of sidelobes. If we put \(\lambda^{2}=\lambda_{\rm max}^{2}=-\lambda_{\rm min}^{2}\) in Eq. (47), we have \[\tilde{F}(\phi)=\frac{fe^{2i\chi_{0}}\sin\{2(\phi-\phi_{0})\lambda^{2}\}}{ \pi(\phi-\phi_{0})}. \tag{51}\] By taking a limit of \(\lambda^{2}\rightarrow\infty\), the dirty FDF reduces to the original FDF. Thus, the FDF could be completely reconstructed with a perfect observation including the negative region of \(\lambda^{2}\), although it is practically impossible. Here, we define the Rotation Measure Spread Function (RMSF) from the Fourier transform of the window function, \[R(\phi) \equiv \frac{K}{\pi}\int_{-\infty}^{\infty}W(\lambda^{2})e^{-2i\phi \lambda^{2}}d\lambda^{2} \tag{52}\] \[K \equiv \left(\int_{-\infty}^{\infty}W(\lambda^{2})d\lambda^{2}\right)^{ -1} \tag{53}\] This is the same as the quantity in Eq. (47) multiplied with \(K\) setting \(f=1,\phi_{0}=0,\chi_{0}=0\) and is essentially the dirty FDF for a delta-function FDF. When the window function is unity only for a range \(\lambda_{\rm min}^{2}\leq\lambda^{2}\leq\lambda_{\rm max}^{2}\), we have \(K=1/(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})\) and \[R(\phi) = \frac{i}{2\pi\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})}( e^{-2i\phi\lambda_{\rm max}^{2}}-e^{-2i\phi\lambda_{\rm min}^{2}}) \tag{54}\] \[= \frac{ie^{-2i\phi\lambda_{\rm max}^{2}}}{2\pi\phi(\lambda_{\rm max }^{2}-\lambda_{\rm min}^{2})}(1-e^{2i\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min }^{2})}),\] \[|R(\phi)| = \frac{\left|1-e^{2i\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{ 2})}\right|}{2\pi\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})}\] (55) \[= \frac{\left|\sin\left\{\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min }^{2})\right\}\right|}{\pi\phi(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})},\] \[\chi(\phi) = -\frac{1}{2}\phi(\lambda_{\rm max}^{2}+\lambda_{\rm min}^{2}). \tag{56}\] Note that the absolute value \(|R(\phi)|\) have a peak at \(\phi=0\) and the height is \(|R(0)|=1/\pi\), independent of the value of \((\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})\). From the definition of the RMSF, Eq. (52), the window function can be written as, \[W(\lambda^{2})=K^{-1}\int_{-\infty}^{\infty}R(\phi)e^{2i\phi\lambda^{2}}d\phi. \tag{57}\] Then, the dirty FDF in Eq. (43) can be rewritten as, \[\tilde{F}(\phi) = \frac{1}{\pi}\int_{-\infty}^{\infty}W(\lambda^{2})P(\lambda^{2}) e^{-2i\phi\lambda^{2}}d\lambda^{2} \tag{58}\] \[= \frac{K^{-1}}{\pi}\int_{-\infty}^{\infty}d\lambda^{2}\int_{- \infty}^{\infty}d\phi^{\prime}\int_{-\infty}^{\infty}d\phi^{\prime\prime}\] \[\times R(\phi^{\prime})F(\phi^{\prime\prime})e^{2i(\phi^{\prime}+ \phi^{\prime\prime}-\phi)\lambda^{2}}\] \[= K^{-1}\int_{-\infty}^{\infty}d\phi^{\prime}\int_{-\infty}^{ \infty}d\phi^{\prime\prime}R(\phi^{\prime})F(\phi^{\prime\prime})\delta(\phi ^{\prime}+\phi^{\prime\prime}-\phi)\] \[= K^{-1}\int_{-\infty}^{\infty}d\phi^{\prime}R(\phi^{\prime})F( \phi-\phi^{\prime})\] \[= K^{-1}(R*F)(\phi)\] Here, \((R*F)\) represents the convolution. 
This expression shows that the original FDF can be regarded as a collection of delta functions and that the dirty FDF is the superposition of the RMSFs with the weight of \(F(\phi)\). If we write \(\lambda^{2}=\lambda_{\rm max}^{2}=-\lambda_{\rm min}^{2}\) and take a limit of \(\lambda^{2}\rightarrow\infty\), the RMSF approaches to a delta function \(\delta(\phi)\) and then the dirty FDF approaches to the original FDF. The more the RMSF deviates from a delta function, the more \(\tilde{F}(\phi)\) deviates from \(F(\phi)\). Because the RMSF has a finite width as we saw in Fig. 4, the width is an important parameter which affects the quality of the reconstruction of the FDF. As a measure of the width, we adopt the Full Width at Half Maximum (FWHM). When the observation band is \(\lambda_{\rm min}^{2}\leq\lambda^{2}\leq\lambda_{\rm max}^{2}\), the FWHM is given by, \[{\rm FWHM}=\frac{2\sqrt{3}}{\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2}}. \tag{59}\] Here, the width of the RMSF depends only on the bandwidth in \(\lambda^{2}\) domain. Finally, from the Parseval's theorem, the following equation holds for a square-integrable FDF. \[\int_{-\infty}^{\infty}\lvert F(\phi)\rvert^{2}d\phi=\frac{1}{\pi}\int_{- \infty}^{\infty}\lvert P(\lambda^{2})\rvert^{2}d\lambda^{2} \tag{60}\] Further, the following relation holds between the dirty FDF and the observed polarization spectrum. \[\int_{-\infty}^{\infty}\lvert\bar{F}(\phi)\rvert^{2}d\phi=\frac{1}{\pi}\int_{- \infty}^{\infty}W(\lambda^{2})\lvert P(\lambda^{2})\rvert^{2}d\lambda^{2} \tag{61}\] Thus, the power of the dirty FDF for a finite observation band is smaller than that of the original FDF. For a specific case with a delta-function FDF and an observation band \(\lambda_{\rm min}^{2}\leq\lambda^{2}\leq\lambda_{\rm max}^{2}\), we have, \[\int_{-\infty}^{\infty}\lvert\bar{F}(\phi)\rvert^{2}d\phi =\frac{1}{\pi}\int_{-\infty}^{\infty}W(\lambda^{2})\lvert P( \lambda^{2})\rvert^{2}d\lambda^{2}\] \[=\frac{f^{2}(\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2})}{\pi}. \tag{62}\] ### Interference in Faraday depth space Because the FDF is a complex function, interference occurs when there are multiple sources along the line of sight or a source with a finite width in the Faraday depth space. Here, we see an example of interference considering an FDF with two delta functions. We denote the polarization amplitudes, polarization angles and Faraday depths of the two sources as \((F_{0}\pm\Delta F)/2\), \(\chi_{0}\pm\Delta\chi/2\) and \(\phi_{0}\pm\Delta\phi/2\), respectively. \[F(\phi) =\frac{F_{0}+\Delta F}{2}e^{2i(\chi_{0}+\Delta\chi/2)}\delta \left(\phi-(\phi_{0}+\frac{\Delta\phi}{2})\right)\] \[\quad+\frac{F_{0}-\Delta F}{2}e^{2i(\chi_{0}-\Delta\chi/2)}\delta \left(\phi-(\phi_{0}-\frac{\Delta\phi}{2})\right) \tag{63}\] For this FDF, the polarization spectrum is given by, \[P(\lambda^{2}) =e^{2i(\phi_{0}\lambda^{2}+\chi_{0})}\] \[\quad\times\left(F_{0}\cos\left(\Delta\phi\lambda^{2}+\Delta \chi\right)+i\Delta F\sin\left(\Delta\phi\lambda^{2}+\Delta\chi\right)\right). \tag{64}\] To understand the behavior of this spectrum, first let us set \(\Delta F=0\), that is, the polarization amplitudes of the two sources are the same. In this case, the polarization angle is given by \(\chi=\phi_{0}\lambda^{2}+\chi_{0}\), which is linear with respect to \(\lambda^{2}\) and equivalent to a single source with the polarization angle \(\chi_{0}\) and Faraday depth \(\phi=\phi_{0}\). 
In fact, such a single source does not exist and these values are average values of the two sources. On the other hand, the absolute value \(\lvert P(\lambda^{2})\rvert\) oscillates with \(\lambda^{2}\). The upper panel of Fig. 5 shows a comparison of the absolute value of the polarization spectrum for \(\Delta\phi=1,2\), and \(3\) [rad/m\({}^{2}\)], fixing the other values as \(F_{0}=1\) [Jy\(\cdot\) m\({}^{2}\)/rad], \(\Delta F=0\) [Jy\(\cdot\) m\({}^{2}\)/rad], \(\phi_{0}=0\) [rad/m\({}^{2}\)], \(\chi_{0}=0\) [rad] and \(\Delta\chi=0\) [rad]. The oscillation in the absolute value is a typical symptom of interference. In particular, the polarization is perfectly canceled for wavelengths which satisfy \(\Delta\phi\lambda^{2}+\Delta\chi=(n+1/2)\pi\) for a given value of \(\Delta\phi\), because the polarization angles of the two sources differ by 90 degrees. In contrast, the two sources interfere constructively when \(\Delta\phi\lambda^{2}+\Delta\chi=n\pi\). The dirty FDFs for the above polarization spectra are shown in the bottom panel of Fig. 5. Here, the observation band is set to \(0\) m\({}^{2}\leq\lambda^{2}\leq 1\) m\({}^{2}\). The FWHM of the RMSF for this band is \(2\sqrt{3}\) [rad/m\({}^{2}\)] and it is seen that the two delta functions cannot be resolved when the difference in the Faraday depths is smaller than \(\Delta\phi=3\) [rad/m\({}^{2}\)]. As we can see in the top panel, within the observation band of \(0\) m\({}^{2}\leq\lambda^{2}\leq 1\) m\({}^{2}\), the polarization spectra are more different at longer wavelengths. Then, it is understandable that the case with \(\Delta\phi=1\) [rad/m\({}^{2}\)] is less distinguishable from the case of a single delta function, where the absolute value of the polarization spectrum is constant with wavelength. Contrastingly, the spectral shape is significantly different for \(\Delta\phi=3\) [rad/m\({}^{2}\)], which is why the two sources can be resolved. In Fig. 6, three cases with \(\Delta\chi=0,\pi/4,\pi/2\) [rad] are compared by fixing \(\Delta\phi=2\) [rad/m\({}^{2}\)]. It is seen that the peak position of the polarization spectrum is shifted with \(\Delta\chi\), as is also evident in Eq. (64). As a result, the resolvability of the two sources also depends on \(\Delta\chi\). Therefore, the FWHM of the RMSF should be considered as a rough measure of resolution in Faraday depth space. As we saw in the previous section, the absolute value and FWHM of the RMSF depend only on \(\Delta\lambda^{2}=\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2}\) but are independent of \(\lambda_{\rm max}^{2}\) and \(\lambda_{\rm min}^{2}\) themselves. On the other hand, its phase is dependent on \(\lambda_{\rm max}^{2}\) and \(\lambda_{\rm min}^{2}\) as in Eq. (56). Because the interference pattern is affected by the difference in the polarization angles of the two sources, the shape of the dirty FDF varies depending on the observation band and is not determined just from the FWHM of the RMSF. Fig. 7 shows a comparison of the dirty FDF for three observation bands: \(0\) m\({}^{2}\leq\lambda^{2}\leq 1\) m\({}^{2}\), \(1\) m\({}^{2}\leq\lambda^{2}\leq 2\) m\({}^{2}\) and \(2\) m\({}^{2}\leq\lambda^{2}\leq 3\) m\({}^{2}\). Other parameters are the same as Fig. 5 with \(\Delta\phi=2\) [rad/m\({}^{2}\)]. We can see that, for the same value of \(\Delta\lambda^{2}\), the two sources are resolved in the case of \(2\) m\({}^{2}\leq\lambda^{2}\leq 3\) m\({}^{2}\) but not resolved in the case of \(1\) m\({}^{2}\leq\lambda^{2}\leq 2\) m\({}^{2}\). From the top panel of Fig.
5, it is seen that, when the observation band is \(1\) m\({}^{2}\leq\lambda^{2}\leq 2\) m\({}^{2}\), \(P(\lambda^{2})\) does not fall to zero and is relatively similar to that of a single delta function. A case with nonzero \(\Delta F\) leads to more complicated behavior. Fig. 8 shows a comparison of the absolute value and polarization angle for \(\Delta F=1,2,3\) [Jy\(\cdot\) m\({}^{2}\)/rad]. Other parameters are set to \(F_{0}=4\) [Jy\(\cdot\) m\({}^{2}\)/rad], \(\phi_{0}=0\) [rad/m\({}^{2}\)], \(\Delta\phi=2\) [rad/m\({}^{2}\)], \(\chi_{0}=0\) [rad] and \(\Delta\chi=0\) [rad]. As in the case of \(\Delta F=0\), the interference of the polarization intensity can be constructive or destructive depending on the wavelength. However, since the brightness is different between the two sources when \(\Delta F\) is not zero, the polarization is not completely canceled by the interference. Further, the polarization angle is not linear with respect to \(\lambda^{2}\) and the RM depends on wavelength. Because two sources have different brightness in general, nonlinear behavior of the polarization angle with respect to \(\lambda^{2}\) is a sign of interference of multiple sources. However, if the observation band is too narrow, the polarization angle is apparently linear and, thus, wideband observations are necessary to investigate the structure of the sources in Faraday depth space.

### Gaussian FDF

Let us consider a Faraday dispersion function of Gaussian-function type, which is commonly used as well as the delta-function type. \[F(\phi)=\frac{f}{\sqrt{2\pi}\sigma}\exp\left[-\frac{(\phi-\phi_{0})^{2}}{2\sigma^{2}}+2i\chi_{0}\right] \tag{65}\] Here, \(f\) is the amplitude, \(\phi_{0}\) is the central Faraday depth, \(\sigma\) is the width and \(\chi_{0}\) is the polarization angle. This can be regarded as a cluster of delta-function sources with a wide range of Faraday depth. The corresponding polarization spectrum is given by, \[P(\lambda^{2})=f\exp\left[-2\sigma^{2}\lambda^{4}+2i(\phi_{0}\lambda^{2}+\chi_{0})\right]. \tag{66}\] This is also a Gaussian function centered at \(\lambda^{2}=0\) with the width \(\sigma_{P}=1/(2\sigma)\). The behavior of the polarization angle and the RM is the same as in the case with a delta function. This is nontrivial because the Gaussian FDF is the sum of polarization sources with various Faraday depths. Thus, the difference between the Gaussian and delta-function FDFs appears in the absolute value of the polarization spectrum. It falls rapidly for \(\lambda^{2}>\sigma_{P}\), which physically means the depolarization due to interference among sources with different Faraday depths at long wavelengths. Conversely, for a given observation band \(\lambda_{\rm min}^{2}\leq\lambda^{2}\leq\lambda_{\rm max}^{2}\), a Gaussian FDF broader than the following maximum width \(\sigma_{\rm max}\) cannot be observed due to depolarization: \[\sigma_{\rm max}=\frac{\pi}{\lambda_{\rm min}^{2}}. \tag{67}\] It should be noted that this width represents the broadening in the Faraday-depth space rather than in physical space. The polarization spectrum in Eq. (66) is equivalent to Eq. (35) for beam depolarization. Therefore, when turbulent magnetized plasma exists in front of polarization sources which are delta-function type and have the same Faraday depth, the total FDF becomes Gaussian. In this way, the FDF includes information on non-emitting magnetized plasma as well as polarization sources along the line of sight. The dirty FDF for limited-band observation of the polarization spectrum, Eq.
(66), is represented by an error function and is shown in Fig. 9. The top panel shows the dirty FDFs for \(\sigma=1\) rad/m\({}^{2}\) and \(\sigma=3\) rad/m\({}^{2}\). Other parameters are set as \(f=1\) Jy\(\cdot\)m\({}^{2}\)/rad, \(\phi_{0}=10\) rad/m\({}^{2}\), \(\lambda_{\rm min}^{2}=0.01\) m\({}^{2}\) and \(\lambda_{\rm max}^{2}=1\) m\({}^{2}\). Due to the incomplete observation, the dirty FDF is wider than the original Gaussian function, and the sidelobes which are seen in the delta-function case cannot be seen. The bottom panel is a comparison of the dirty FDF for three observation bands (0.001 m\({}^{2}\leq\lambda^{2}\leq 10\) m\({}^{2}\), 0.001 m\({}^{2}\leq\lambda^{2}\leq 0.1\) m\({}^{2}\) and 0.1 m\({}^{2}\leq\lambda^{2}\leq 10\) m\({}^{2}\)). Other parameters are set as \(f=1\) Jy\(\cdot\)m\({}^{2}\)/rad, \(\phi_{0}=10\) rad/m\({}^{2}\) and \(\sigma=5\) rad/m\({}^{2}\). As we saw in Eq. (66), the polarization intensity is larger for shorter wavelengths so that the dirty FDF with shorter-wavelength observation has a larger absolute value.

Figure 9: Dirty FDF for Gaussian FDFs. Top: original FDFs (dashed) and dirty FDFs (solid) for \(\sigma=1\) rad/m\({}^{2}\) and \(\sigma=3\) rad/m\({}^{2}\). Other parameters are set as \(f=1\) Jy\(\cdot\)m\({}^{2}\)/rad, \(\phi_{0}=10\) rad/m\({}^{2}\), \(\lambda_{\rm min}^{2}=0.01\) m\({}^{2}\) and \(\lambda_{\rm max}^{2}=1\) m\({}^{2}\). Bottom: Comparison of the dirty FDF for three observation bands (0.001 m\({}^{2}\leq\lambda^{2}\leq 10\) m\({}^{2}\), 0.001 m\({}^{2}\leq\lambda^{2}\leq 0.1\) m\({}^{2}\) and 0.1 m\({}^{2}\leq\lambda^{2}\leq 10\) m\({}^{2}\)) and the original Gaussian FDF. Other parameters are set as \(f=1\) Jy\(\cdot\)m\({}^{2}\)/rad, \(\phi_{0}=10\) rad/m\({}^{2}\) and \(\sigma=5\) rad/m\({}^{2}\).

As we will see in section 4, polarization sources generally have a broad structure in Faraday depth space. When the width (\(\sigma\) in the case of a Gaussian function) is smaller or larger than the FWHM of the RMSF, the source is called Faraday thin or Faraday thick, respectively. The Gaussian function and the top-hat function discussed below are often used to characterize Faraday thick sources.

### Top-hat FDF

A top-hat function is another commonly-used type of the FDF. \[F(\phi)=\left\{\begin{array}{ll}F_{0}&(0\leq\phi\leq\phi_{0})\\ 0&(\phi<0,\ \phi>\phi_{0})\end{array}\right. \tag{68}\] In fact, this functional form is not physically realistic, but it is useful for understanding the relationship between the FDF and the polarization spectrum and the nature of Faraday tomography. Since the top-hat type is continuously distributed in Faraday depth space, interference occurs in a complicated form like the Gaussian-function type. First, the polarization spectrum is as follows. \[P(\lambda^{2})=F_{0}\phi_{0}\frac{\sin\phi_{0}\lambda^{2}}{\phi_{0}\lambda^{2}}e^{2i(\phi_{0}/2)\lambda^{2}} \tag{69}\] Therefore, the behavior of the polarization angle is the same as the case with a single delta-function source at \(\phi=\phi_{0}/2\). On the other hand, the polarization intensity decays with oscillation toward long wavelengths. The phase of the oscillation is \(\phi_{0}\lambda^{2}\), which is a combination of the squared wavelength and the Faraday depth which characterizes the FDF, as is the case with two interfering delta-function sources. The intensity decays with \(\lambda^{-2}\), which is much slower than in the case of a Gaussian source.
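As a quick numerical check of Eq. (69), the decaying, oscillating intensity and the location of its first null can be evaluated directly. The following sketch is illustrative only (NumPy assumed, arbitrary parameter values).

```python
import numpy as np

# Sketch of the top-hat polarization spectrum, Eq. (69); parameter values are arbitrary.
F0, phi0 = 0.1, 10.0                    # height [Jy m^2/rad] and width [rad/m^2]
lam2 = np.linspace(1e-3, 1.0, 1000)     # lambda^2 [m^2]

# np.sinc(x) = sin(pi x)/(pi x), so this equals F0*phi0*sin(phi0*lam2)/(phi0*lam2)
P = F0 * phi0 * np.sinc(phi0 * lam2 / np.pi) * np.exp(2j * (phi0 / 2.0) * lam2)

i = np.argmin(np.abs(lam2 - np.pi / phi0))      # channel nearest to lam2 = pi/phi0
print(np.abs(P[i]) / np.abs(P[0]))              # ~0: first complete cancellation
```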
Unlike the Gaussian function type, there is no characteristic wavelength that characterizes depolarization, but \(\lambda^{2}=\pi/\phi_{0}\), where the intensity becomes zero for the first time, is one useful measure. Fig. 10 shows the dirty FDF of top-hat FDF. Here, the parameters are set as \(\phi_{0}=10\) [rad/m\({}^{2}\)] and \(F_{0}\phi_{0}=1\) [Jy], and three observation bands, \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)], \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 1\) [m\({}^{2}\)] and \(1\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)] are compared. Wider band observation results in better reconstruction of the FDF, but even for a very wide band \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)] (\(100\) MHz \(-\) 3 GHz) the shape of the dirty FDF is far from the top-hat shape. In fact, the widest physically-possible band (\(\lambda^{2}>0\)) does not improve the result compared with the case with \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)]. This is a limitation of reconstruction without information of unphysical band (\(\lambda^{2}<0\)), while the unnatural shape of the dirty FDF is due to the fact that the original FDF is not a continuous function. By investigating the dirty FDFs for two bands, \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 1\) [m\({}^{2}\)] and \(1\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)], which divides a wide band \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)], we can see a generic characteristics of Faraday tomography. Because short-wavelength information of polarization spectrum reflects large-scale structure in Faraday-depth space, the dirty FDF for \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 1\) [m\({}^{2}\)] reproduces the overall feature of that of \(0.01\) [m\({}^{2}\)] \(\leq\lambda^{2}\leq 10\) [m\({}^{2}\)]. Contrastingly, long-wavelength information corresponds to small-scale structure in Faraday-depth space and its dirty FDF reproduces the spike structure at \(\phi=0,10\) rad/m\({}^{2}\). Thus, observation band determines what scale of Faraday-depth space structure can be reproduced. ### Indices for Faraday tomography In this section, we discussed the basic principle of Faraday tomography. Finally, we summarize some indices which represents the performance of Faraday tomography. Below, \(\lambda_{\rm max}^{2}\), \(\lambda_{\rm min}^{2}\) and \(\delta\lambda^{2}\) denote maximum wavelength, minimum wavelength and channel width, respectively. * RMSF FWHM: resolution in Faraday-depth space (see Eq. (59)) \[{\rm FWHM}=\frac{2\sqrt{3}}{\lambda_{\rm max}^{2}-\lambda_{\rm min}^{2}}\] (70) * max Faraday depth: maximum Faraday depth which can be probed (see Eq. (36)) \[\phi_{\rm max}=\frac{\sqrt{3}}{\delta\lambda^{2}}\] (71) * max Faraday scale: maximum width of polarization source in Faraday depth space which can be probed (see Eq. (67)) \[\sigma_{\rm max}=\frac{\pi}{\lambda_{\rm min}^{2}}\] (72) ## 4 Models and interpretation of Faraday dispersion function In this section, we discuss the physical interpretation of the Faraday dispersion function and introduce some practical models of polarization sources. The FDF include information on the distribution of magnetic fields, thermal electrons and cosmic-ray electrons along the line of sight. However, as we saw in Eq. (38), the interpretation of the FDF is not straightforward because there is in general no one-to-one correspondence between Faraday depth and physical distance. 
Here, we first consider physical models of the FDF which are simple but help our understanding of its nature in section 4.1-4.5. Then, in section 4.6, we see some research examples of realistic polarization sources and discuss how they look like in Faraday space. ### Coherent magnetic field Let us consider a uniform slab (Burn, 1966; Sokoloff et al., 1998; Frick et al., 2011). Magnetic field is uniform within the slab and has strength of \(B\), and thermal-electron density, \(n_{\rm e}\), and cosmic-ray electron density, \(n_{\rm CR}\), are also assumed to be uniform. Therefore, the distribution of polarization intensity in physical space is, setting the size of the slab along the line of sight as \(L\), \[\varepsilon(x)=\left\{\begin{array}{ll}\varepsilon_{0}&(x_{0}\leq x\leq x_{0}+L )\\ 0&(x<x_{0},\;x>x_{0}+L)\end{array}\right.. \tag{73}\] Here, \(\varepsilon_{0}\) is the polarization intensity per unit size. Faraday depth increases or decreases monotonically within the slab depending on the direction of magnetic field, and is constant outside the slab. \[\phi(x)=\left\{\begin{array}{ll}0&(x<x_{0})\\ \frac{x-x_{0}}{L}\phi_{0}&(x_{0}\leq x\leq x_{0}+L)\\ \phi_{0}&(x>x_{0}+L)\end{array}\right. \tag{74}\] Here, \(\phi_{0}=kn_{a}B_{||}L\). Because the polarized emission exist in a range of \(x_{0}\leq x\leq x_{0}+L\) in physical space, the FDF is nonzero in a range of \(0\leq\phi\leq\phi_{0}\) in Faraday-depth space. The width of the emission in Faraday-depth space, \(\phi_{0}\), is determined not only by the physical size of the slab but also by the strength of magnetic field along the line of sight and thermal-electron density. From Eq. (41), the FDF is expressed as, \[F(\phi)=\left\{\begin{array}{ll}\frac{\varepsilon_{0}L}{\phi_{0}}&(0\leq \phi\leq\phi_{0})\\ 0&(\phi<0,\;\phi>\phi_{0})\end{array}\right. \tag{75}\] Here, \(\varepsilon_{0}L/\phi_{0}\) is polarization intensity per unit Faraday depth. In this way, the FDF for a uniform slab is a top-hat function. There are 4 parameters which characterize a top-hat FDF: width, height, position (central Faraday depth) and polarization angle. On the other hand, a uniform slab can be characterized by 6 parameters in physical space: size, parallel and perpendicular components of coherent magnetic field, thermal-electron density and cosmic-ray electron density. Thus, even if we obtain such FDF from observation and assume that the source is a uniform slab, we cannot determine the latter 6 parameters from the former 4 parameters. In other words, we cannot obtain all parameters in physical space due to degeneracy. Next, we consider a case where the direction of coherent magnetic field is reversed at the center of the slab, \(x=x_{0}+L/2\). Specifically, we assume the field points in the direction of the observer for \(x<x_{0}+L/2\). The field strength, thermal electrons and cosmic-ray electrons are again assumed to be uniform. In this case, Faraday depth increases and decreases monotonically for \(x_{0}<x<x_{0}+L/2\) and \(x_{0}+L/2<x<x_{0}+L\), respectively, and reaches zero at \(x=x_{0}+L\). \[\phi(x)=\left\{\begin{array}{ll}0&(x<x_{0})\\ \frac{x-x_{0}}{L}\phi_{0}&(x_{0}\leq x\leq x_{0}+L/2)\\ \frac{L-(x-x_{0})}{L}\phi_{0}&(x_{0}+L/2\leq x\leq x_{0}+L)\\ 0&(x>x_{0}+L)\end{array}\right. \tag{76}\] It should be noted that one value of Faraday depth in the range of \(0<\phi<\phi_{0}/2\) corresponds to 2 positions in physical space. 
Thus, a range of polarized emission in Faraday-depth space becomes halves and the polarization intensity per unit Faraday depth doubles compared to the previous case. \[F(\phi)=\left\{\begin{array}{ll}\frac{2\varepsilon_{0}L}{\phi_{0}}&(0\leq \phi\leq\phi_{0}/2)\\ 0&(\phi<0,\;\phi>\phi_{0}/2)\end{array}\right. \tag{77}\] This is again a top-hat function and the shape is exactly the same as that of Eq. (75). Furthermore, in this case, the edges in Faraday-depth space (\(\phi=0\) and \(\phi_{0}/2\)) does not always correspond to the edges in physical space (\(x=x_{0},x_{0}+L\)). Therefore, when we obtain a top-hat FDF from observation, we suffer from not only the parameter degeneracy but also uncertainty in the configuration of magnetic field. As we saw above, we cannot determine the distribution model in physical space from the FDF, even for such a very simple system as a uniform slab. This is a fundamental problem in Faraday tomography. Nevertheless, the FDF has much richer information compared with the conventional rotation measure and we can obtain various physical implication from it. ### Faraday caustics Bell et al. (2011) proposed a simple configuration of magnetic field which leads to the FDF with a striking feature. Let us consider a emission region with uniform polarization emissivity \(\epsilon_{0}\) and polarization angle \(\chi_{0}=0\) rad within a range of \(-L<x<L\). The line-of-sight component of magnetic field is assumed to vary as, \[B_{||}(x)=B^{\prime}x, \tag{78}\] where \(B^{\prime}\) is a constant. Then, Faraday depth can be calculated as, \[\phi(x)=\phi_{0}\left(\frac{x^{2}}{L^{2}}-1\right)\ \ \ \ \ (-L\leq x \leq L) \tag{79}\] \[\phi_{0}\equiv\frac{kn_{a}B^{\prime}L^{2}}{2} \tag{80}\] Here, thermal-electron density \(n_{e}\) is assumed to be constant. Then, the resultant FDF is, \[F(\phi) =\epsilon_{0}\int_{-L}^{L}\delta\left(\phi-\phi_{0}\left(\frac{x^ {2}}{L^{2}}-1\right)\right)dx\] \[=\frac{L}{\sqrt{\phi_{0}}}\frac{\epsilon_{0}}{\sqrt{\phi+\phi_{0} }}\ \ \ \ \ (-\phi_{0}\leq\phi\leq 0) \tag{81}\] which is divergent at \(\phi=-\phi_{0}\). This behavior was called "Faraday caustics" in analogy with optics in Bell et al. (2011) and shown in Fig. 11 setting \(\phi_{0}=1\) rad/m\({}^{2}\) and \(\epsilon_{0}L=1\). In this way, the FDF diverges at Faraday depth \(\phi=-\phi_{0}\) which corresponds to the field reversal and has an asymmetric sharp peak. Which side of the peak has a non-zero polarization intensity depends on the direction of the magnetic field. This divergence appears because Faraday depth is almost constant at \(x\!=\!0\) and polarization emissivity around \(x\!=\!0\) contributes to the FDF of the very narrow range around \(\phi\!=\!-\phi_{0}\). The FDF is polarization intensity per unit Faraday depth so that it diverges if finite amount of polarization intensity contributes to a very narrow range in Faraday-depth space. Although the peak will be smoothed on the scale of the FWHM of the RMSF in practical observations, such a feature will be helpful for the physical interpretation of the FDF. Finally it should be noted that the total emissivity should be finite while the FDF itself is divergent. In fact, we have, \[\int_{-\infty}^{\infty}|F(\phi)|d\phi\!=\!2\epsilon_{0}L \tag{82}\] This depends only on the size of the slab and the emissivity per unit size and not on \(\phi_{0}\). 
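The mapping from a line-of-sight model to the FDF can also be carried out numerically by binning the emissivity in Faraday depth, in the spirit of Eq. (41). The sketch below is illustrative only and uses assumed units (\(kn_{e}=1\), \(\epsilon_{0}=1\), \(L=1\), hence \(\phi_{0}=1\)); it reproduces the diverging caustic profile of Eq. (81) and the finite total of Eq. (82). Replacing the quadratic profile by a monotonic (linear) \(\phi(x)\) gives back the top-hat FDF of the uniform slab.

```python
import numpy as np

# Sketch: build |F(phi)| by binning emissivity over Faraday depth (cf. Eq. (41)).
# Assumed units: k*n_e = 1, eps_0 = 1, L = 1, so phi0 = 1 for the caustic case.
N = 200_000
x = np.linspace(-1.0, 1.0, N)             # position along the line of sight
dx = x[1] - x[0]
eps = np.ones(N)                          # uniform polarized emissivity

phi0 = 1.0
phi_of_x = phi0 * (x**2 - 1.0)            # B_par ~ x  ->  Eq. (79): phi in [-phi0, 0]

edges = np.linspace(-1.0, 0.0, 201)
hist, _ = np.histogram(phi_of_x, bins=edges, weights=eps * dx)
F_abs = hist / np.diff(edges)             # polarized intensity per unit Faraday depth

print(F_abs[0], F_abs[-1])                # sharp peak at phi = -phi0, ~1 elsewhere
print(hist.sum())                         # ~ 2*eps_0*L = 2, cf. Eq. (82)
```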
### Helical magnetic field Horellou & Fletcher (2014) considered an emitting region with helical magnetic field and derived the FDF to discuss the determination of parameters through observation. Helical magnetic fields are considered to play an important role in dynamo mechanism and also exist in galactic and protostar jets. Helical magnetic field toward \(z\) direction can be written as, \[B_{x}(z) =B_{\perp}\cos\left(k_{z}(z-z_{0})+\chi_{0}\right) \tag{83}\] \[B_{y}(z) =B_{\perp}\sin\left(k_{z}(z-z_{0})+\chi_{0}\right)\] (84) \[B_{z}(z) =B_{||}. \tag{85}\] Here, the strength of \(z\) component, \(B_{||}\), and the perpendicular components, \(B_{\perp}\), are assumed to be constant and \(k_{z}\) represents the wavenumber of helix in \(z\) direction. When the line of sight is in \(z\) direction, Faraday depth and polarization angle vary linearly with \(z\). \[\phi(z) =k^{\prime}B_{||}(z-z_{0}) \tag{86}\] \[\chi(z) =k_{z}(z-z_{0})+\chi_{0}, \tag{87}\] where \(k^{\prime}\!=\!kn_{e}\) using \(k\) in Eq. (25) and thermal electron density \(n_{e}\) is assumed to be constant. In this system, the phase of the FDF vary linearly with Faraday depth. \[\chi(\phi)=\frac{k_{z}}{k^{\prime}B_{||}}(\phi-\phi_{0})+\chi_{0}^{\prime} \equiv\beta\phi+\chi_{0} \tag{88}\] Here, denoting the FDF in case of \(\beta\!=\!0\) as \(F_{0}(\phi)\), the FDF with non-zero \(\beta\) can be written as, \[F(\phi)=F_{0}(\phi)e^{2i\beta\phi}. \tag{89}\] Then, the polarization spectrum, \(P(\lambda^{2})\), is expressed by using that for \(\beta=0\), \(P_{0}(\lambda^{2})\), as, \[P(\lambda^{2}) =\int_{-\infty}^{\infty}F(\phi)e^{2i\phi\lambda^{2}}d\phi\] \[=\int_{-\infty}^{\infty}F_{0}(\phi)e^{2i\phi(\lambda^{2}+\beta)}d\phi\] \[=P_{0}(\lambda^{2}+\beta). \tag{90}\] This is \(P_{0}(\lambda^{2})\) shifted in \(\lambda^{2}\) direction by \((-\beta)\). Next, let us consider a case where the directions of line of sight and helix do not coincide. We take \(z^{\prime}\) axis as the new line-of-sight direction by rotation by \(\theta\) in \((x,z)\) plane. The coordinate transformation is given by, \[\left(\begin{array}{c}x\\ y\\ z\end{array}\right)=\left(\begin{array}{ccc}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{array}\right)\left(\begin{array}{c}x^{\prime} \\ y^{\prime}\\ z^{\prime}\end{array}\right), \tag{91}\] and magnetic field components in the new coordinate system is given by, \[\left(\begin{array}{c}B_{x^{\prime}}\\ B_{y^{\prime}}\\ B_{z^{\prime}}\end{array}\right) \tag{92}\] \[=\left(\begin{array}{ccc}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{array}\right)\left(\begin{array}{c}B_{\perp} \cos\left(k_{z}(z-z_{0})+\chi_{0}\right)\\ B_{\perp}\sin\left(k_{z}(z-z_{0})+\chi_{0}\right)\\ B_{||}\end{array}\right)\] \[=\left(\begin{array}{c}B_{\perp}\cos\theta\cos\left(k_{z}(z-z_{0 })+\chi_{0}\right)+B_{||}\sin\theta\\ B_{\perp}\sin\left(k_{z}(z-z_{0})+\chi_{0}\right)\\ -B_{\perp}\sin\theta\cos\left(k_{z}(z-z_{0})+\chi_{0}\right)+B_{||}\cos\theta \end{array}\right). 
\tag{93}\]
Then, the polarization angle and Faraday depth depend on \(x^{\prime}\) and \(z^{\prime}\) through \(z(x^{\prime},z^{\prime})\) and are expressed as,
\[\chi(x^{\prime},z^{\prime})=\arctan\left[\frac{B_{\perp}\sin\left(k_{z}(z-z_{0})+\chi_{0}\right)}{B_{\perp}\cos\theta\cos\left(k_{z}(z-z_{0})+\chi_{0}\right)+B_{||}\sin\theta}\right] \tag{94}\]
\[\phi(x^{\prime},z^{\prime})=k^{\prime}\int_{z_{0}^{\prime}}^{z^{\prime}}\left[-B_{\perp}\sin\theta\cos\left(k_{z}(z-z_{0})+\chi_{0}\right)+B_{||}\cos\theta\right]dz^{\prime}=-\frac{k^{\prime}B_{\perp}\sin\theta}{k_{z}\cos\theta}\left[\sin\left(k_{z}(z-z_{0})+\chi_{0}\right)-\sin\chi_{0}\right]+k^{\prime}B_{||}(z^{\prime}-z_{0}^{\prime})\cos\theta \tag{95}\]
Faraday depth is generally not a monotonic function with respect to \(z^{\prime}\) because the line-of-sight component of the magnetic field can be inverted when observing a helical magnetic field from an angle. In this case, \(\chi\) and \(\phi\) are not single-valued functions with respect to \(z^{\prime}\). Fig. 12 shows an example of \(\chi(z^{\prime})\), \(\phi(z^{\prime})\) and \(\chi(\phi)\). Here, 3 cases with \(\theta=\pi/8\), \(\pi/4\) and \(3\pi/8\) are compared and other parameters are set as \(k^{\prime}B_{\perp}=1\), \(k^{\prime}B_{||}=1.5\), \(k_{z}=1\), \(z_{0}=0\), \(\chi_{0}=0\) and \(x^{\prime}=0\). In the case of \(\theta=\pi/8\), \(B_{||}\) is dominant in the line-of-sight component (see Eq. (93)) so that Faraday depth is monotonic with respect to \(z^{\prime}\) and the polarization angle is almost linear with respect to Faraday depth. However, in the case of \(\theta=3\pi/8\), the line-of-sight component of the magnetic field is inverted along the line-of-sight direction and Faraday depth is not monotonic with respect to \(z^{\prime}\). Therefore, multiple positions in physical space have the same value of Faraday depth and the polarization angle is not single-valued for some ranges of Faraday depth. In this case, polarization emission with different polarization angles contributes to the FDF at the same Faraday depth, which leads to depolarization. As we saw above, the dependence of the FDF phase (polarization angle) on Faraday depth has in general information on the shape and configuration of magnetic fields.

### Turbulent magnetic fields

Turbulence is often developed in magnetized plasmas in interstellar space, and the presence of turbulence complicates the FDF (Beck et al., 2012; Ideguchi et al., 2017). In order to discuss the FDF due to the polarization emission of turbulent plasma, we start from the following simple model of turbulence following Ideguchi et al. (2017).
* divide the emitting region into a regular lattice which consists of many cubic cells of size \(L_{\rm cell}^{3}\)
* suppose \(N\) cells are lined up in the line-of-sight direction
* the magnetic field is coherent within each cell
* the direction of the turbulent magnetic field is random and there is no correlation in the direction and strength between different cells
* each cell can have a coherent magnetic field common to the whole emission region
* the thermal-electron density and cosmic-ray electron density are uniform in the whole emitting region

Let us first consider the behavior of Faraday depth along the line of sight. The Faraday depth of the \(n\)-th cell is proportional to the sum of the magnetic fields from the first cell to the \(n\)-th cell. Denoting the line-of-sight component of the magnetic field and the Faraday depth of the \(i\)-th cell as \(B_{||}^{i}\) and \(\phi^{i}\), \[\phi^{n}=kn_{e}L_{\rm cell}\sum_{i=1}^{n}B_{||}^{i} \tag{96}\] Here, we assume \(B_{||}^{i}\) is a Gaussian random variable with a mean of zero and a standard deviation of \(\sigma_{B}\). Then, Faraday depth behaves as a random walk. To consider the FDF, it is necessary to consider the polarization intensity distribution in each cell. Here, for simplicity, let us assume that the component of the magnetic field perpendicular to the line of sight has the same strength and orientation in all cells. This may appear to contradict the existence of turbulent magnetic fields. However, for example, in the case of a face-on galaxy which has a strong coherent magnetic field along the galactic plane, it is possible that the perpendicular component is dominated by the coherent field while the line-of-sight component is dominated by the turbulent fields. In this case, all cells have the same polarization intensity and polarization angle, and the FDF is proportional to the distribution of Faraday depth because the complex polarization intensities of cells with the same Faraday depth are summed coherently.

Figure 12: Behavior of \(\chi(z^{\prime})\) (top), \(\phi(z^{\prime})\) (middle) and \(\chi(\phi)\) (bottom), when an emitting region with helical magnetic field is observed from an angle. Here, parameters are set as \(k^{\prime}B_{\perp}=1\), \(k^{\prime}B_{||}=1.5\), \(k_{z}=1\), \(z_{0}=0\), \(\chi_{0}=0\) and \(x^{\prime}=0\). Cases with \(\theta=\pi/8,\pi/4\) and \(3\pi/8\) are shown with black solid, blue dotted and red dashed lines.

Fig. 13 shows the result of simulations of turbulent magnetic fields. The standard deviation of \(B_{||}^{i}\) is set to \(\sigma_{B}=1~{}\mu\)G and the number of cells along a line of sight is \(N=200\). The top panel is two independent realizations of the line-of-sight components of random magnetic fields in the 200 cells. The middle panel is the sum of \(B_{||}\) from the first cell to the \(n\)-th cell, which is proportional to the Faraday depth of the \(n\)-th cell. As the two lines are based on two independent realizations of turbulent magnetic fields, they behave as two independent random walks. The bottom panel shows the histogram of \(\phi\), which is proportional to the absolute value of the FDF. We can see that the two FDFs have significantly different shapes and average Faraday depths. This is due to the randomness of the turbulence, even if the two realizations have the same statistical property of turbulence and distribution of thermal electrons and cosmic-ray electrons. Therefore, the turbulent magnetic field is one of the factors that make it difficult to extract physical information from the FDF. Ideguchi et al.
(2017) proposed that adding many statistically independent FDFs will reduce the randomness and make it easier to extract physical information. This is automatically realized when the angular resolution is much larger than the coherence length of turbulence so that many turbulent cells are included within a beam. For example, the angular size of a cell of size 100 pc located at 100 Mpc is about 0.2 arcsec. If the angular resolution is 1 arcsec, about 25 independent cells are lined up perpendicular to the line of sight in a beam, and the observed FDF is a sum of those in the direction of these cells. Fig. 14 shows that the randomness reduces gradually as more statistically independent FDFs are added together. Approximately 100 or more additions reduce the difference between different realizations, and the resultant FDF is considered to represent the average behavior and statistical properties of turbulence.

Figure 13: Simulation of turbulence based on the cell model. The standard deviation of \(B_{||}^{i}\) is set to \(\sigma_{B}=1~{}\mu\)G and the number of cells along a line of sight is \(N=200\). The top and middle panels show \(B_{||}\) and the sum of \(B_{||}\) from the first cell to the \(n\)-th cell, which is proportional to the Faraday depth of the \(n\)-th cell, taking the horizontal axis as the cell number \(n\). The bottom panel shows the histogram of \(\phi\), which is proportional to the absolute value of the FDF. Two independent realizations of turbulent fields are shown in black solid line and red dashed line.

Such an average behavior of turbulence can be understood analytically (Ideguchi et al., 2017). First, we divide the emission region into cubic cells with size \(L_{\rm cell}\). We assume that \(N\) cells are arranged in the line-of-sight direction, and \(N_{\perp}\times N_{\perp}\) cells are arranged in a square shape in the direction perpendicular to the line of sight. Therefore, the size of the emission region is \((N_{\perp}L_{\rm cell})^{2}\times NL_{\rm cell}\). This can be thought of as a square layer of \(N_{\perp}\times N_{\perp}\) cells stacked along the line of sight. The average and dispersion of Faraday depth of the \(n\)-th cell are given by, \[\langle\phi^{n}\rangle=kn_{e}L_{\rm cell}\sum_{i=1}^{n}\langle B_{||}^{i}\rangle=0 \tag{97}\] \[\langle(\phi^{n}-\langle\phi^{n}\rangle)^{2}\rangle=k^{2}n_{e}^{2}L_{\rm cell}^{2}\sum_{i=1}^{n}\langle(B_{||}^{i})^{2}\rangle=nk^{2}n_{e}^{2}L_{\rm cell}^{2}\sigma_{B}^{2} \tag{98}\] Here, \(\langle\cdots\rangle\) represents the statistical average. Therefore, the expectation value of Faraday depth is zero and the dispersion is larger for farther layers and proportional to \(n\). In general, the probability distribution of \(\phi^{n}\) is determined by that of \(B_{||}\). However, because \(\phi^{n}\) is a sum of independent and identically distributed variables (\(B_{||}^{i},i=1,2,\cdots,n\)), it obeys a Gaussian distribution with a mean of zero and a dispersion of \(nk^{2}n_{e}^{2}L_{\rm cell}^{2}\sigma_{B}^{2}\equiv n\sigma_{\phi}^{2}\) for \(n\gg 1\) according to the central limit theorem. Then, the probability distribution of Faraday depth of a cell in the \(n\)-th layer, \(P_{n}(\phi)\), is, \[P_{n}(\phi)=\frac{1}{\sqrt{2\pi n\sigma_{\phi}^{2}}}\exp\left[-\frac{\phi^{2}}{2n\sigma_{\phi}^{2}}\right] \tag{99}\] Assuming that the polarization intensity and polarization angle are the same for all cells, the FDF is proportional to the histogram of Faraday depth of \(N_{\perp}^{2}N\) cells.
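The cell model described here is straightforward to simulate. The following sketch (illustrative only; NumPy assumed, and \(kn_{e}L_{\rm cell}\) set to unity so that \(\phi\) is in units of \(\sigma_{B}\)) mimics the behavior shown in Figs. 13 and 14: Faraday depth performs a random walk along each line of sight, and the histogram of \(\phi\) over many independent lines of sight approximates \(|F(\phi)|\).

```python
import numpy as np

# Sketch of the turbulent cell model (cf. Eq. (96)); sigma_B = 1 and N = 200 as in the text.
# k*n_e*L_cell is set to 1, so phi is in arbitrary units.
rng = np.random.default_rng(1)
N, n_los = 200, 100                         # cells per line of sight; independent sightlines

def faraday_depths(rng):
    B_par = rng.normal(0.0, 1.0, N)         # random B_parallel in each cell
    return np.cumsum(B_par)                 # phi of cells 1..N: a random walk

phi_all = np.concatenate([faraday_depths(rng) for _ in range(n_los)])
F_abs, edges = np.histogram(phi_all, bins=80)

# With n_los ~ 100 or more, the realization-to-realization scatter of this histogram
# is strongly suppressed, as illustrated in Fig. 14.
print(phi_all.mean(), phi_all.std())
```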
For a large \(N_{\perp}\), the histogram of the \(n\)-th layer approaches the probability distribution in Eq. (99) multiplied by \(N_{\perp}^{2}\) and the FDF is approximately given by, \[F(\phi)\propto\sum_{n=1}^{N}P_{n}(\phi) \tag{100}\] Therefore, the FDF is a sum of many Gaussian functions with different widths. A demonstration of a sum of Gaussian functions with different widths is shown in Fig. 15. The resultant function has a heavier tail and a sharper peak than a Gaussian function. The two bottom panels in Fig. 14 have the same feature. On the other hand, the phase of the FDF is constant with respect to Faraday depth. It should be noted that \(P_{n}(\phi)\) depends on the probability distribution of \(B_{||}\) for small values of \(n\), and the Gaussian approximation is not valid there. However, if \(N\gg 1\), the contribution of the layers of such small \(n\) to the sum in Eq. (100) is not significant and the resultant FDF is well approximated by Eq. (100). So far, the direction and strength of the magnetic field perpendicular to the line of sight were assumed to be the same for all cells. If the perpendicular component is also assumed to be random, the complex polarization intensity is not coherently summed at the same Faraday depth, which causes depolarization. Because the polarization angle is random, the polarization intensity at a specific value of Faraday depth is proportional to \(\sqrt{N_{\phi}}\), where \(N_{\phi}\) is the number of cells which have that value of Faraday depth. In this case, the FDF of each layer is proportional to a square root of a Gaussian function. Even then, the qualitative property remains the same in the sense that the total FDF is a sum of functions which are broader for deeper layers. Contrastingly, there is no correlation between the phases of the total FDF at different Faraday depths. The FDF of turbulent plasma was also calculated with numerical MHD simulations of turbulence in Basu et al. (2019).

Figure 14: Simulation of the FDF due to turbulent plasma. While the parameters are the same as Fig. 13, independent FDFs are added together. The number of independent FDFs is \(N=1,10,100\), and \(1000\), and the FDFs of two different realizations are shown. As \(N\) increases, the difference between the two realizations becomes small.

### Coherent and turbulent magnetic fields

Let us consider a case where both coherent and turbulent magnetic fields exist in an emission region. The parallel component of each cell, \(B_{||}\), is a sum of coherent (\(B_{\rm coh}\)) and turbulent (\(B_{\rm tur}\)) components. \[B_{||}=B_{\rm coh}+B_{\rm tur} \tag{101}\] Here, \(B_{\rm coh}\) is a constant common to all cells, while \(B_{\rm tur}\) is a random variable. In this case, the expectation value of Faraday depth and its variance for a cell at the \(n\)-th layer are given by, \[\langle\phi^{n}\rangle=nkn_{e}L_{\rm cell}B_{\rm coh}\equiv n\phi_{0} \tag{102}\] \[\langle(\phi^{n}-\langle\phi^{n}\rangle)^{2}\rangle=n\sigma_{\phi}^{2} \tag{103}\] It should be noted that both are proportional to \(n\). Therefore, for \(n\gg 1\), the probability distribution of Faraday depth at the \(n\)-th layer is a Gaussian function whose center is shifted by \(\langle\phi^{n}\rangle\). \[P_{n}(\phi)=\frac{1}{\sqrt{2\pi n\sigma_{\phi}^{2}}}\exp\left[-\frac{\left(\phi-n\phi_{0}\right)^{2}}{2n\sigma_{\phi}^{2}}\right] \tag{104}\] As discussed in section 4.4, if the coherent field is dominant in the perpendicular component, the total FDF is again expressed by Eq.
(100), that is, a sum of Gaussian functions. In the current case, not only the width but the central value of the Gaussian functions changes at a certain rate toward deeper layers. An example with \(\phi_{0}=\sigma_{\phi}=1\) is shown in the bottom panel of Fig. 15. As we can see, while each Gaussian function is symmetric, the total FDF is not symmetric and has a finite skewness. The overall shape is determined by the relative strength of coherent and turbulent fields. Ideguchi et al. (2017) focused on the width, skewness and kurtosis to characterize the FDF and investigated the relation to the strength of the coherent field. So far, in our simple model, we treated turbulence by dividing an emission region into cubic cells. In actual interstellar turbulence, fluctuations of various scales exist, and a turbulent magnetic field with a power-law power spectrum is often considered. One of the major models is the Kolmogorov turbulence. Ideguchi et al. (2017) performed a series of numerical simulations of turbulent magnetic fields with a power-law power spectrum to make a comparison with the simple cell model and evaluate the effect of the spectral index on the FDF. As a result, it was shown that the difference between the two models is not significant and the dependence on the spectral index is also small. As we saw in Eq. (96), Faraday depth is a sum of many independent variables and the much information of the original statistical properties of turbulent fields is lost due to the central limit theorem. Thanks to this fact, while the FDF of an emission region with a turbulent magnetic field is universal and relatively easy to interpret physically, it makes it impossible to explore the statistical properties of turbulence. Finally, in the simple cell model, we assumed the thermal-electron density and cosmic-ray electron density to be constant, but they are generally non-uniform in practice. However, the method of considering the uniform and turbulent fields separately is still effective, and the simple cell model provides a generic feature of the FDF of turbulent plasma and works as a benchmark to understand more realistic and complicated FDFs. ### Galactic models There are many kinds of astronomical objects which have magnetic fields and emit synchrotron radiation, and a galaxy is one of the most interesting target of Faraday tomography. In previous studies, there have been some attempts to theoretically predict realistic galaxy FDFs to help the physical interpretation of observationally reconstructed FDFs. Eguchi et al. (2020) introduced global axisymmetric magnetic fields to the simple cell model of the previous section in order to discuss the FDF of spiral galaxies. Fig. 16 is the schematic view of their galactic model. Global magnetic field is given to each cell depending on the position in the galaxy and turbulent field is randomly distributed. The thermal-electron density and cosmic-ray electron density are assumed to be uniform. Therefore, it is physically equivalent to the cell model except that an axisymmetric magnetic field along the galactic plane is considered. However, Eguchi et al. (2020) considered this galactic model to be observed from various directions, and as we will see later, the shape of the FDF changes significantly depending on the inclination angle because the relative relationship between the direction of the global magnetic field and the line-of-sight direction changes. 
Further, they took the effect of a finite beam size into account, and the shape of the FDF also depends on the position in the galaxy.

Figure 15: Summation of Gaussian functions. Top panel shows 5 Gaussian functions with a mean of zero and increasing width (\(\sigma_{\phi}^{2}=1\sim 5\)) and the sum multiplied by \(0.5\). Bottom panel shows the case with an increasing mean and width (\(\phi_{0}=\sigma_{\phi}=1\)).

To calculate the FDF, the \((x,y)\) plane is set along the galactic plane and the \(z\) axis is perpendicular to it as in Fig. 16. The line of sight is assumed to be parallel to the \(y\) axis and the inclination is denoted as \(\theta\). The distance between the galactic center and the intersection of the line of sight and the \(z=0\) plane is denoted as \(\beta\) [pc]. A cell is a cube with a size of (10 pc)\({}^{3}\) and the thickness of the galaxy is set to 1 kpc. The fiducial model has a thermal-electron density of \(n_{e}=0.02\) cm\({}^{-3}\), a global magnetic field strength of 5 \(\mu\)G and a standard deviation of the turbulent fields of 1 \(\mu\)G. In fact, thermal electrons and cosmic-ray electrons are not uniform and the density is often considered to decrease exponentially in the direction perpendicular to the galactic plane with a scale height of 1 kpc. Thus, a uniform galactic plane with a thickness of 1 kpc can be regarded as a zeroth-order approximation. Another important model parameter is the pitch angle, which is the angle between the tangential direction of a circle centered on the galactic center and the global magnetic field line. The global field is ring-like when the pitch angle is zero, which is adopted as a fiducial value. Fig. 17 shows the line-of-sight profile of the perpendicular and parallel components of the global field, \(B_{\perp}\) and \(B_{||}\), respectively, and the Faraday depth taking only the global field into account. Here, the position of the line of sight is set to \(y=200\) pc and \(\beta=0\) pc, and three cases with different inclination, \(\theta=20\), 40 and 60 deg, are compared. First, it should be noted that in the case of face-on observation (\(\theta=0\)) the global field is perpendicular to the line of sight and does not contribute to Faraday depth. A larger inclination leads to a stronger \(B_{||}\) and Faraday depth evolves more rapidly with \(z\). Because the sign of \(B_{||}\) does not change, Faraday depth monotonically increases. For a fixed inclination, \(B_{||}\) is stronger around the plane of \(z=0\) and weaker away from it so that the increase rate of Faraday depth changes accordingly. Contrastingly, \(B_{\perp}\) is weaker around \(z=0\) so that the polarization intensity is smaller there compared to other regions.

Figure 16: Schematic view of the galactic model by Eguchi et al. (2020). Magnetic fields consist of a global axisymmetric field and a turbulent field similar to the cell model in section 4.4. The \((x,y)\) plane is set along the galactic plane and the \(z\) axis is perpendicular to it. The line of sight is assumed to be parallel to the \(y\) axis and the inclination is denoted as \(\theta\). The distance between the galactic center and the intersection of the line of sight and \(z=0\) plane is denoted as \(\beta\) [pc]. ©AAS. Reproduced with permission.

Fig. 18 shows the absolute value of the corresponding
FDFs. It can be seen that for a larger inclination angle, the FDF has a non-zero value over a wider range of Faraday depth. There is a characteristic dip at the center of each FDF. Because Faraday depth is monotonically increasing along the line of sight, the center in Faraday depth space corresponds to the center in physical space (\(z\sim 0\)). The existence of the dip is attributed to the facts that the polarization intensity is relatively weak around \(z\sim 0\), as we saw in Fig. 17, and that the increase rate of \(B_{||}\) is larger there. Thus, for the same galactic model with the same values of parameters, the FDFs are apparently different depending on the inclination.

Figure 17: Line-of-sight profile of the perpendicular and parallel components of the global field and Faraday depth taking only the global field into account, from top to bottom (Eguchi et al., 2020). The position of the line of sight is set to \(y=200\) pc and \(\beta=0\) pc and three cases with different inclination, \(\theta=20,40\) and 60 deg, are compared. ©AAS. Reproduced with permission.

In Eguchi et al. (2020), taking the above parameters as fiducial values, they calculated the FDFs varying the position of the line of sight, the pitch angle and the relative strength of the global and turbulent fields. It was found that the shape of the FDF changes variously depending on these parameters and that the FDF contains much information about galaxies. On the other hand, the diversity that exists even in such a simple model will make it difficult to physically interpret the FDFs. If some of the parameters such as the inclination angle could be measured by optical observations, it would be helpful for the physical interpretation. Ideguchi et al. (2014b) calculated the FDF of a realistic galaxy model constructed by Akahori et al. (2013) with multi-wavelength observation data. The galaxy model consists of the following ingredients.

* thermal-electron density (Cordes & Lazio, 2002): a model of our Galaxy called NE2001 based on dispersion measures of pulsars
* coherent magnetic field along the galactic plane (Sun et al., 2008): axisymmetric and bisymmetric fields obtained from all-sky observations of radio intensity, polarization intensity and RMs
* toroidal magnetic field in the halo (Sun & Reich, 2010): large-scale patterns of the all-sky Faraday rotation map suggest the existence of toroidal magnetic fields in opposite directions north and south of the galactic plane
* poloidal magnetic field in the halo (Giacinti et al., 2010): suggested from the arrival direction distribution of ultra-high energy cosmic rays
* turbulent fields (Kim et al., 1998): estimation from MHD simulations of turbulence in the interstellar medium
* cosmic-ray electron density (Sun et al., 2008): falls exponentially in the direction perpendicular to the galactic plane and in the radial direction, with scale heights of 1 kpc and 8.5 kpc, respectively

Incorporating these elements, Ideguchi et al. (2014b) constructed a Galactic model of \(500~{\rm pc}\times 500~{\rm pc}\) along the galactic plane near the Sun and a region of \(-10~{\rm kpc}<z<10~{\rm kpc}\) perpendicular to it. Then, the FDF was calculated assuming this region (\(500~{\rm pc}\times 500~{\rm pc}\times 20~{\rm kpc}\)) was placed outside our Galaxy and observed from the direction perpendicular to the plane. The angular size of a region of \(500~{\rm pc}\times 500~{\rm pc}\) is \(10^{\prime\prime}\times 10^{\prime\prime}\) if it is located at 10 Mpc. Fig.
19 is the phase and absolute value of the calculated FDF. The FDFs of the whole region and 4 sub-regions of \(250~{\rm pc}\times 250~{\rm pc}\) are plotted. The FDF extends in the range of \(-15~{\rm rad/m^{2}}\lesssim\phi\lesssim 2~{\rm rad/m^{2}}\) and has narrow peaks at \(\phi\sim 0~{\rm rad/m^{2}}\) and a wide peak at \(\phi\sim-12~{\rm rad/m^{2}}\). In addition, there are small fluctuations in the phase and absolute value of the FDF, which suggests that the effect of turbulence is not sufficiently smoothed out. In fact, it was shown that the FDF obtained from a different realization of turbulent magnetic fields has a significantly different shape, and it would be difficult to interpret such FDFs obtained by observations of an emitting region of this size. Nevertheless, it was pointed out that information such as the scale height of the density of thermal electrons and cosmic-ray electrons can be obtained from characteristic quantities such as the width, skewness and kurtosis of the absolute value of the FDF. It is desirable that the FDF be calculated using a model of the entire galaxy. As described above, there are two approaches to understanding the FDF of galaxies: one is to incorporate various elements into a simple model, and the other is to construct a realistic model based on observational data. The former is relatively easy to understand qualitatively because the effect of each element on the FDF can be seen, but it is difficult to make a precise model to make a quantitative comparison with observational data. On the other hand, the latter can be directly compared with observational data, but it is difficult to make physical interpretations and estimate parameters due to the complexity of the model. Since the FDF of galaxies involves many factors and physical processes, it is necessary to deepen the qualitative and quantitative understanding through both approaches and compare it with observational data.

Figure 18: Absolute value of the FDFs corresponding to the three cases in Fig. 17. ©AAS. Reproduced with permission.

### Intergalactic magnetic field

While the FDF contains information on the magnetic field of polarization sources and the line-of-sight distribution of polarized radiation as we saw above, it has been proposed that it is also useful for exploring the intergalactic magnetic field (IGMF) (Akahori & Ryu 2010; Akahori & Ryu 2011). As an example, let us consider two polarization sources along the line of sight. The two sources can be our Galaxy and an external galaxy, or 2 external galaxies in the same direction. The intergalactic space is between the two sources, and, because the gas density is very small there, almost no radiation is emitted from the intergalactic matter. Nevertheless, the presence of a magnetic field of some strength can contribute to Faraday rotation. Therefore, the IGMF can create a gap between the two sources in Faraday-depth space. Fig. 20 represents this situation (Ideguchi et al. 2014a). To identify a gap in Faraday-depth space, we need to measure precisely the widths of the two sources in Faraday-depth space. Because the sidelobes seen in the dirty FDF are much more widespread than the sources really are, they are a major obstacle to the gap identification. Further, as we saw in section 2.3, the RM contributed from the IGMF is expected to be only \(O(1)\) rad/m\({}^{2}\), so the two sources will be very close to each other and high-resolution observation in Faraday-depth space is necessary to resolve them.
Therefore, study of the IGMF with Faraday tomography requires a wideband observation including long-wavelength bands. Ideguchi et al. (2014a) estimated the measurement accuracy of the rotation measure due to the IGMF by Fisher analysis, assuming a combination of observations with LOFAR, GMRT and ASKAP and data analysis with the QU fit method described later. As a result, it was shown that, if the brightness of the two sources is more than 0.1 mJy, an IGMF of about 3 rad/m\({}^{2}\) is measurable with a 1-hour observation with each telescope. Practically, due to the presence of Radio Frequency Interference (RFI), some ranges of the observation bands are not available even if they are covered by the telescope. Based on the situation of the RFI, Akahori et al. (2014); Akahori et al. (2018) discussed which band is suitable for observing the IGMF. Here, it should be noted that, even if the rotation measure due to the IGMF is larger than expected, it does not necessarily create a gap in Faraday depth space. As discussed in section 4.1, in a situation where the magnetic field is inverted inside the emitting region, the edges in real space generally do not correspond to the edges in Faraday depth space. Therefore, depending on the configuration of the magnetic field in the sources or in the intergalactic matter, Faraday rotation in the intergalactic matter may not appear as a gap. Contrastingly, the presence of a gap in Faraday-depth space is a sufficient condition to indicate the existence of the intergalactic magnetic field. In this sense, this method is effective to probe the IGMF.

Figure 19: Phase (top) and absolute value (bottom) of the FDF calculated in Ideguchi et al. (2014b). The FDFs of the whole region (\(500\,\mathrm{pc}\times 500\,\mathrm{pc}\), black solid line) and 4 sub-regions of \(250\,\mathrm{pc}\times 250\,\mathrm{pc}\) are plotted. ©AAS. Reproduced with permission.

Figure 20: Schematic view of the FDF which reflects the intergalactic magnetic field appearing as a gap between two polarization sources (Ideguchi et al. 2014a).

## 5 Algorithms of Faraday tomography

In section 4, we learned that the FDF contains a wealth of information on polarization sources with magnetic fields. However, the FDF cannot be directly observed but has to be reconstructed from observation of the polarization spectrum. Let us recall the relation between the FDF \(F(\phi)\) and the polarization spectrum \(P(\lambda^{2})\): \[P(\lambda^{2})=\int_{-\infty}^{\infty}F(\phi)e^{2i\phi\lambda^{2}}d\phi \tag{105}\] Here, while the integration range is formally \(-\infty<\lambda^{2}<\infty\), we can practically obtain the polarization spectrum only for a limited range of positive \(\lambda^{2}\) and the FDF cannot be reconstructed perfectly. In other words, there are an infinite number of FDFs which give the same polarization spectrum for a finite range, and the observed polarization spectrum cannot determine the FDF uniquely. In this situation, the purpose of Faraday tomography is to obtain a physically reasonable FDF by some means. Accurate reconstruction of the FDF not only provides the physical information of the polarization source, but also leads to the detection of the intergalactic magnetic field as seen in section 4.7. The simplest approach to Faraday tomography is to perform an inverse Fourier transform with only the obtained polarization data, that is, to obtain a dirty FDF, which is called Rotation Measure synthesis (RM synthesis).
As we saw in section 3, a dirty FDF has artificial features such as the spread in Faraday depth space and sidelobes due to the finite observation band, which makes it difficult to physically interpret the FDF. So far, many algorithms have been proposed to alleviate these problems. In this section, we review the outline of some of the major algorithms.

### RM CLEAN

RM CLEAN is a method developed based on the image synthesis algorithm CLEAN (Hogbom, 1974) for radio interferometers, and attempts to reconstruct the FDF by regarding it as a collection of delta functions (Brentjens, 2007; Heald et al., 2009). As we saw in section 3.2, when a delta-function-type FDF is observed with a finite frequency range, the dirty FDF has a finite width as shown in Fig. 4, which is the RMSF \(R(\phi)\). Therefore, RM CLEAN estimates a set of delta functions (CLEAN components) that produces the dirty FDF, assuming that the dirty FDF is a superposition of the RMSF pattern. The specific algorithm is as follows.

1. find a peak in the dirty FDF \(\tilde{F}(\phi)\) and set the peak position as \(\phi=\phi_{0}\)
2. add a delta function \(\gamma F(\phi_{0})\delta(\phi-\phi_{0})\) to the CLEAN components, where \(\gamma\) is a constant gain and is often taken as \(\gamma\sim 0.1\)
3. subtract \(\gamma F(\phi_{0})R(\phi-\phi_{0})\) from \(\tilde{F}(\phi)\)
4. find a peak in the residual, \(\tilde{F}(\phi)-\gamma F(\phi_{0})R(\phi-\phi_{0})\), and repeat 2 and 3 above
5. finish the iteration when the residual is below a constant \(\epsilon\) at all \(\phi\)
6. multiply each CLEAN component by a Gaussian function with the same width as the RMSF, considering the resolution in Faraday-depth space determined by the observation frequency range
7. add the residual to obtain the final result, called the cleaned FDF

Fig. 21 is an example of RM CLEAN. Here, a delta-function source is put at \(\phi=30\) rad/m\({}^{2}\) and the dirty FDF and cleaned FDF assuming an observation of a \(700\sim 1800\) MHz band are shown. The dirty FDF is the RMSF itself because there is only one delta function. The cleaned FDF has a Gaussian function with the width of the RMSF (FWHM \(\sim 22\) rad/m\({}^{2}\)) at the correct position and small residuals. Thus, the polarization source is originally a delta function, but it is reproduced with a finite width due to the finite observation band. Nevertheless, RM CLEAN eliminates the sidelobes and outputs an FDF close to the original FDF.

Figure 21: Example of RM CLEAN. A delta-function source is put at \(\phi=30\) rad/m\({}^{2}\) and the dirty FDF (red) and cleaned FDF (blue) assuming an observation of a \(700\sim 1800\) MHz band are shown.

Figure 22: Example of RM CLEAN for a Gaussian source with \(\phi=30\) rad/m\({}^{2}\). The dirty FDF (red) and cleaned FDF (blue) are plotted for \(\sigma=4.5\) rad/m\({}^{2}\) (thick) and \(\sigma=13\) rad/m\({}^{2}\) (thin). An observation with a \(700\sim 1800\) MHz band is assumed.

RM CLEAN is widely used in polarization analysis because it produces reasonable results for its low computational cost compared to other methods. However, RM CLEAN may not work well in some cases. That is, for example, when two delta-function FDFs are located close to each other in the \(\phi\) space, by about the FWHM of the RMSF or closer (Farnsworth et al., 2011). As in the example with \(\Delta\phi=2\) [rad/m\({}^{2}\)] in the bottom panel of Fig. 5, when two sources are close, they interfere and only one peak appears in the dirty FDF.
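Before discussing this failure mode further, it may help to write the loop of steps 1–7 down explicitly. The following is a minimal, illustrative sketch only (NumPy assumed); `phi`, `dirty` and `rmsf` are taken to be sampled on the same Faraday-depth grid with the RMSF peak at the centre of its array, and the gain and threshold are typical choices rather than fixed prescriptions.

```python
import numpy as np

# Illustrative sketch of the RM CLEAN loop (steps 1-7 above); not a production code.
def rm_clean(phi, dirty, rmsf, fwhm, gain=0.1, threshold=1e-3, max_iter=1000):
    residual = dirty.astype(complex)
    components = np.zeros_like(residual)
    centre = np.argmax(np.abs(rmsf))
    for _ in range(max_iter):
        peak = np.argmax(np.abs(residual))               # steps 1 / 4
        if np.abs(residual[peak]) < threshold:           # step 5
            break
        comp = gain * residual[peak]
        components[peak] += comp                         # step 2
        residual -= comp * np.roll(rmsf, peak - centre)  # step 3 (roll wraps; fine on a wide grid)
    # step 6: restore each component with a Gaussian of the RMSF's FWHM
    dphi = phi[1] - phi[0]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    xk = np.arange(-4.0 * sigma, 4.0 * sigma + dphi, dphi)
    beam = np.exp(-0.5 * (xk / sigma) ** 2)
    cleaned = np.convolve(components, beam, mode="same")
    return cleaned + residual                            # step 7
```

Keeping this loop in mind, let us return to the two-source example above.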
Then, because RM CLEAN assumes that there is a delta-function source at the peak of the dirty FDF, in this example, a CLEAN component stands at \(\phi=0\) where there is actually no source. This phenomenon is called Rotation Measure ambiguity (RM ambiguity). As we saw in the bottom panel of Fig. 6, the interference of two sources depends on the difference in polarization angle as well. Kumazaki et al. (2014) performed a series of simulations to identify the condition of RM ambiguity. More specifically, they consider two delta-function sources and investigate the condition of RM ambiguity varying the gap in the \(\phi\) space, difference in polarization angle and brightness ratio. As a result, it was discovered that RM ambiguity occurs even for a gap of \(1.5\times\) FWHM depending on the condition. Miyashita et al. (2016) performed more detailed simulations to find that the estimation of Faraday depth, polarization angle and polarization intensity of two sources is biased by several tens of percent even if RM ambiguity does not happen, when the gap between the two sources is less than \(1.2\times\) FWHM. Further, it is difficult to reconstruct FDFs with extended structure in the \(\phi\) space by RM CLEAN. Fig. 22 shows an example of a Gaussian function. It can be seen that a wider Gaussian function leads to a cleaned FDF with a smaller peak value and the reproducibility is poorer. As we saw above, while RM CLEAN is simple and the result of reconstruction is often reasonable, it should be noted that it may return a poor result depending on the shape of the original FDF. ### QU fit QU fit is another common method of Faraday tomography, which assumes a model (functional form) of the FDF and determine the model parameters by fitting the polarization spectrum calculated from it to the observed data. Some of the commonly used models are following: * delta function (parameters: \(\boldsymbol{\theta}=\{f_{0},\chi_{0},\phi_{0}\}\)) \[F(\phi)=f_{0}e^{2i\chi_{0}}\delta(\phi-\phi_{0})\] (106) * Gaussian function (parameters: \(\boldsymbol{\theta}=\{f_{0},\chi_{0},\phi_{0},\sigma\}\)) \[F(\phi)=\frac{f_{0}}{\sqrt{2\pi}\sigma}e^{-\frac{(\phi-\phi_{0})^{2}}{2\sigma ^{2}}+2i\chi_{0}}\] (107) * top-hat function (parameters: \(\boldsymbol{\theta}=\{f_{0},\chi_{0},\phi_{0},\phi_{1}\}\), where \(\phi_{1}>\phi_{0}\)) \[F(\phi)=f_{0}e^{2i\chi_{0}}\left[\Theta(\phi-\phi_{0})-\Theta(\phi-\phi_{1})\right]\] (108) In practice, because there may be multiple sources along the line of sight or the FDF can have a complicated structure, a model with multiple of these functions is also used. The following chi-square is often used as an index to evaluate the fit between the observed data and the model. \[\chi^{2}(\boldsymbol{\theta})=\sum_{j}\frac{(P_{\rm model}(\lambda_{j}^{2}, \boldsymbol{\theta})-P_{\rm obs}(\lambda_{j}^{2}))^{2}}{\sigma_{j}^{2}} \tag{109}\] Here \(j\) represents the index of wavelength channel, \(\sigma_{j}\) is an error of \(P_{\rm obs}(\lambda_{j}^{2})\) and \(\boldsymbol{\theta}\) is a vector of model parameters. By finding the parameters that minimize this chi-square value, we can find an FDF that explains the observed data well. In RM Synthesis, the inverse Fourier transform is performed on the polarization spectrum in the observed band, putting artificially \(P(\lambda^{2})=0\) in the unobserved band including negative \(\lambda^{2}\). The same is true for RM CLEAN because it is based on dirty FDF, and such an operation causes the delta function source to spread in the \(\phi\) space. 
On the other hand, QU fit does not require any explicit assumption on the unobserved bands, limiting the possibility for artifacts in the FDF. In QU fit, the shape of FDFs is assumed in advance, but it is not obvious whether the true FDF can be accurately expressed by the assumed functional form. In fact, as we saw in section 4, FDFs are not expressed in a simple functional form even for a simple galactic model. Therefore, it is necessary to try several models with different functional forms and number of sources and to select the model that best approximates the true FDF. The chi-square mentioned earlier is a common index of the goodness of data fitting, but it cannot simply be considered that the model with a smaller value of chi-square is better. In general, a model with more parameters is easier to fit to the data, but a complex model with too many parameters is difficult to interpret physically and is not a good model. Further, due to the presence of observational errors, even if the correct model is used, the data cannot be perfectly fitted, and an unnecessarily complex model will overfit the data. Therefore, the balance between the goodness of fit to the data and the simplicity of the model is important, and the information criterion is often used as an index for model selection. There are various information criteria, and the simple information criteria that are often used are as follows. * AIC (Akaike information criterion) \[{\rm AIC}=-2\ln{(L)}+2k\] (110) * BIC, Bayesian information criterion) \[\text{BIC}=-2\ln{(L)}+k\ln{(n)}\] (111) Here, \(L=e^{-\chi^{2}/2}\) is likelihood function, \(n\) is the number of observation data and \(k\) is the number of parameters. In the expression of these two information criteria, the first term represents the goodness of fit to the data and becomes smaller as it fits better. The second term is a penalty for complex models, which increases with more parameters. By selecting a model that minimizes the information criterion, the fit to the data and the simplicity of the model can be balanced. Comparing AIC and BIC, BIC gives a stronger penalty for the number of parameters unless the number of data is extremely small, so a simpler model tends to be selected using BIC. There are other types of information criteria, and the applicability varies depending on the size of the data and the nature of the error. In estimating parameters and selecting models, it is necessary to search the parameter space to find the place where the chi-square is minimized. However, this is not easy for the following two reasons. One is that the number of parameters is generally large. As there can be multiple sources within an observation field or one source can have a complex FDF as seen in section 4, a model with multiple delta functions and Gaussian functions is often used to fit data. For example, if a model with three Gaussian functions is adopted, the number of parameters is 12 and it is not practical to perform a grid search in the 12-dimensional parameter space considering the calculation time. Therefore, various methods for efficiently searching the parameter space have been developed such as the Markov Chain Monte Carlo (MCMC) and the nested sampling. Another difficulty lies in the chi-square structure in the parameter space. In general, the further away from the correct parameter set in the parameter space, the larger the chi-square value becomes. 
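To illustrate the QU-fit procedure and the use of these information criteria, the following sketch (our own code; the mock data, noise level and starting guesses are arbitrary) fits one- and two-component delta-function models of Eq. (106) and scores them with AIC and BIC. For Gaussian errors, \(-2\ln L\) equals the chi-square of Eq. (109); here \(n\) counts the \(Q\) and \(U\) samples separately, which is one possible convention.

```python
import numpy as np
from scipy.optimize import least_squares

def model_P(lam2, theta):
    """Sum of delta-function components; theta = [f0, chi0, phi0] per component."""
    P = np.zeros_like(lam2, dtype=complex)
    for f0, chi0, phi0 in np.reshape(theta, (-1, 3)):
        P += f0 * np.exp(2j * (chi0 + phi0 * lam2))
    return P

def chi2(theta, lam2, P_obs, sigma):
    return np.sum(np.abs((model_P(lam2, theta) - P_obs) / sigma) ** 2)

def fit_and_score(n_comp, lam2, P_obs, sigma, theta0):
    """Least-squares QU fit for n_comp components, returning chi^2, AIC and BIC."""
    resid = lambda th: np.concatenate([(model_P(lam2, th) - P_obs).real,
                                       (model_P(lam2, th) - P_obs).imag]) / sigma
    sol = least_squares(resid, theta0)
    k = 3 * n_comp                       # number of free parameters
    n = 2 * len(lam2)                    # Q and U data points
    c2 = chi2(sol.x, lam2, P_obs, sigma)
    return sol.x, c2, c2 + 2 * k, c2 + k * np.log(n)   # chi^2, AIC, BIC

# Mock data: two delta-function sources, then compare 1- and 2-component models
c = 299792458.0
lam2 = (c / np.linspace(700e6, 1800e6, 200)) ** 2
truth = [1.0, np.pi / 3, 5.0, 1.0, np.pi / 2, 10.0]
sigma = 0.05
rng = np.random.default_rng(0)
P_obs = model_P(lam2, truth) + sigma * (rng.normal(size=200) + 1j * rng.normal(size=200))
for n_comp, th0 in [(1, [1, 0.5, 7]), (2, [1, 0.5, 4, 1, 1.0, 11])]:
    _, c2, aic, bic = fit_and_score(n_comp, lam2, P_obs, sigma, th0)
    print(f"{n_comp} components: chi2={c2:.1f}  AIC={aic:.1f}  BIC={bic:.1f}")
# The information criteria should favour the two-component model here.
```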
However, depending on the nature of the problem, the chi-square can take a local minimum at a parameter set away from the correct one. If there are many such local minima, it becomes difficult to find the global minimum by the gradient method or MCMC. In the case of Faraday tomography, periodic functions are often involved due to the nature of Fourier transform and many local minima of chi-square are considered to exist in the parameter space. As an example to show this, let us consider an FDF which consists of two delta functions. They are assumed to have the same polarization intensity and other parameters are set as \(\phi_{1}=5\) rad/m\({}^{2}\),\(\phi_{2}=10\) rad/m\({}^{2}\),\(\chi_{1}=\pi/3\) rad and \(\chi_{2}=\pi/2\) rad. Fig. 23 shows the contour of chi-square in \((\phi_{1},\phi_{2})\), \((\phi_{2},\chi_{2})\) and \((\chi_{1},\chi_{2})\) planes, assuming an observation band of \(0.1\) m\({}^{2}\leq\lambda^{2}\leq 0.5\) m\({}^{2}\). Here, parameters other than the two are set to the correct values. Observation errors are not taken into account so that \(\chi^{2}=0\) at the correct parameter set shown as a cross. While the chi-square takes the global minimum here, several local minima can be seen in \((\phi_{1},\phi_{2})\) plane. Miyashita et al. (2019) investigated the performance of QU fit by the MCMC method in model selection and parameter estimation, by simulating observation of two Gaussian-function sources. It was found that, compared to RM CLEAN, QU fit can resolve two sources in many cases even if the dirty FDF has only one peak. While QU fit can resolve two sources more often for a larger separation in the \(\phi\) space, due to the interference in the \(\phi\) space of the two sources, it may not work even if the separation is larger than the FWHM of the RMSF. The structure of chi-square in the parameter space becomes more complicated as the number of parameters increases. Fig. 24 shows an isosurface of chi-square in \((\phi_{1},\phi_{2},\phi_{3})\) space in the case with three delta-function sources. The polarization intensity is assumed to be the same for the three sources and other parameters are set as \(\phi_{1}=5\ \mathrm{rad/m^{2}},\chi_{1}=\pi/3\ \mathrm{rad},\phi_{2}=10\ \mathrm{rad/m^{2}},\chi_{2}=\pi/2 \ \mathrm{rad},\phi_{3}=18\ \mathrm{rad/m^{2}}\) and \(\chi_{3}=\pi/6\ \mathrm{rad}\). Parameters other than \((\phi_{1},\phi_{2},\phi_{3})\) are set to the correct values to draw the isosurface. We can see many local minima in \((\phi_{1},\phi_{2},\phi_{3})\) space and finding the global minimum is much harder in this case compared to the previous case with two sources. In this way, an FDF model with more parameters is expected to have more complicated structure of chi-square in the parameter space. More efficient algorithm of QU fit is necessary to search a huge and complicated parameter space and to find the global minimum. Computational cost is a serious problem, especially in an era where millions of polarization sources can be obtained with wide-field surveys. ### Sparse modelling As we saw before, the fundamental difficulty of Faraday tomography, that is, the procedure to obtain the FDF from observed polarization spectrum, is that polarization spectrum can be measured only in a finite range of \(\lambda^{2}\), while complete Fourier transform need it for \(-\infty<\lambda^{2}<\infty\). There are infinite number of FDFs which can explain observed polarization spectrum of a finite range. 
Sparse modelling, or compressive sampling, assumes that the FDF is sparse and search for the sparsest possible solution while reproducing the observation data. It is an idea similar to Occam's razor which selects the simplest model that can explain observation data. In astronomy, sparse modelling has been successful in image synthesis of radio interferometry and application to Faraday tomography was initiated in Li et al. (2011); Andrecut et al. (2012); Akiyama et al. (2018) and recently further developed in Carcamo et al. (2023). To explain the principle of sparse modelling, we discretize the relation between polarization spectrum and the FDF, Eq. (40). \[P(\lambda_{j}^{2})=\sum_{k}e^{2i\phi_{k}\lambda_{j}^{2}}F(\phi_{k}) \tag{112}\] This equation can be expressed as, \[P_{j} =\sum_{k}M_{jk}F_{k} \tag{113}\] \[P_{j} \equiv P(\lambda_{j}^{2}),\quad j=1,2,\cdots,J\] (114) \[F_{k} \equiv F(\phi_{k}),\quad k=1,2,\cdots,K\] (115) \[M_{jk} \equiv e^{2i\phi_{k}\lambda_{j}^{2}}, \tag{116}\] where \(J\) is the number of data, \(K\) is the number of grid in the \(\phi\) space and we generally have \(J<K\). We simplify the expression further as, \[\mathbf{P}=\mathbf{M}\mathbf{F}. \tag{117}\] Eq. (117) is a set of linear equations, but since the number of unknowns \(F_{k}\) (\(K\)) is larger than the number of equations (\(J\)), there are an infinite number of solutions that satisfy this. Then, we assume that the true solution has a sparsity. Most simply, a sparsity is that the number of non-zero \(F_{k}\) is the fewest of all solutions. Denoting \(L_{0}\) norm, the number of non-zero component, as \(||\mathbf{F}||_{0}\), we consider the following minimization problem. \[\mathrm{min}_{\mathbf{F}}||\mathbf{F}||_{0}\ \ \mathrm{subject\ to}\ \ \mathbf{P}=\mathbf{M}\mathbf{F} \tag{118}\] Here, "\(\mathrm{min}\mathbf{r}\)" represents minimization with respect to \(\mathbf{F}\). Therefore, this problem is to find a solution with the least value of \(L_{0}\) norm among infinite number of solutions which explain the observation data (\(\mathbf{P}=\mathbf{M}\mathbf{F}\)). This is the basic idea of sparse modelling, but solving this problem actually requires trying all combinations of which \(F_{k}\) should be zero (combinatorial optimization) and is practically impossible to solve when \(K\) is large due to the computational cost. We therefore define a problem that is equivalent to \(L_{0}\)-norm minimization and feasible. First, let us define \(L_{p}\) norm with \(p\geq 1\) as, \[||\mathbf{F}||_{p}\equiv\left(\sum_{j}|F_{j}|^{p}\right)^{1/p}. \tag{119}\] We consider minimization of \(L_{1}\) norm, instead of \(L_{0}\) norm. \(L_{1}\) norm is the sum of absolute values of all \(F_{j}\), and a solution with minimum \(L_{1}\) norm often has minimum \(L_{0}\) norm. Minimization of \(L_{1}\) norm is convex optimization, rather than combinatorial optimization, and can be solved with a reasonable computational cost. Therefore, a problem to solve can be written as, \[\min_{\mathbf{F}}||\mathbf{F}||_{1}\ \ \text{subject to}\ \ \mathbf{P}=\mathbf{MF} \tag{120}\] In a practical situation, due to observational errors, \(\mathbf{P}=\mathbf{MF}\) should not be satisfied exactly and a solution with a small value of \(||\mathbf{P}-\mathbf{MF}||_{2}^{2}\) can be considered to explain observation data well. 
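As a concrete illustration of Eqs. (112)-(120), the sketch below (ours; the grids, regularization weight and the simple proximal-gradient solver are choices made for illustration only) builds the matrix \(M_{jk}\) and approximately minimizes the penalized form \(||\mathbf{P}-\mathbf{MF}||_{2}^{2}+\Lambda||\mathbf{F}||_{1}\) discussed next; production analyses use dedicated solvers.

```python
import numpy as np

def forward_matrix(lam2, phi):
    """M_jk = exp(2i phi_k lam2_j), Eq. (116): maps F(phi_k) to P(lam2_j)."""
    return np.exp(2j * np.outer(lam2, phi))

def ista_l1(P, M, lam_reg, n_iter=500):
    """Approximately minimize ||P - M F||_2^2 + lam_reg ||F||_1 for complex F
    by proximal gradient descent (ISTA) with complex soft-thresholding."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2      # chosen so the effective step is 1/Lipschitz
    F = np.zeros(M.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad_half = M.conj().T @ (M @ F - P)    # half of the gradient of the quadratic term
        z = F - step * grad_half
        mag = np.abs(z)
        shrink = np.maximum(mag - 0.5 * step * lam_reg, 0.0)
        F = np.where(mag > 0, shrink / np.maximum(mag, 1e-30) * z, 0.0)
    return F

# Mock observation of two Faraday-thin sources and their sparse recovery
c = 299792458.0
lam2 = (c / np.linspace(700e6, 1800e6, 150)) ** 2
phi = np.linspace(-50.0, 50.0, 201)             # 0.5 rad/m^2 grid
M = forward_matrix(lam2, phi)
F_true = np.zeros(len(phi), dtype=complex)
F_true[np.searchsorted(phi, -20.0)] = np.exp(2j * np.pi / 3)
F_true[np.searchsorted(phi, 25.0)] = np.exp(2j * np.pi / 2)
rng = np.random.default_rng(0)
P_obs = M @ F_true + 0.01 * (rng.normal(size=150) + 1j * rng.normal(size=150))
F_hat = ista_l1(P_obs, M, lam_reg=1.0)
print(np.sort(phi[np.argsort(np.abs(F_hat))[-2:]]))   # strongest depths, near -20 and 25
```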
Denoting a typical observation error as \(\sigma\), the minimization problem which takes observation errors into account is written as, \[\min_{\mathbf{F}}||\mathbf{F}||_{1}\ \ \text{subject to}\ \ ||\mathbf{P}- \mathbf{MF}||_{2}^{2}<\sqrt{J}\sigma. \tag{121}\] In fact, since it is necessary to make both \(L_{1}\) norm and the deviation from the observed data as small as possible, the problem of Eq. (121) is equivalent to the following problem. \[\min_{\mathbf{F}}\left(||\mathbf{P}-\mathbf{MF}||_{2}^{2}+\Lambda||\mathbf{F} ||_{1}\right) \tag{122}\] Here, the first term is the chi-square that represents the goodness of fit of the observed data with the model, and the second term is the \(L_{1}\) norm. \(\Lambda\) is a hyperparameter that determines the balance between data fit and sparsity, and needs to be given in advance. Such a problem setting is called LASSO (the least absolute shrinkage and selection operator). Two methods to extend Eq. (122) have been studied to improve the performance. One is to change the basis. So far, sparsity has been considered to be that \(F\) is non-zero with as few \(\phi_{k}\) as possible, but this is not the case with sources spread in the Faraday depth space. Denoting a new basis of FDFs as \(\psi_{\ell}(\phi)\), we can write as, \[F(\phi)=\sum_{\ell}\xi_{\ell}\psi_{\ell}(\phi), \tag{123}\] where \(\xi_{\ell}\) are coefficients. It should be noted that the basis of the original FDF is delta functions as \(\psi_{\ell}(\phi)=\delta(\phi-\phi_{\ell})\) and \(\xi_{\ell}=F(\phi_{\ell})\). Other bases often used are Gaussian functions, top-hat functions and wavelets. If we take Gaussian functions as the basis, the sparsity means that \(F(\phi)\) represented by the sum of as few Gaussian functions as possible. In order to rewrite the LASSO with a new basis, we write the new basis with delta functions. \[\psi_{\ell}(\phi)=\sum_{k}V_{k\ell}\delta(\phi-\phi_{k}) \tag{124}\] Then, the FDF can be written as, \[F(\phi)=\sum_{\ell}\xi_{\ell}\psi_{\ell}(\phi)=\sum_{k,\ell}\xi_{\ell}V_{k \ell}\delta(\phi-\phi_{k}), \tag{125}\] and we have, \[\xi_{\ell}=\sum_{k}V_{k\ell}^{-1}F_{k}\equiv\sum_{k}W_{k\ell}F_{k}, \tag{126}\] where \(W_{k\ell}\) is the inverse matrix of \(V_{k\ell}\). Therefore, the LASSO in the new basis can be written as, \[\min_{\mathbf{F}}\left(||\mathbf{P}-\mathbf{MF}||_{2}^{2}+\Lambda||\mathbf{ WF}||_{1}\right) \tag{127}\] In order to choose good basis, we need to consider in what sense the true FDF is sparse. Li et al. (2011); Andrecut et al. (2012) used wavelet basis to reconstruct Faraday-thin and Faraday-thick sources through a series of simulations. Sparse modelling with delta-function basis tends to be too sparse and not to give a reasonable reconstruction of FDFs. Then, Akiyama et al. (2018) adopted constraints on total variation (TV) and total squared variation (TSV) to obtain sparse and smooth FDFs. This is the second extension of the LASSO. * total variation (TV) \[||\mathbf{F}||_{\text{TV}}=\sum_{k}|F_{k+1}-F_{k}|\] (128) * total squared variation (TSV) \[||\mathbf{F}||_{\text{TSV}}=\sum_{k}|F_{k+1}-F_{k}|^{2}\] (129) These are constraints on the difference between adjacent \(F_{k}\). If we adopt the constraint on total variation, the optimization problem can be written as follows. 
\[\min_{\mathbf{F}}\left(||\mathbf{P}-\mathbf{MF}||_{2}^{2}+\Lambda_{\ell}||\mathbf{F}||_{1}+\Lambda_{t}||\mathbf{F}||_{\text{TV}}\right) \tag{130}\] Here, \(\Lambda_{\ell}\) and \(\Lambda_{t}\) are hyperparameters which determine the weights of the \(L_{1}\)-norm and total-variation constraints. Fig. 25 compares the results from RM CLEAN, \(L_{1}+\text{TV}\) and \(L_{1}+\text{TSV}\) for mock observation data of \(300\ \text{MHz}-3\ \text{GHz}\). For the FDF model, the galactic model of Ideguchi et al. (2014b) shown in Fig. 19 was adopted, which is expected to express the complexity of realistic FDFs. The hyperparameters are determined through cross validation, and \((\Lambda_{\ell},\Lambda_{t})=(10,1)\) and \((\Lambda_{\ell},\Lambda_{t})=(10,10^{3})\) were chosen for \(L_{1}+\text{TV}\) and \(L_{1}+\text{TSV}\), respectively. We can see that RM CLEAN picks up the major peaks, but hardly reproduces the overall shape. On the other hand, \(L_{1}+\mathrm{TV}\) and \(L_{1}+\mathrm{TSV}\) do not reproduce small-scale structure, especially the sharp peak at \(\phi=0\), while the overall shape is well reproduced. It should be noted that the peak at \(\phi\sim-11\) rad/m\({}^{2}\) is resolved with a slightly higher resolution compared with the FWHM of the RMSF of this band, which is about 3.5 rad/m\({}^{2}\). Sparse modelling has been very successful in image synthesis for radio interferometry, but its application to Faraday tomography is still immature and needs further study. In particular, because it is expected that the FDFs of some polarization sources are Faraday thick and have a complicated structure, it is necessary to select a basis that fits well with such FDFs. Since the amount of calculation for sparse modelling is generally large compared to other methods such as RM CLEAN, it is also necessary to develop an algorithm that can reduce the computational cost. ### CRAFT Cooray et al. (2021) proposed a new method, called CRAFT (Constraining and Restoring iterative Algorithm for Faraday Tomography), to estimate the FDF from the observed polarization spectrum. This is an application of an algorithm (Cooray et al., 2020) which estimates unobserved signals by imposing reasonable assumptions about the properties of signals in Fourier space. A conceptual diagram of CRAFT is shown in Fig. 26. The basic idea is to limit the \(\phi\)-space range where non-zero polarization intensity exists and to estimate the polarization spectrum of the unobserved bands, including negative \(\lambda^{2}\). This restriction on the FDF comes from the fact that \(\phi\) does not become too large within a realistic source and from an assumption of sparsity of the FDF. The specific algorithm is as follows. 1. Compute the dirty FDF from the Fourier transform of the observed polarization spectrum \(\tilde{P}(\lambda^{2})=W(\lambda^{2})P(\lambda^{2})\) and denote it as \(F_{0}(\phi)\). As we saw in section 3.2, the dirty FDF has sidelobes and the polarization intensity spreads over a wide range of Faraday-depth space. 2. Compute a new FDF \(F_{i}^{\prime}(\phi)\) from \(F_{i}(\phi)\) in the following way. * Consider a physically reasonable limit on the range in Faraday-depth space and set the polarization intensity outside that range to zero. * Set the polarization intensity at \(\phi\) with \(|F(\phi)|<\mu\) to zero, assuming the sparsity of the FDF. Here \(\mu\) is a threshold we give in advance. Further, replace \(|F(\phi)|\) with \(|F(\phi)|-\mu\) for the \(\phi\) range with \(|F(\phi)|>\mu\). 3. 
Perform Fourier transform of \(F_{i}^{\prime}(\phi)\) to obtain polarization spectrum \(P_{i+1}^{\prime}(\lambda^{2})\). This polarization spectrum have non-zero values in unobserved band including \(\lambda^{2}<0\) as well. 4. Replace \(P_{i+1}^{\prime}(\lambda^{2})\) of the observed band with the observation data \(\tilde{P}(\lambda^{2})\). This new spectrum is denoted as \(P_{i+1}(\lambda^{2})\). 5. Compute a new FDF \(F_{i+1}(\phi)\) from Fourier transform of \(P_{i+1}(\lambda^{2})\). 6. Repeat the steps 2-5 above. \(P_{i}(\lambda^{2})\) coincides with the observation data in the observed band and is expected to approach the true value outside the observed band through the iteration. Thus, \(F_{i}(\phi)\) is also expected to approach the true FDF. Terminate the iteration when \(F_{i}(\phi)\) sufficiently converges. Figure 25: Simulations of reconstruction of an FDF through sparse modelling (Akiyama et al., 2018). A model of galaxy developed in Heguchi et al. (2014) was used as the ground truth and an observation band of \(300\,\mathrm{MHz}-3\,\mathrm{GHz}\) was assumed the results of RM CLEAN (top), \(L_{1}+\mathrm{TV}\) and \(L_{1}+\mathrm{TSV}\) are compared. Fig. 27 is an example of reconstruction of the FDF by CRAFT with a mock polarization spectrum. The galactic model of Ideguchi et al. (2014) was used as the true FDF and an observation band of \(300\,\mathrm{MHz}-3\) GHz was assumed. Compared to the result of sparse modelling in Fig. 25, the overall shape is reproduced with about the same quality and the reproduction of the sharp peak at \(\phi\!=\!0\) is slightly better. Again, It should be noted that the peaks are resolved with a slightly higher resolution compared with the FWHM of the RMSF of this band (\(3.5\,\mathrm{rad}/\mathrm{m}^{2}\)). In this example, 532 iterations were required to converge the results, but the computation time was comparable to RM CLEAN and significantly faster than sparse modelling. Cooray et al. (2021) notes that the assumption applied in step 2 can be flexible. Cooray et al. (2022) has extended the method to use wavelets as the basis of sparsity with different sets of assumptions, further improving the reproducibility of the FDF. ## 6 Application of Faraday tomography In recent years, wide-band polarization observation has become possible and Faraday tomography has been applied to real data. In this section, we will see some examples of application of Faraday tomography. For other important applications, see Mao et al. 2017; Dickey et al. 2019; Turic et al. 2021; Pasetto et al. 2021; Carretti et al. 2022; Ideguchi et al. 2022; Pomakov et al. 2022, for example. ### Resolution in Faraday-depth space O'Sullivan et al. (2012) performed observations of 4 quasars with ATCA (Australia Telescope Compact Array) with a wide observational band of \(1.1-3.1\) GHz and frequency resolution of 1 MHz. The beam size was \(\sim\!10^{\prime\prime}\times 10^{\prime\prime}\) and all quasars were not resolved in image and observed as point sources. However, the polarization spectra of two of four quasars (PKS B1610-771 and PKS B1039-47) could not be fitted well with a single delta-function model and multiple sources were necessary to explain the data. Therefore, Faraday tomography could resolve the components of the two sources which could not be resolved by imaging. Fig. 28 shows the polarization spectrum of PKS B1039-47 and the result of QU-fitting with 3 delta functions. 
The bottom left panel is polarization angle as a function of \(\lambda^{2}\) and it is evident that it is not linear and cannot be fitted by a single delta function. The polarization fraction shown in the top right panel oscillates with \(\lambda^{2}\) and indicate that multiple sources are interfering. The bottom-left panel also implies the existence of interfering sources because it should be circular in the case of a single source. This polarization spectrum was fitted with one, two and three delta-function models and the last model was selected by the BIC. By this model, the complicated behavior of both \(Q\) and \(U\) for the whole observational band are well explained as shown in the top left panel. Figure 27: Reconstruction of a galactic FDF model (Ideguchi et al. 2014) with CRAFT (Cooray et al. 2021). An observation band of \(300\,\mathrm{MHz}-3\) GHz was assumed. Figure 26: Conceptual diagram of CRAFT (Cooray et al. 2022). Here, “Fourier transform” and “inverse Fourier transform” correspond to the inverse RM synthesis and RM synthesis, respectively. The obtained FDF of PKS B1039-47 is shown in Fig. 29. The dirty FDF, cleaned FDF and the result of QU-fit are compared. The cleaned FDF also implies the existence of multiple sources but QU-fit resolves the main peak of the cleaned FDF into two delta functions, whose Faraday depths are about \(-13\) rad/m\({}^{2}\) and \(-30\) rad/m\({}^{2}\), respectively. The gap of the two sources in Faraday-depth space is smaller than the FWHM of the RMSF of the observational band which is about \(60\) rad/m\({}^{2}\). Thus, QU-fit succeeded in resolving two sources which are closer than the resolution. In fact, PKS B1039-47 was resolved in image with VLBI observation by the LBA (Australian Long Baseline Array) and a jet structure extending about \(20\) msec (\(\sim 160\) pc) was found. In the jet, there are \(3\) spots which are bright in Stokes I and it is possible that the \(3\) polarization sources found by QU-fit correspond to these spots. If this is the case, the \(3\) delta functions are independent sources. In fact, it is also possible that there is only one polarized source with a complicated FDF and it is necessary to perform observations with a wider band and Faraday tomography with more advanced methods to verify the possibility. ### Correspondence between physical space and Faraday-depth space As stated before, because there is generally no one-to-one correspondence between physical space and Faraday-depth space, distribution of polarization intensity in physical space cannot be directly known from the FDF, even if we could reconstruct the FDF perfectly. However, a combination with other kind of observation may allow us to infer the distribution in physical space. Here we show an example of such studies. Thomson et al. (2019) observed Sharpless 2-27 (Sh 2027) with a band of \(300-480\) MHz with the Parkes radio telescope. "Sharpeless" is the name of a catalogue of HII regions and Sh 2-27 is an HII region with an angular size of \(5.5\) degree around Zeta Ophiuchi. In this paper, they first obtained the Faraday cube of a region around Sh 2-27, which is \(3\)-dimensional data which consists of the FDF at each point in the sky, by Faraday tomography of the polarization spectra. Fig. 30 represents the peak height of the FDF. The peak height within Sh 2-27 region is relatively low compared with the outside but a finite value is detected that is significantly larger than the error. 
It is considered that Sh 2-27 itself does not emit synchrotron radiation. According to the RM catalogue of Taylor et al. (2009), there is a large dispersion in the Faraday depths of extragalactic objects behind Sh 2-27, and a part of the dispersion, \(\sigma\approx 74\pm 1\) rad/m\({}^{2}\), is estimated to be contributed by Sh 2-27. Figure 28: Polarization spectrum of PKS B1039-47 obtained by ATCA observation (O'Sullivan et al., 2012). \(Q\) and \(U\) (top-left), polarization fraction (top-right) and polarization angle (bottom-left) are shown as functions of \(\lambda^{2}\). Bottom-right panel represents the \((Q,U)\) plane. Curves are the results of QU-fitting with the \(3\)-delta-function model. Figure 30: Peak heights of the FDF around the Sh 2-27 region (Thomson et al., 2019). White curves represent HII regions and the corresponding central stars are represented by white stars. Figure 29: FDFs from Faraday tomography of the polarization spectrum in Fig. 28 (O'Sullivan et al., 2012). The dirty FDF (dashed), cleaned FDF (solid) and QU-fit with \(3\) delta functions (asterisks) are compared. At low frequencies such as 300-480 MHz, the polarization from behind Sh 2-27 is considered to be completely depolarized due to beam depolarization. Then, the polarized emission coming from the direction of Sh 2-27 is produced in front of Sh 2-27. Noting that the distance of Zeta Ophiuchi is \(182^{+53}_{-30}\) pc and assuming the HII region is spherical, the near-side boundary of the HII region is located at a distance of \(164^{+48}_{-33}\) pc. Thus, the FDF toward Sh 2-27 is contributed by a nearby region within about 160 pc of the Sun. Fig. 31 is the result of Faraday tomography toward Sh 2-27. The bottom panel is the FDF at the center of Sh 2-27, and three peaks are seen there. On the other hand, the upper panel corresponds to an off-center LOS, and the peak at \(\phi\sim 6\) rad/m\({}^{2}\) seen in the bottom panel is absent. It was argued that, considering the typical gas density, ionization rate and strength of magnetic fields, Cold Neutral Medium (CNM) is a reasonable candidate for the polarization sources appearing in the FDF. In fact, from the 3-dimensional dust distribution map obtained by the STILISM project (STructuring by Inversion the Local Interstellar Medium) (Lallement et al., 2018), it was found that two neutral clouds exist in front of Sh 2-27. As an interpretation of the FDFs, considering the Local Bubble near the Solar System, which is Hot Ionized Medium (HIM), and these two neutral gas clouds to be the main sources of synchrotron radiation, Thomson et al. (2019) assigned the three peaks to them. Fig. 32 is a schematic view of the polarization distribution in physical space. Further, assuming the typical thermal-electron density of each phase, the LOS components of the magnetic field of the Local Bubble, the near neutral cloud and the far neutral cloud are estimated to be 2.5 \(\mu\)G, 15 \(\mu\)G and 30 \(\mu\)G, respectively. On the other hand, the top panel of Fig. 31 is interpreted as an LOS which intersects only the Local Bubble and the near neutral cloud, which is consistent with the FDF with only two peaks. As described above, by combining Faraday tomography of polarization spectra with other observations, it is possible to reconstruct the LOS distributions of the emission regions, magnetic fields and other phases of gases. 
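The field estimates quoted above follow from a Faraday depth once a thermal-electron density and a path length are assumed. A one-line version of that arithmetic (ours; it uses the standard uniform-slab conversion \(\phi\,[\mathrm{rad/m^{2}}]=0.812\,n_{e}\,[\mathrm{cm^{-3}}]\,B_{\parallel}\,[\mu\mathrm{G}]\,L\,[\mathrm{pc}]\), and the numbers below are illustrative placeholders rather than the values derived by Thomson et al. (2019)) reads:

```python
def b_parallel(phi_rad_m2, n_e_cm3, length_pc):
    """Mean line-of-sight magnetic field (microgauss) of a uniform slab with the
    given Faraday depth, thermal-electron density and path length."""
    return phi_rad_m2 / (0.812 * n_e_cm3 * length_pc)

# e.g. phi = 6 rad/m^2, n_e = 0.02 cm^-3, L = 100 pc  ->  roughly 3.7 microgauss
print(b_parallel(6.0, 0.02, 100.0))
```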
There have been many other attempts in this direction (Sakemi et al., 2018; Sakemi et al., 2018; Bracco et al., 2020; Turic et al., 2021) and physical interpretation through MHD simulations is also presented in Bracco et al. (2022). In the future, as surveys at other wavelengths progress, this kind of reconstruction will become possible on various scales more precisely. ## 7 Conclusion This review described Faraday tomography from its basic principles to physical interpretation, algorithms and applications. Synchrotron radiation, Faraday rotation, and depolarized waves, which have been major tools for the research of cosmic magnetic fields, form the basis for Faraday tomography, and it can be considered to be an advanced version of them. In other words, Faraday rotation and depolarization generally depend on the wavelength reflecting the the line-of-sight structure of the polarization source, and Faraday tomography reconstructs it from the polarization spectrum over a wide band. More specifically, it yields the Faraday dispersion function, which contains information about the cosmic-ray electron density and the component of magnetic fields perpendicular to the line of sight, which are associated with synchrotron radiation, and the thermal-electron density and the line-of-sight component of magnetic fields, which are associated with Faraday rotation. By combining 2-dimensional imaging and Faraday tomography, we can investigate the 3-dimensional struc Figure 31: FDFs toward two directions within Sh 2-27 (Thomson et al., 2019). The top and bottom panels correspond to LOSs which intersect one and two neutral clouds, respectively. Orange long-dashed curve, blue solid curve and green solid curve represent dirty FDF, cleaned FDF and CLEAN components, respectively. Horizontal dotted line is the threshold for RMCLEAN (\(\epsilon\) in section 5.1). Figure 32: Distribution of the LOS component of magnetic field and polarization intensity in physical space which was reconstructed by Faraday tomography and observation of neutral clouds (Thomson et al., 2019). There are the Local Bubble and two neutral clouds in front of Sh 2-27. ture of polarization sources. However, we saw that there are two major problems in applying Faraday tomography. One is the reconstruction of the Faraday dispersion function from the observed polarization spectrum. The Faraday dispersion function is mathematically equal to the Fourier transform of the polarization spectrum, and if the polarization spectrum can be obtained from negative infinity to positive infinity with respect to the square of the wavelength, the Fourier transform can be performed to obtain the complete Faraday dispersion function. However, the polarization spectrum is physically meaningful only in the region where the square of the wavelength is positive, and the observation is limited to a finite range there. Therefore, it is necessary to reconstruct the Faraday dispersion function from imperfect information as accurately as possible. We saw that, in addition to standard methods such as RM CLEAN and QU-fit, more advanced algorithms have been proposed. Considering the future development of observation facilities, the computational cost is also an important factor, and it is necessary to develop a method to reconstruct a physically reasonable Faraday dispersion function while suppressing the computational cost. Another problem is the physical interpretation of the Faraday dispersion function. 
The Faraday dispersion function is a function of the Faraday depth, which generally does not have one-to-one correspondence with the physical space so that the distribution of physical quantities in the physical space is not immediately deduced from the polarization intensity distribution in the Faraday depth space. In this review, we started with a simple model with a uniform magnetic field and polarization intensity and examined a toy model of a galaxy including turbulent magnetic fields. Then, it was shown that the Faraday dispersion functions can have complex shapes even for simple models. The Faraday dispersion functions of more realistic galaxy models are expected to be further complicated, and even if it could be reconstructed accurately from the observed polarization spectrum, its physical interpretation is not straightforward. In the future, if higher-sensitivity observations in a wider band including long wavelengths become possible, the resolution and maximum width in the Faraday depth space will improve, and the complex structure of the Faraday dispersion function will become visible. In preparation for this, both the approach of understanding the physical meaning of the Faraday dispersion function from a simple model and the approach of predicting it with a realistic model would be necessary. Faraday tomography has already been applied to various polarization sources and has produced fruitful results. It is useful to compare the obtained Faraday dispersion function with the theoretical model and to combine it with other observables. In the future, large-scale surveys with various wavelengths are scheduled to proceed, and Faraday tomography is expected to play a major role in it because it has information on the line-of-sight distribution of sources. ## Acknowledgements KT is partially supported by JSPS KAKENHI Grant Numbers 20H00180, 21H01130 and 21H04467, Bilateral Joint Research Projects of JSPS, and the ISM Cooperative Research Program (2021- ISMCRP-2017).
2310.07679
Biased dynamics of the miscible-immiscible quantum phase transition in a binary Bose-Einstein condensate
A quantum phase transition from the miscible to the immiscible phase of a quasi-one-dimensional binary Bose-Einstein condensate is driven by ramping down the coupling amplitude of its two hyperfine states. It results in a random pattern of spatial domains where the symmetry is broken separated by defects. In distinction to previous studies [J. Sabbatini et al., Phys. Rev. Lett. 107, 230402 (2011), New J. Phys. 14 095030 (2012)], we include nonzero detuning between the light field and the energy difference of the states, which provides a bias towards one of the states. Using the truncated Wigner method, we test the biased version of the quantum Kibble-Zurek mechanism [M. Rams et al., Phys. Rev. Lett. 123, 130603 (2019)] and observe a crossover to the adiabatic regime when the quench is sufficiently fast to dominate the effect of the bias. We verify a universal power law for the population imbalance in the nonadiabatic regime both at the critical point and by the end of the ramp. Shrinking and annihilation of domains of the unfavourable phase after the ramp, that is, already in the broken symmetry phase, enlarges the defect-free sections by the end of the ramp. The consequences of this phase-ordering effect can be captured by a phenomenological power law.
Francis A. Bayocboc Jr., Jacek Dziarmaga, Wojciech H. Zurek
2023-10-11T17:22:40Z
http://arxiv.org/abs/2310.07679v2
Biased dynamics of the miscible-immiscible quantum phase transition in a binary Bose-Einstein condensate ###### Abstract A quantum phase transition from the miscible to the immiscible phase of a quasi-1D binary Bose-Einstein condensate is driven by ramping down the coupling amplitude of its two hyperfine states. It results in a random pattern of spatial domains where the symmetry is broken separated by defects. In distinction to previous studies [J. Sabbatini et al., Phys. Rev. Lett. 107, 230402 (2011), New J. Phys. 14 095030], we include non-zero detuning between the light field and the energy difference of the states which provides a bias towards one of the states. Using the truncated Wigner method, we test the biased version of the quantum Kibble-Zurek mechanism [M. Rams et al., Phys. Rev. Lett. 123, 130603 (2019)] and observe a crossover to the adiabatic regime when the quench is sufficiently fast to dominate the effect of the bias. We verify a universal power law for the population imbalance in the non-adiabatic regime both at the critical point and by the end of the ramp. Shrinking and annihilation of domains of the unfavourable phase after the ramp--that is, already in the broken symmetry phase--enlarges the defect-free sections by the end of the ramp. The consequences of this phase ordering effect can be captured by a phenomenological power law. ## I Introduction Quantum phase transitions involve a dramatic change in the ground state of the system as a consequence of small changes to its Hamiltonian. They can be induced by adjusting an external parameter such as magnetic field. They need not happen at absolute zero temperature: it is sufficient that the temperature is sufficiently low for the measurable equilibrium properties of the system (e.g., correlations) to be dominated by the properties of the ground state. The miscibility-immiscibility transition in the Bose-Einstein condensate (BEC) is a good example of a quantum phase transition. For example, atoms of the condensate (such as \({}^{87}\)Rb) may start in a superposition of two hyperfine states. In the presence of the magnetic field, these states are miscible, so these atoms persist in superposition. However, as the field is lowered, hyperfine states of \({}^{87}\)Rb become immiscible, inducing symmetry breaking: different BEC fragments attempt to "choose" one or the other of these two hyperfine states. One can drive BEC atoms through such a miscibility-immiscibility transition at various rates by controlling an external parameter. The miscibility-immiscibility transition is in some ways reminiscent of the paramagnetic-ferromagnetic transition in the quantum Ising chains in a transverse field, in that the system is forced to make a choice between the two possible alternatives -- spins up or down in the ferromagnetic phase of the Ising model, and one or the other of the two hyperfine states in the immiscible phase of the BEC. We therefore expect that the Kibble-Zurek mechanism (KZM) that has been by now well established in the other phase transitions can be also studied in the miscibility-immiscibility transitions in the Bose-Einstein condensates [1; 2]. KZM originated from a scenario for topological defect formation in cosmological phase transitions driven by expanding Universe [9] where independent selection of broken symmetry vacua in causally disconnected regions can be expected to result in a mosaic of broken symmetry domains leading to topologically nontrivial configurations. 
However, for phase transitions in condensed matter systems, relativistic causality is not relevant. Thus, to relate the density of defects to the quench rate and the nature of the transition, a dynamical theory for the continuous phase transitions was proposed [4; 5]. It predicts the scaling of the defect density as a function of the quench rate by employing the universality class of the transition, i.e., its equilibrium critical exponents. It has been verified by numerous simulations [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] and experiments [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. Figure 1: **Miscible-immiscible transition. —** A condensate of atoms in an equal superposition of two hyperfine states is driven across a miscible-immiscible transition and separates into domains with different states. A typical size of the domains is proportional to the third root of the transition time [1; 2]. In this paper, we apply a bias favouring one of the states and study how it affects the outcome of the transition. Topological defects play a central role in these studies as they can survive inevitable dissipation and can be counted afterwards. The quantum version of KZM (QKZM) was developed for quenches across critical points in isolated quantum systems [43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. It was already tested by experiments [83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96]. Recent progress in Rydberg atoms' versatile emulation of quantum many-body systems [95; 96; 97; 98] and coherent D-Wave quantum annealers [99; 94] open the possibility to study the QKZM in a variety of two- and three-dimensional settings and/or to employ it as a test of quantumness of the hardware [99; 97; 98; 99; 99; 94]. The QKZM can be briefly outlined as follows. A smooth ramp crossing the critical point at time \(t=0\) can be linearized in its vicinity as \[\epsilon(t)=\frac{t}{\tau_{Q}}. \tag{1}\] Here, \(\epsilon\) is a dimensionless parameter in the Hamiltonian that measures the distance from the quantum critical point, and \(\tau_{Q}\) is called the quench time. Initially, the system is prepared in its ground state far from the critical point. At first, far from the critical point, the evolution adiabatically follows the ground state of the changing Hamiltonian. However, adiabaticity fails near the time \(-\hat{t}\) when the energy gap becomes comparable to the quench ramp rate: \[\Delta\propto|\epsilon|^{z\nu}\propto|\dot{\epsilon}/\epsilon|=1/|t| \tag{2}\] and the critical slowing down precludes such adiabatic following. This timescale is \[\hat{t}\propto\tau_{Q}^{z\nu/(1+z\nu)}, \tag{3}\] where \(z\) and \(\nu\) are the dynamical and the correlation length critical exponents, respectively. The correlation length at \(-\hat{t}\), \[\hat{\xi}\propto\tau_{Q}^{\nu/(1+z\nu)}, \tag{4}\] defines the size of the domains where fluctuations select the same broken symmetry ground state. Its inverse determines the resulting density of defects left after crossing the critical point: \[N_{d}\propto\hat{\xi}^{-1}. \tag{5}\] The two KZ scales are interrelated by \[\hat{t}\propto\hat{\xi}^{z}. \tag{6}\] Accordingly, in the KZM regime after \(-\hat{t}\), observables are expected to satisfy the KZM dynamical scaling hypothesis [100; 101; 102] with \(\hat{\xi}\) being the unique scale. 
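As a quick numerical illustration (our snippet, with all proportionality constants set to one), the power laws (3)-(5) can be evaluated for the mean-field exponents \(z=1\) and \(\nu=1/2\) relevant to this system; both \(\hat{t}\) and \(\hat{\xi}\) then grow as \(\tau_{Q}^{1/3}\), the "third root of the transition time" quoted in the caption of Fig. 1.

```python
# Kibble-Zurek scales of Eqs. (3)-(5) for z = 1, nu = 1/2; prefactors set to one,
# so only the power-law growth with the quench time is meaningful.
z, nu = 1.0, 0.5
for tau_Q in (10.0, 100.0, 1000.0):
    t_hat = tau_Q ** (z * nu / (1 + z * nu))   # freeze-out time, Eq. (3)
    xi_hat = tau_Q ** (nu / (1 + z * nu))      # frozen correlation length, Eq. (4)
    n_d = 1.0 / xi_hat                         # defect density, Eq. (5)
    print(f"tau_Q={tau_Q:7.1f}  t_hat~{t_hat:6.2f}  xi_hat~{xi_hat:6.2f}  N_d~{n_d:.3f}")
```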
For, say, a two-point observable \(\mathcal{O}_{r}\), where \(r\) is a distance between the two points, it reads \[\hat{\xi}^{\Delta\mathcal{O}}\langle\psi(t)|\mathcal{O}_{r}|\psi(t)\rangle=F_{ \mathcal{O}}\left(t/\hat{\xi}^{z},r/\hat{\xi}\right), \tag{7}\] where \(|\psi(t)\rangle\) is the state during the quench, \(\Delta_{\mathcal{O}}\) is the scaling dimension, and \(F_{\mathcal{O}}\) is a non-universal scaling function. ## II Quench with a bias The selection of the broken symmetry can be biased--and, simultaneously, the quantum transition can be made more adiabatic--by adding a bias term to the Hamiltonian that is linear in the order parameter with a bias strength \(b\)[70]. A similar mechanism was demonstrated experimentally for a classical thermodynamic transition in helium-3 [42]. In a quantum transition, the bias opens a finite energy gap at the critical point, \[\Delta_{b}\propto b^{z\nu/(\beta\delta)}, \tag{8}\] and makes the correlation length finite: \[\xi_{b}\propto\Delta_{b}^{-1/z}\propto b^{-\nu/(\beta\delta)}. \tag{9}\] Here \(\beta\) is the order parameter exponent in the ordered phase (\(M\propto\epsilon^{\beta}\), where \(M\) is the order parameter) and \(\delta\) is its exponent at the critical point (\(M\propto b^{1/\delta}\)). With \(\xi_{b}\) providing an additional length scale, the scaling hypothesis (7) generalizes to \[\hat{\xi}^{\Delta\mathcal{O}}\langle\psi(t)|\mathcal{O}_{r}|\psi(t)\rangle=F_{ \mathcal{O}}\left(t/\hat{\xi}^{z},\hat{\xi}/\xi_{b},r/\hat{\xi}\right). \tag{10}\] The extra argument, \(\hat{\xi}/\xi_{b}\), discriminates between the non-adiabatic and adiabatic regimes. When \(\hat{\xi}\gg\xi_{b}\), the energy gap (8) is strong enough to make the quench adiabatic all the way through the critical point. When \(\hat{\xi}\ll\xi_{b}\), then in first approximation, the bias can be ignored and the QKZM proceeds as usual. The freeze-out takes place far enough from the critical point for the weak bias to have a negligible effect. Beyond this first approximation, one can expect that, before \(-\hat{t}\), when the evolution is adiabatic, the order parameter in the ground state is proportional to \(|\epsilon|^{-\gamma}b\). Here \(|\epsilon|^{-\gamma}\) is proportional to the linear susceptibility and \(\gamma\) is the susceptibility exponent. At \(-\hat{t}\) it freezes out with a value proportional to \[\hat{M}\propto b\ \tau_{Q}^{\gamma/(1+z\nu)}. \tag{11}\] This is the order parameter when the system is crossing the critical point. It remains a non-universal system-specific question if this characteristic power law survives after the quench deep in the symmetry-broken phase. ## III System In this paper, we consider the effect of the bias on the miscibility-immiscibility transition in the same system as in Refs. [1] and [2]. The Hamiltonian for the binary BEC mixture in 1D reads [103; 104] \[\hat{H}=\hat{H}_{sp}+\hat{H}_{\rm int}+\hat{H}_{\rm c}. 
\tag{12}\] Here \(\hat{H}_{sp}\), \(\hat{H}_{\rm int}\), and \(\hat{H}_{\rm cpl}\) are the single-particle, interaction, and coupling Hamiltonians, respectively, defined as \[\hat{H}_{sp} = \int dx\sum_{i=1}^{2}\hat{\psi}_{i}^{\dagger}(x)\left[-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}-\mu+V(x)\right]\hat{\psi}_{i}(x), \tag{13}\] \[\hat{H}_{\rm int} = \int dx\left\{\sum_{i=1}^{2}\frac{g_{ii}}{2}\hat{\psi}_{i}^{\dagger}(x)\hat{\psi}_{i}^{\dagger}(x)\hat{\psi}_{i}(x)\hat{\psi}_{i}(x)+g_{12}\hat{\psi}_{1}^{\dagger}(x)\hat{\psi}_{2}^{\dagger}(x)\hat{\psi}_{2}(x)\hat{\psi}_{1}(x)\right\}, \tag{14}\] \[\hat{H}_{\rm cpl} = \int dx\left\{\frac{\hbar b}{2}\left[\hat{\psi}_{2}^{\dagger}(x)\hat{\psi}_{2}(x)-\hat{\psi}_{1}^{\dagger}(x)\hat{\psi}_{1}(x)\right]-\hbar\Omega(t)\left[\hat{\psi}_{1}^{\dagger}(x)\hat{\psi}_{2}(x)+\hat{\psi}_{2}^{\dagger}(x)\hat{\psi}_{1}(x)\right]\right\}. \tag{15}\] Here \(\hat{\psi}_{i}(x)\) is the Bose field operator that annihilates a particle in hyperfine state \(i\) at position \(x\). It obeys \([\hat{\psi}_{i}(x),\hat{\psi}_{j}^{\dagger}(x^{\prime})]=\delta_{ij}\delta(x-x^{\prime})\). \(g_{ij}\) are 1D interaction constants obtained by integration from a 3D Hamiltonian where the transverse state is tightly confined in the transverse ground state by a transverse harmonic potential with frequency \(\omega_{\perp}\): \(g_{ij}=2\hbar^{2}a_{ij}/(ma_{\perp}^{2})\), where \(a_{ij}\) is the 3D \(s\)-wave scattering length and \(a_{\perp}=\sqrt{\hbar/m\omega_{\perp}}\) is the transverse harmonic oscillator length. In the coupling Hamiltonian, \(\Omega(t)\) is the coupling strength and \(b\) is the detuning of the light field from the energy difference of the states. The detuning is the bias that favours one of the two components against the other. In the absence of the bias, \(b=0\), the ground state of the model undergoes a continuous phase transition between the miscible phase, when \(\Omega>\Omega_{c}\), and the immiscible one, when \(\Omega<\Omega_{c}\). At the mean-field level, in the former phase each particle is in a symmetric superposition of the two hyperfine states and in the latter there are two symmetry-broken ground states where the superposition is tilted in favour of one of the two hyperfine states. In the following we assume \(g_{11}\approx g_{22}\equiv g\), in which case the critical coupling is \[\hbar\Omega_{c}=\frac{1}{2}(g_{12}-g)\rho \tag{16}\] with \(\rho\) being the total particle density [1; 2]. The linear ramp (1) is implemented as \[\Omega(t)=\Omega_{c}\left[1-\epsilon(t)\right] \tag{17}\] starting in the ground state at \(2\Omega_{c}\) and stopping after \(\Omega\) is brought down to zero. The rest of the paper is organized as follows. In Sec. IV we study model (12) in order to extract all relevant mean-field critical exponents for the miscible-immiscible quantum phase transition. In Sec. V we briefly outline the truncated Wigner approximation [105; 106; 107; 108] and anticipate potential problems with the ultraviolet divergence of quantum fluctuations represented by classical ones. The biased QKZM is considered in Secs. VI and VII. In Sec. VI we focus on the order parameter scaling both when the ramp is crossing the critical point and when it is terminated deep in the immiscible phase. In Sec. VII the kinks/defects are counted as a function of the bias driving the QKZM towards a defect-free regime. In Sec. VIII possible experimental realizations of the model are discussed. Finally, we conclude in Sec. IX. 
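For orientation, Eqs. (16)-(17) can be evaluated for the \({}^{87}\)Rb parameters used later in Sec. V. The convenience script below (ours; SI units, and the printed value is only indicative) also encodes the linear ramp.

```python
import numpy as np

hbar = 1.054571817e-34
m_rb = 86.909 * 1.66053907e-27          # mass of 87Rb [kg]
omega_perp = 2 * np.pi * 500.0          # transverse trap frequency [rad/s]
a11 = a22 = 1.325e-9                    # 3D s-wave scattering lengths [m]
a12 = 2 * a11
L, N = 96e-6, 2e4                       # ring circumference [m], atom number
rho = N / L                             # total 1D density [1/m]

g = 2 * hbar * omega_perp * a11         # 1D couplings g_ij = 2 hbar omega_perp a_ij
g12 = 2 * hbar * omega_perp * a12
Omega_c = (g12 - g) * rho / (2 * hbar)  # critical coupling, Eq. (16) [rad/s]

def Omega(t, tau_Q):
    """Linear ramp of Eq. (17) with eps(t) = t/tau_Q: starts at 2*Omega_c at
    t = -tau_Q and stops at Omega = 0 at t = +tau_Q."""
    return Omega_c * (1.0 - t / tau_Q)

print(f"Omega_c / 2 pi ~ {Omega_c / (2 * np.pi):.1f} Hz")
```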
## IV Model properties In the framework of the truncated Wigner approximation (TWA) [105; 106; 107; 108] the operators in the Hamiltonian (12) are replaced by classical fields \(\psi_{i}\). In a homogeneous system, \(V(x)=0\), the uniform ground state can be parameterized as \[\psi_{1}^{(0)} = \sqrt{\rho}\cos\left(\frac{1}{4}\pi-\alpha\right),\] \[\psi_{2}^{(0)} = \sqrt{\rho}\sin\left(\frac{1}{4}\pi-\alpha\right). \tag{18}\] Here \(\rho\) is the total density of particles and \(\alpha\) plays a similar role as the order parameter for the miscible-immiscible transition that can be defined as a population imbalance: \[M=\frac{\rho_{1}-\rho_{2}}{\rho_{1}+\rho_{2}}. \tag{19}\] Here \(\rho_{i}=|\psi_{i}|^{2}\). In the ground state (18) we have \(M=\sin 2\alpha\). The ground state minimizes the energy density \[\varepsilon(\rho,\alpha) = -\mu\rho-\frac{1}{2}\hbar b\rho\sin 2\alpha-\hbar\Omega\rho\cos 2\alpha+ \tag{20}\] \[\frac{1}{2}g\rho^{2}+\frac{1}{4}(g_{12}-g)\rho^{2}\cos^{2}2\alpha.\] Here we assumed \(g_{11}=g_{22}\equiv g\) which is a good approximation [1; 2]. A minimization with respect to \(\rho\) yields a compact formula for the chemical potential, \[\mu=\frac{1}{2}\rho\left(g_{12}+g\right)-\frac{\hbar\Omega}{\cos 2\alpha}, \tag{21}\] and with respect to \(\alpha\) an equation for \(b\): \[b=2\left[\frac{\Omega}{\cos 2\alpha}-\Omega_{c}\right]\sin 2\alpha. \tag{22}\] Here \(\Omega_{c}\) is the critical value of \(\Omega\) in (16). The Ginzburg expansion of the energy (20) near \(\Omega_{c}\) in powers of \(\alpha\) yields \[\varepsilon=\varepsilon_{0}+\hbar\rho\left[b\cdot\alpha+2\left(\Omega-\Omega_{c} \right)\cdot\alpha^{2}+\Omega_{c}\cdot\alpha^{4}\right]. \tag{23}\] For zero bias, \(b=0\), the symmetric \(\alpha=0\) is a solution for any \(\Omega\) but it is unstable in the immiscible phase below \(\Omega_{c}\). Above \(\Omega_{c}\), when the quartic term is neglected for small enough \(b\), there is an approximate solution \[\alpha\approx\frac{b}{4\left(\Omega-\Omega_{c}\right)}, \tag{24}\] that diverges at the transition with the susceptibility exponent \(\gamma=1\). The quartic term prevents this divergence and allows the order parameter at \(\Omega=\Omega_{c}\) to remain finite: \[\alpha_{c}=\left(\frac{b}{4\Omega_{c}}\right)^{1/3} \tag{25}\] with the critical exponent \(\delta=3\). The expansion (23) provides also an insight into small Bogoliubov fluctuations around the uniform ground state solution. For \(b=0\) and when the critical point is approached from above, the quadratic term in (23) makes the frequency of small oscillations with wave vector \(k=0\) around the ground state, \(\alpha=0\), decrease as \((\Omega-\Omega_{c})^{1/2}\). This power law implies that the critical exponents satisfy \(z\nu=1/2\). For a nonzero bias and at \(\Omega=\Omega_{c}\), small harmonic oscillations around (25) have a frequency \(\propto(b/\Omega_{c})^{1/3}\). The exponent \(1/3\), that stands for \(z\nu/\beta/\delta\), implies \(\beta=1/2\). Finally, a linear dispersion, \(\omega\propto k\), at the critical point implies \(z=1\) and, consequently, \(\nu=1/2\). This way we obtained all critical exponents that are relevant for the biased KZM. They are the mean-field exponents for the Ising universality class. For a quick reference, we also list here the exact exponents that should be valid asymptotically very close to the critical point: \(z=1\), \(\nu=1\), \(\gamma=7/4\), \(\delta=15\), and \(\beta=1/8\). In principle, they could be probed by QKZM in the limit of very slow quenches. 
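The mean-field exponents quoted above can be checked numerically by solving Eq. (22) for the ground-state tilt angle. The sketch below (ours; it works in units of \(\Omega_{c}\) and probes only the exponents, not the prefactors) verifies \(\delta=3\) at the critical point and \(\gamma=1\) above it.

```python
import numpy as np
from scipy.optimize import brentq

def alpha_ground_state(b, Omega, Omega_c=1.0):
    """Ground-state tilt angle: solves Eq. (22),
    b = 2*(Omega/cos(2a) - Omega_c)*sin(2a), for a in (0, pi/4), with b > 0."""
    f = lambda a: 2.0 * (Omega / np.cos(2 * a) - Omega_c) * np.sin(2 * a) - b
    return brentq(f, 1e-14, np.pi / 4 - 1e-9)

# delta = 3: at Omega = Omega_c the response scales as b**(1/3), so
# alpha_c / b**(1/3) should approach a constant as b -> 0.
for b in (1e-2, 1e-4, 1e-6, 1e-8):
    a_c = alpha_ground_state(b, Omega=1.0)
    print(f"b={b:.0e}  alpha_c={a_c:.6f}  alpha_c/b^(1/3)={a_c / b ** (1 / 3):.4f}")

# gamma = 1: above the transition the linear response follows Eq. (24),
# alpha ~ b / (4*(Omega - Omega_c)).
for eps in (0.1, 0.01):
    chi = alpha_ground_state(1e-8, Omega=1.0 + eps) / 1e-8
    print(f"eps={eps:g}  alpha/b={chi:.1f}  1/(4*eps)={1 / (4 * eps):.1f}")
```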
## V Truncated Wigner approximation In the truncated Wigner approximation [105; 106; 107; 108] the two fields, \(\psi_{i}(t,x)\), evolve according to the classical coupled Gross-Pitaevski equations (GPE) \[i\hbar\frac{\partial\psi_{i}}{\partial t} = \left[-\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x^{2}}- \mu+V(x)\right]\psi_{i}+ \tag{26}\] \[(-1)^{i}\frac{\hbar b}{2}\ \psi_{i}-\hbar\Omega(t)\ \psi_{3-i}+\] \[\left[g_{ii}|\psi_{i}|^{2}+g_{12}|\psi_{3-i}|^{2}\right]\psi_{i}.\] The simulation starts from the ground state above the critical point at \(\Omega=2\Omega_{c}\) and follows the ramp (17) down to \(\Omega=0\) where the ramp stops. The initial ground state is dressed with random fluctuations as \[\psi_{i}(x,t_{\rm in})=\psi_{i}^{(0)}+\sum_{n}[\eta_{n}u_{i,n}(x)+\eta_{n}^{* }v_{i,n}^{*}(x)]. \tag{27}\] Here index \(n\) numbers stationary Bogoliubov modes around the initial state and \(\eta_{n}\) are complex Gaussian noises with correlations \(\overline{\eta_{n}^{*}\eta_{m}}=\delta_{nm}/2\). In the TWA framework, they represent quantum fluctuations in the initial ground state. Each random initial state is evolved with the GPE (26). Expectation values of observables are estimated by averaging over the random initial noises. In the following the error bars of the estimates account for the standard error of the mean and indicate a 95% confidence interval. The representability of the quantum fluctuations by the classical ones in the TWA has inevitable limitations. For instance, the average density in (27) is \[\rho_{i}=\left|\psi_{i}^{(0)}\right|^{2}+\sum_{n}\frac{1}{2}\left(|u_{i,n}(x)| ^{2}+|v_{i,n}(x)|^{2}\right) \tag{28}\] while the correct formula for a Bogoliubov vacuum reads \[\rho_{i}=\left|\psi_{i}^{(0)}\right|^{2}+\sum_{n}|v_{i,n}(x)|^{2}. \tag{29}\] As in our periodic boundary conditions, the Bogoliubov modes are momentum eigenstates, \[u_{i,n}(x)=U_{i,n}e^{ik_{n}x},\ \ v_{i,n}(x)=V_{i,n}e^{ik_{n}x}, \tag{30}\] for every \(n\) we have \(|u_{i,n}(x)|^{2}\propto|v_{i,n}(x)|^{2}\). The discrepancy between (28) and (29) is negligible for low-frequency modes, with a wavelength much longer than the healing length, where \(|U_{i,n}|\approx|V_{i,n}|\). However, for high frequency modes, where \(|U_{i,n}|\approx 1\) and \(|V_{i,n}|\ll 1\), there is a dramatic difference. As their coefficients \(|V_{i,n}|\) become negligible with increasing frequency they also have a negligible contribution to the exact formula (29) but at the same time, as their \(|U_{i,n}|\) become close to \(1\), there is an ultra-violet (UV) divergence in the TWA approximation (28). At first sight, the error could be mitigated just by truncating the high-frequency modes from the expansion (27). The question of where exactly to truncate is complicated by the fact that the healing length, and thus the cut-off, depends on \(\Omega\). It is small at the initial \(2\Omega_{c}\) and large near the critical point, where it grows up to \(\xi_{b}\propto\left(b/\Omega_{c}\right)^{-1/3}\). In the adiabatic regime, where \(\xi_{b}\ll\hat{\xi}\), all wavelengths evolve adiabatically and it is \(\xi_{b}\) that sets the cut-off scale at the critical point. In the complementary non-adiabatic regime, where \(\hat{\xi}\ll\xi_{b}\), wavelengths shorter than \(\hat{\xi}\) evolve adiabatically and, as they are also much shorter than \(\xi_{b}\), they need to be truncated at the critical point. 
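A single truncated-Wigner trajectory obeying Eq. (26) can be propagated with a standard split-step Fourier scheme. The sketch below (ours) is schematic rather than a production integrator: it works in dimensionless units with illustrative parameters, and it replaces the Bogoliubov noise of Eq. (27) by a crude white-noise stand-in.

```python
import numpy as np

hbar, m = 1.0, 1.0                          # dimensionless units
L, Mgrid = 96.0, 512                        # ring length and number of grid points
k = 2.0 * np.pi * np.fft.fftfreq(Mgrid, d=L / Mgrid)
g, g12, rho = 1.0, 2.0, 10.0                # illustrative couplings and mean density
Omega_c = (g12 - g) * rho / (2.0 * hbar)    # Eq. (16)
mu = 0.5 * (g12 + g) * rho - 2.0 * hbar * Omega_c   # Eq. (21) at the starting point
b = 1e-3 * Omega_c                          # weak bias favouring component 1
tau_Q, dt = 20.0 / Omega_c, 1e-3 / Omega_c

rng = np.random.default_rng(1)
psi = np.sqrt(rho / 2.0) * np.ones((2, Mgrid), dtype=complex)
psi += 0.01 * np.sqrt(rho / 2.0) * (rng.normal(size=(2, Mgrid))
                                    + 1j * rng.normal(size=(2, Mgrid)))

kin = np.exp(-1j * hbar * k ** 2 / (2.0 * m) * dt)   # full kinetic step in k space

def local_half_step(psi, Omega):
    """Half a time step of the local terms of Eq. (26): mean field, chemical
    potential and detuning as diagonal phases, then the Rabi coupling
    -hbar*Omega*sigma_x applied as an exact 2x2 rotation."""
    n1, n2 = np.abs(psi[0]) ** 2, np.abs(psi[1]) ** 2
    ph1 = (-mu - hbar * b / 2.0 + g * n1 + g12 * n2) * dt / (2.0 * hbar)
    ph2 = (-mu + hbar * b / 2.0 + g * n2 + g12 * n1) * dt / (2.0 * hbar)
    p1, p2 = psi[0] * np.exp(-1j * ph1), psi[1] * np.exp(-1j * ph2)
    th = Omega * dt / 2.0
    return np.array([np.cos(th) * p1 + 1j * np.sin(th) * p2,
                     1j * np.sin(th) * p1 + np.cos(th) * p2])

t = -tau_Q                                   # ramp Omega(t) = Omega_c*(1 - t/tau_Q)
while t < tau_Q:
    Om = Omega_c * (1.0 - t / tau_Q)
    psi = local_half_step(psi, Om)
    psi = np.fft.ifft(kin * np.fft.fft(psi, axis=1), axis=1)
    psi = local_half_step(psi, Om)
    t += dt

imbalance = (np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2) / rho   # local order parameter
print(imbalance.mean())
```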
Wavelengths much longer than \(\hat{\xi}\) freeze out near \(-\hat{t}\), where \(\hat{\xi}\) is the healing length, and thus they do not require the truncation anywhere between \(-\hat{t}\) and the critical point. Therefore, it is \(\hat{\xi}\) that sets the UV cut-off in the non-adiabatic regime. In the following, we avoid the truncation but we have to bear the above discussion in mind. For our simulations, we consider \({}^{87}\)Rb atoms in a ring trap of circumference \(L=96\,\mu\)m with transverse trapping frequency \(\omega_{\perp}=2\pi\times 500\,\)Hz and total number of particles \(N_{\rm tot}=N_{1}+N_{2}=2\times 10^{4}\). We take the 3D \(s\)-wave scattering lengths to be \(a_{11}=a_{22}=a_{12}/2=1.325\,\)nm (from which the interaction strengths \(g_{ij}\) can be calculated via \(g_{ij}=2\hbar\omega_{\perp}a_{ij}\), in the absence of confinement induced resonances [109]). With these parameters, all energy scales are smaller than the energy of the first excited state of the transverse harmonic trap, _e.g._ \(\mu_{0}\approx 9.15\times 10^{-32}\,{\rm J}<\hbar\omega_{\perp}\), and our system is well within the one-dimensional regime [110]. Here, \(\mu_{0}\) is the chemical potential of the two components \(\mu_{1}=\mu_{2}=\mu_{0}\), when both have the same number of particles and \(b=0\). Once we introduce a nonzero bias \(b\), the chemical potentials of the two components are given by \(\mu_{1}=\mu_{0}+\hbar b/2\) and \(\mu_{2}=\mu_{0}-\hbar b/2\). The average of the chemical potentials \(\mu=(\mu_{1}+\mu_{2})/2=\mu_{0}\) is still a constant. Furthermore, the large ratio between the total number of particles \(N\) and the number of simulated Bogoliubov modes \(M_{B}=1024\) ensures the validity of the TWA [105]. The parameters presented in the preceding paragraph correspond to the regime where the two components are strongly immiscible with \(\Delta\equiv g_{11}g_{22}/g_{12}^{2}=0.25\). These parameters are chosen such that the system spin healing length \(\xi_{\rm s}\equiv h/\sqrt{2m\rho g_{\rm s}}\), with \(g_{\rm s}=(2g_{12}-g_{11}-g_{22})/2\), is relatively short and leads to both a large number of domains and their straightforward identification. ## VI Order parameter scaling According to (25), in the ground state at the critical point, the order parameter's response to a weak bias is proportional to \(\left(b/\Omega_{c}\right)^{1/\delta}\). Assuming that this parameter sets a scale for magnetization \(M\), we can formulate a dynamical scaling hypothesis for the order parameter during the quench between \(\pm\hat{t}\) as [70] \[\left(b/\Omega_{c}\right)^{-1/\delta}M(t)=F_{M}\left[(t-t_{c})/\hat{\xi}^{z}, b\tau_{Q}^{\beta\delta/(1+z\nu)}\right]. \tag{31}\] Here \(F_{M}\) is a non-universal scaling function. Its first argument is the scaled time measured with respect to time \(t_{c}\) when the critical point is crossed by the ramp. The second one is proportional to \(\hat{\xi}/\xi_{b}\) in (10). In particular, the hypothesis can be probed at the critical point, \(t=t_{c}\), when it predicts that plots for different \(b\) of \(\left(b/\Omega_{c}\right)^{-1/\delta}M(t_{c})\) in function of \(x=b\tau_{Q}^{\beta\delta/(1+z\nu)}\) collapse to a common scaling function: \[\left(b/\Omega_{c}\right)^{-1/\delta}M(t_{c})=f_{M}\left(b\tau_{Q}^{\beta \delta/(1+z\nu)}\right). \tag{32}\] Here \(f_{M}(x)\equiv F_{M}[0,x]\).
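Before analysing this scaling function, it is worth noting that the derived quantities quoted in Sec. V, as well as the exponent combinations entering (31)-(32), are easy to reproduce from the formulas above. The sketch below is such a cross-check; the explicit expression for \(\Omega_{c}\) and the evaluation of \(\mu_{0}\) at the initial \(\Omega=2\Omega_{c}\) with \(\alpha=0\) in Eq. (21) are our assumptions, consistent with Eqs. (21)-(23) but not spelled out in the text.

```python
# Cross-check of the physical parameters of Sec. V and of the mean-field
# exponent combinations entering the scaling hypothesis (31)-(32).
import numpy as np

hbar = 1.054571817e-34            # J s
h = 2.0 * np.pi * hbar
m = 86.909 * 1.66053907e-27       # 87Rb mass in kg
omega_perp = 2.0 * np.pi * 500.0  # transverse trap frequency, rad/s
a11 = a22 = 1.325e-9              # 3D scattering lengths in m
a12 = 2.0 * a11
L, N = 96e-6, 2.0e4               # ring circumference (m) and atom number

g = 2.0 * hbar * omega_perp * a11         # g_ij = 2 hbar omega_perp a_ij
g12 = 2.0 * hbar * omega_perp * a12
rho = N / L                                # 1D density

print("Delta =", g * g / g12**2)                                # -> 0.25
Omega_c = (g12 - g) * rho / (2.0 * hbar)                         # from the expansion (23)
print("Omega_c =", Omega_c, "rad/s")
mu0 = 0.5 * rho * (g12 + g) - hbar * (2.0 * Omega_c)             # Eq. (21) at alpha = 0, Omega = 2 Omega_c
print("mu_0 =", mu0, "J; hbar*omega_perp =", hbar * omega_perp)  # ~9.15e-32 J < hbar*omega_perp
g_s = (2.0 * g12 - g - g) / 2.0
print("xi_s =", h / np.sqrt(2.0 * m * rho * g_s), "m")           # spin healing length << L

z, nu, beta, gamma, delta = 1.0, 0.5, 0.5, 1.0, 3.0              # mean-field exponents
print("beta*delta/(1+z*nu) =", beta * delta / (1.0 + z * nu))    # -> 1, so the second argument is b*tau_Q
print("gamma/(1+z*nu)      =", gamma / (1.0 + z * nu))           # -> 2/3, the small-x asymptote below
print("nu/(1+z*nu)         =", nu / (1.0 + z * nu))              # -> 1/3, hence N_d ~ xi_hat^-1 ~ tau_Q^(-1/3)
```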
The scaling function \(f_{M}\) saturates at a constant value in the adiabatic regime, \(x\gg 1\), where \(M(t_{c})\) becomes equal to the order parameter in the ground state at the critical point which is \(\propto\left(b/\Omega_{c}\right)^{1/\delta}\). With \(\delta=3\) the mean-field equation (25) implies \[f_{M}(x\gg 1)\approx 2^{1/3}. \tag{33}\] In the complementary non-adiabatic regime, \(x\ll 1\), the order parameter is expected to freeze out at \(-\hat{t}\), where it is proportional to \(b\hat{\epsilon}^{-\gamma}\propto b\hat{\xi}^{\gamma/\nu}\propto b\tau_{Q}^{\gamma/(1+z\nu)}\), and survive to the critical point as \(M(t_{c})\propto b\tau_{Q}^{\gamma/(1+z\nu)}.\) Using the scaling relation \(\gamma=\beta(\delta-1)\), we can predict [70] \[f_{M}(x\ll 1)\propto\left(b\tau_{Q}\right)^{\gamma\beta^{-1}\delta^{-1}}=\left(b\tau_{Q}\right)^{2/3}\equiv x^{2/3}. \tag{34}\] In the last equality, we assumed the mean-field exponents. Figure 2: **Critical order parameter scaling. —** The order parameter when the ramp is crossing the critical point, at \(\Omega=\Omega_{c}\), in function of scaled quench time for different biases. In (a) fluctuations \(\eta_{n}\) in (27) were set to zero, resulting in a perfect collapse in accordance with the scaling hypothesis in (32) and (34). We can also see the adiabatic saturation for \(b\tau_{Q}\gg 1\). In (b) the same but with the classical fluctuations in (27) and their unphysical UV divergence. The collapse predicted in (32) and the asymptotes of the scaling function in (33) and (34) are tested in Fig. 2. In its top panel, the initial fluctuations \(\eta_{n}\) in (27) were set to zero in order to prevent the unphysical UV divergence in (28) from obscuring the physical results. With the bias, the system at the critical point remains stable against small fluctuations that add just a small quantum correction in (29). The top panel demonstrates a perfect collapse interpolating between the predicted asymptotes. The initial fluctuations in (27) were included in the bottom panel of Fig. 2 showing magnetization averaged over random \(\eta_{n}\). The UV-divergent fluctuations make plots depart from the collapsed plots in the top panel. The departure originates from the high-frequency Bogoliubov modes whose adiabaticity depends only on \(\tau_{Q}\) while their mode eigenfunctions show a linear response to the bias. Accordingly, for each bias, the departure begins at a value of \(\tau_{Q}\) that is independent of the bias and at a value of the order parameter that is proportional to \(b\). Whereas at \(\Omega_{c}\) the exact quantum fluctuations can be just neglected, in the following evolution below \(\Omega_{c}\) long-wavelength Bogoliubov modes trigger inhomogeneities that survive in the symmetry-broken phase. The effect of the high-frequency fluctuations on the inhomogeneous pattern is averaged to zero on the time scale \(\hat{t}\) that it takes the inhomogeneities to develop. In this respect, the high-frequency modes do not need to be truncated by hand. The average order parameter is one of the characteristics that can probe the final state in the immiscible phase at the end of the ramp. Figure 3a shows that the final \(M\) collapses in the non-adiabatic regime for small \(b\tau_{Q}\). The collapse cannot extend to the complementary adiabatic regime, that is, for large \(b\tau_{Q}\), because the order parameter saturates there at \(1\) instead of remaining proportional to \(\left(b/\Omega_{c}\right)^{1/3}\). At the end of the ramp, all particles end in the favourable component \(1\).
The saturation at \(M=1\) does not preclude a collapse under a suitably modified scaling hypothesis. One may notice that, in Fig. 3a, the asymptote \(\propto(b\tau_{Q})^{2/3}\) (valid for small \(b\tau_{Q}\)) crosses the saturation level \(\propto b^{-1/3}\) achieved for large \(b\tau_{Q}\) at \(\tau_{Q}\propto b^{-3/2}\). Therefore, a simultaneous collapse in both regimes can be engineered by plotting the unscaled order parameter \(M\) in function of \(b^{3/2}\tau_{Q}\), see Fig. 3(b). In the final state it is \(\tau_{Q}\propto b^{-3/2}\), in place of \(\tau_{Q}\propto b^{-1}\), that marks the actual crossover to the defect-free regime. In the next section, we will see the same crossover for the density of defects in the final state. The final scaling is predicted by crossing the two asymptotes. The saturation of the order parameter at \(M=1\) for large enough \(\tau_{Q}\) is trivially true. The asymptote \(\propto(b\tau_{Q})^{2/3}\) for fast quenches is predicted by KZM within \(\pm\hat{t}\) but it does not need to survive until the end of the ramp at \(\Omega=0\). However, to a first approximation one can argue that a domain pattern that forms at \(+\hat{t}\) survives until the end of the ramp and, therefore, the average order parameter determined by the proportion of the two immiscible phases survives as well. Figure 3: **Final order parameter scaling.** — In (a) scaled order parameter at the end of the ramp, \(\Omega=0\), in function of scaled quench time \(b\tau_{Q}\) for different biases. For small \(b\tau_{Q}\) the plots collapse in accordance with the scaling hypothesis (34). In the adiabatic regime, for large \(b\tau_{Q}\), the order parameter saturates at \(1\) for all biases. In (b) the same data as in (a) but presented as the order parameter in function of \(b^{3/2}\tau_{Q}\). This scaling makes the plots collapse for both small and large \(b^{3/2}\tau_{Q}\). ## VII Density of kinks The fluctuations in (27) are essential for breaking the translational invariance and the formation of kinks/defects separating domains of different immiscible phases. In the usual way [70; 42], one can argue that the density of defects, \(N_{d}\), should satisfy a scaling hypothesis: \[N_{d}=\hat{\xi}^{-1}F_{N}\left[(t-t_{c})/\hat{\xi}^{z},b\tau_{Q}^{\beta\delta/(1+z\nu)}\right]. \tag{35}\] This scaling hypothesis is expected to hold in the KZ regime extending up to \(\hat{t}\) where, unfortunately, counting defects is still obscured by relatively large fluctuations. If we want to avoid sophisticated filtering of the fluctuations, which would require extra theorizing and smuggling in some of the KZ assumptions, the counting has to be postponed until deep in the immiscible phase where the kinks have large magnitudes as compared to the quantum noise but where we can also anticipate some discrepancies with respect to the scaling hypothesis. We begin with zero bias, the case considered before in Refs. [1] and [2], when the defect density \(N_{d}\propto\hat{\xi}^{-1}\), with a proportionality factor \(F_{N}\left[(t-t_{c})/\hat{\xi}^{z},0\right]\) that depends only on the scaled time. Numerical results deep in the immiscible phase are shown in Fig. 4. They demonstrate that \(N_{d}\propto\hat{\xi}^{-1}\propto\tau_{Q}^{-1/3}\) is consistent with the data for the mean-field critical exponents and significantly different from \(N_{d}\propto\tau_{Q}^{-1/2}\) predicted with the exact ones. For a weak bias the scaling function in (35) has two arguments.
The second one, equal to \(b\tau_{Q}\) for the mean-field exponents, discriminates between the non-adiabatic and the adiabatic regime for its small and large values, respectively. As in the case without bias, the kinks are counted deep in the immiscible phase. Figure 5(a) shows their scaled density in function of \(b\tau_{Q}\) for different bias strengths. Their collapse is not perfect, in a way suggesting that with increasing bias the final state becomes defect-free for shorter \(\tau_{Q}\) than suggested by the crossover value \(b\tau_{Q}\approx 1\). The bias seems to suppress kinks not only by making the transition itself more adiabatic but also by favouring their annihilation between \(+\hat{t}\) and the time of counting. Indeed, examples of defect annihilation are shown in Figs. 6 and 7. In both examples, a minority domain disappears together with its two delimiting kinks. In a similar way as for the final order parameter, and for the same reason, the collapse of the final kink density improves when the scaled density \(\hat{\xi}N_{d}\) is plotted in function of \(b^{3/2}\tau_{Q}\) instead of \(b\tau_{Q}\), see Fig. 5(b). The disappearance of small domains of the unfavourable phase reduces the number of kinks and, at the same time, brings the average order parameter closer to one. Figure 4: **Defect density without bias.** — Average number of defects in function of \(\tau_{Q}\). For intermediate quench times, the slope is \(-1/3\), consistent with \(N_{d}\propto\hat{\xi}^{-1}\) with mean-field exponents. With exact exponents the slope \(-1/2\) would be significantly different. For large \(\tau_{Q}\), where the number of kinks goes down towards \(2\), the curve begins to cross over to an exponential decay as the finite size of the system makes the transition adiabatic thanks to a finite gap in the spinon excitation spectrum. For small \(\tau_{Q}\), kinks are overcounted as they are often difficult to distinguish from extra zero crossings due to strong fluctuations. Figure 5: **Defect density with bias.** — In (a) scaled number of defects in function of \(b\tau_{Q}^{\beta\delta/(1+z\nu)}\) for the mean-field critical exponents. The defects were counted deep in the immiscible phase. Their annihilation between \(+\hat{t}\) and the counting, which is shown in Figs. 6 and 7, explains why the collapse is not perfect. In (b) the same scaled defect density but in function of \(b^{3/2}\tau_{Q}\), similarly as in Fig. 3, which improves the collapse. ## VIII Experimental Feasibility Two-component condensates have been experimentally realised using different atomic species [111; 112], atomic isotopes [113], or spin states [114; 115]. The results presented in this paper correspond to a system of two strongly immiscible components with \(\Delta\equiv g_{11}g_{22}/g_{12}^{2}=0.25\). This corresponds to a spin healing length \(\xi_{\rm s}\) that is relatively short, which allowed for a straightforward identification of the domains. In particular, the number of domains is obtained from our simulations by calculating the number of zero crossings of \(M=(\rho_{1}-\rho_{2})/(\rho_{1}+\rho_{2})\). In experiments, \(M\) can be easily extracted by performing absorption imaging of the two components. The two hyperfine states are separated in energy by about 1000 times the linewidth of the optical transition used to probe them. Hence, one can take an absorption image of one component and then immediately image the other component with light of a different frequency.
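In the simulations, the domain counting mentioned above reduces to locating the zero crossings of \(M(x)\). A minimal sketch is given below; the optional smoothing is an illustrative choice to tame the overcounting at fast quenches mentioned in the caption of Fig. 4, not the exact procedure used for the published figures.

```python
# Count kinks as zero crossings of M(x) = (rho1 - rho2) / (rho1 + rho2)
# on a periodic ring.
import numpy as np

def count_kinks(psi1, psi2, smooth_cells=0):
    rho1, rho2 = np.abs(psi1)**2, np.abs(psi2)**2
    M = (rho1 - rho2) / (rho1 + rho2)
    if smooth_cells > 0:                        # crude moving average against noise
        kernel = np.ones(smooth_cells) / smooth_cells
        M = np.convolve(M, kernel, mode="same")
    signs = np.sign(M)
    # a kink sits between neighbouring grid points with opposite signs of M
    return int(np.count_nonzero(signs != np.roll(signs, 1)))
```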
Specific to the results presented in this paper is the realisation of phase separation with two states of the same atomic species using Feshbach resonances [115]. Though the hyperfine states of \({}^{87}\)Rb employed in this work would make it easy to identify the domains, no pair of its hyperfine states lies naturally deep within the immiscible regime. However, the combination of its \(|F=1,m_{\rm F}=+1\rangle\) and \(|F=2,m_{\rm F}=-1\rangle\) states has an interspecies Feshbach resonance that can be used to tune \(\Delta\) to 0.8 while keeping \(g_{11}\approx g_{22}\) [115; 116]. However, it must be noted that the use of a Feshbach resonance has the known disadvantage of inelastic atom losses [117], especially near resonance. An alternative way to experimentally realise the miscible-immiscible phase transition of our system is via spin-orbit coupling of neutral atoms. In Ref. [114], the authors coupled two Zeeman sublevels of the \(|F=1\rangle\) manifold of \({}^{87}\)Rb and were able to measure the phase separation of the dressed states across the critical point. The phase transition is achieved by ramping up the intensity of two slightly detuned lasers coupling the two hyperfine levels. This method has the advantage of reaching deeper into the immiscible regime without suffering atom losses, unlike the case of a Feshbach resonance. However, as noted in Refs. [1] and [2], the precise spatial arrangement of the dressed state could not be directly accessed. Instead, it was inferred from absorption imaging of the bare components. Consequently, while the increased separation and stability were advantageous, they necessitated a more complex detection process for determining the number of domains. Figure 6: **Defect annihilation.** — The KZM predicts the density of defects at time \(+\hat{t}\) immediately after the time evolution catches up with the ramp soon after crossing the critical point. These early defects can be too difficult to distinguish from quantum fluctuations to be reliably counted. Therefore, the actual counting is postponed until deep in the immiscible phase. In the meantime, their number can be reduced by their mutual annihilation (or, equivalently, shrinking of the minority domains) as shown in the two panels where two domains disappear between \(+\hat{t}\) and the end of the ramp. Figure 7: **Defect annihilation.** — Same as in Fig. 6 but for a slower quench, deeper in the adiabatic regime. Here a single domain shrinks and disappears between \(+\hat{t}\) and the end of the ramp. ## IX Conclusion This work unifies two themes in the theory of the quantum Kibble-Zurek mechanism (QKZM). One is the theory of the miscible-immiscible quantum phase transition in quasi-1D Bose-Einstein condensates developed in Refs. [1] and [2]. This mean-field quantum phase transition can be realized in binary condensate mixtures. The other is the QKZM with a bias that was investigated theoretically in Ref. [70] and whose classical version was experimentally verified in helium-3 [42]. The motivation for the study in Ref. [70] was to make the dynamics of the quantum phase transition adiabatic by applying a weak bias in order to speed up adiabatic quantum state preparation in a controlled way. Here we propose to test this effect in a robust mean-field quantum transition.
We verified the QKZM scalings for the order parameter when the ramp is crossing the critical point but, as the system is further ramped into the immiscible phase, some defects are annihilated, making the defect-free regime expand to faster non-adiabatic transitions. Phenomenological power laws were proposed to describe approximately the final order parameter and defect density. ###### Acknowledgements. Helpful discussions concerning experimental possibilities with Malcolm Boshier are gratefully acknowledged. This research was supported in part by the National Science Centre (NCN), Poland under project 2021/03/Y/ST2/00184 within the QuantERA II Programme that has received funding from the European Union Horizon 2020 research and innovation programme under Grant Agreement No 101017733 (F.B. and J.D.). The research was also supported by a grant from the Priority Research Area DigiWorld under the Strategic Programme Excellence Initiative at Jagiellonian University (J.D.).
2307.04967
Detecting Tidal Features using Self-Supervised Representation Learning
Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers. Their properties can answer questions about the progenitor galaxies involved in the interactions. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. The previous state of the art method achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination.
Alice Desmons, Sarah Brough, Francois Lanusse
2023-07-11T02:00:37Z
http://arxiv.org/abs/2307.04967v1
# Detecting Tidal Features using Self-Supervised Representation Learning ###### Abstract Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers. Their properties can answer questions about the progenitor galaxies involved in the interactions. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. The previous state-of-the-art method achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination. Machine Learning, Tidal Features ## 1 Introduction The currently accepted model of the Universe, known as the Lambda Cold Dark Matter (\(\Lambda\)CDM) Cosmological Model, postulates that galaxies evolve through a process which is referred to as the 'hierarchical merger model', wherein the growth of the universe's highest-mass galaxies is dominated by merging with lower-mass galaxies (e.g. Lacey and Cole, 1994; Cole et al., 2000; Robotham et al., 2014; Martin et al., 2018). During the merging process, the extreme gravitational forces involved cause stellar material to be pulled out from the galaxies, forming diffuse non-uniform regions of stars in the outskirts of the galaxies, known as tidal features. These tidal features contain information about the merging history of the galaxy, and can thus be used to study the galaxy evolution process. In order to draw accurate and statistically robust conclusions about this evolution process, we require a large sample of galaxies exhibiting tidal features. One thing that makes this difficult is the extremely low surface brightness of tidal features, which can easily reach \(\mu_{r}\geq\) 27 mag arcsec\({}^{-2}\). With the next generation of wide-field optical imaging surveys reaching new limiting depths, such as the Vera C Rubin Observatory's Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) which is predicted to reach \(\mu_{r}\sim\) 30.1 mag arcsec\({}^{-2}\) (Martin et al., 2022), assembling a statistically significant sample of galaxies with tidal features is becoming more feasible. One challenge associated with surveys like LSST, due to commence in 2024 and run for 10 years, is the amount of data predicted to be released, with LSST predicted to output over 500 petabytes of imaging data including billions of galaxies (Ivezic et al., 2019). Current tidal feature detection and classification is primarily achieved through visual identification (e.g. Tal et al., 2009; Sheen et al., 2012; Atkinson et al., 2013; Hood et al., 2018; Bilek et al., 2020; Martin et al., 2022), but this amount of data is virtually impossible for humans to classify visually, even using large community-based projects such as Galaxy Zoo (Lintott et al., 2008; Darg et al., 2010), and hence we are in urgent need of a tool that can automate this classification task and isolate galaxies with tidal features. With the promising recent results of machine learning in galaxy classification tasks (e.g.
Hocking et al., 2018; Diaz et al., 2019; Pearson et al., 2019; Snyder et al., 2019; Walmsley et al., 2019; Cavanagh and Bekki, 2020; Martin et al., 2020), we turn to machine learning to construct a model which can take galaxy images as input, convert them into representations (low-dimensional maps which preserve the important information in the image), and output a classification based on whether the galaxy possesses tidal features. We use a recently developed machine learning method that is essentially a middle ground between supervised and unsupervised learning, known as Self-Supervised machine Learning (SSL; He et al., 2019; Chen et al., 2020; Chen and He, 2020). Such models do not require labelled data for the training of the encoder, which learns to transform images into meaningful low-dimensional representations, but can perform classification when paired with a linear classifier and a small labelled dataset. Instead of labels, SSL models rely on augmentations to learn under which conditions the output low-dimensional representations should be invariant. These types of models have been successfully used for a variety of astronomical applications (e.g. Hayat et al., 2021; Stein et al., 2022; Slijepevic et al., 2022; Walmsley et al., 2022; Wei et al., 2022; Ciprijanovic et al., 2023; Huertas-Company et al., 2023; Slijepevic et al., 2023). Compared to supervised models, self-supervised models are also much easier to adapt to perform new tasks, and apply to datasets from different astronomical surveys (Ciprijanovic et al., 2023), making this kind of model perfect for our goal of applying the tool developed using HSC-SSP data to future LSST data. ## 2 Methods ### Sample Selection The dataset used for this work is sourced from the Ultradeep (UD) layer of the HSC-SSP Public Data Release 2 (PDR2; Aihara et al., 2019) for deep galaxy images. We use the Ultradeep field, which spans an area of \(3.5\) deg\({}^{2}\) and reaches a surface brightness depth of \(\mu_{r}\sim\) 28.0 mag arcsec\({}^{-2}\), faint enough to detect tidal features. We assemble an unlabelled dataset of \(\sim\)44,000 galaxies by parsing objects in the HSC-SSP PDR2 database using an SQL search and only selecting objects which have at least 3 exposures in each band and have \(i\)-band magnitudes \(15<i<20\) mag. We set a faint magnitude limit of 20 mag to ensure that objects are bright enough for tidal features to be visible. We access the HSC-SSP galaxy images using the 'Unagi' Python tool (Huang et al., 2019) which, given a galaxy's right ascension and declination, allows us to create multi-band 'HSC cutout' images of size 128 \(\times\) 128 pixels, or 21 \(\times\) 21 arcsecs, centred around each galaxy. Each cutout is downloaded in five (\(g,~{}r,~{}i,~{}z,~{}y\)) bands. For the training of the linear classifier we require a small labelled dataset of galaxies with and without tidal features. We use the HSC-SSP UD PDR2 dataset assembled by Desmons et al. (2023), composed of 211 galaxies with tidal features and 641 galaxies without tidal features. These galaxies were selected from a volume-limited sample from the cross-over between the Galaxy and Mass Assembly survey (Driver et al., 2011) and HSC-SSP, with spectroscopic redshift limits \(0.04\leq z\leq 0.2\) and stellar mass limits \(9.50\leq\log_{10}(M_{*}/\mbox{M}_{\odot})\leq 11.00\), and have \(i\)-band magnitudes in the range \(12.8<i<21.6\) mag.
To increase the size of our tidal feature training sample, we classified additional galaxies from our HSC-SSP PDR2 unlabelled dataset of \(\sim\) 44,000 objects, according to the classification scheme outlined in Desmons et al. (2023). Our final labelled sample contains 760 galaxies, 380 with tidal features, labelled 1, and 380 without, labelled 0. We split our labelled dataset into training, validation, and testing datasets composed of 600, 60, and 100 galaxies, respectively. ### Image Pre-processing and Augmentations Before the images are augmented and fed through the model, we apply a pre-processing function to normalise the images. The augmentations we use for this project are: * **Orientation:** We randomly flip the image across each axis (x and y) with 50% probability. * **Gaussian Noise:** We sample a scalar from \(\mathcal{U}\)(1,3) and multiply it by the median absolute deviation of each channel (calculated over 1000 training examples) to get a per-channel noise \(\sigma_{c}\). We then introduce Gaussian noise sampled from \(\sigma_{c}~{}\times~{}\mathcal{N}\)(0,1) for each channel. * **Jitter and Crop:** For HSC-SSP images we crop the 128 \(\times\) 128 pixel image to the central 109 \(\times\) 109 pixels before randomly cropping the image to 96 \(\times\) 96 pixels. Random cropping means the image center is translated, or 'jittered', along each respective axis by \(i\), \(j\) pixels where \(i\), \(j~{}\sim~{}\mathcal{U}\)(-13,13) before cropping to the central 96 \(\times\) 96 pixels. ### Model Architecture The model we utilise to perform classification of tidal feature candidates consists of two components: a self-supervised model used for pre-training, and a 'fine-tuned' model used for classification. All models described below are built using the TensorFlow framework (Abadi et al., 2016). #### 2.3.1 The Self-Supervised Architecture For our task of classifying tidal feature candidates we use a type of self-supervised learning known as Nearest Neighbour Contrastive Learning of visual Representations (NNCLR; Dwibedi et al., 2021). We closely follow Dwibedi et al. (2021) in designing the architecture and training process for our model. The model was compiled using the Adam optimiser (Kingma and Ba, 2015) and trained for 25 epochs on our unlabelled dataset of \(\sim\) 44,000 HSC-SSP PDR2 galaxies. #### 2.3.2 The Fine-tuned Architecture The fine-tuned model is a simple linear classifier which takes galaxy images as input and converts them to representations using the pre-trained self-supervised encoder. These representations are passed through a 'Dense' layer with a sigmoid activation, which outputs a single number between 0 and 1. This fine-tuned model was compiled using the Adam optimiser (Kingma and Ba, 2015) and a binary cross entropy loss. It was trained for 50 epochs using the labelled training set of 600 HSC-SSP galaxies. Training was completed within \(\sim\) 1 minute using a single GPU. #### 2.3.3 The Supervised Architecture To draw conclusions about the suitability of self-supervised models for the detection and classification of tidal features, we compare our results with those of a fully supervised model. We do not construct this model from scratch, but instead use the published model designed by Pearson et al. (2019) to classify merging galaxies. The output layer was changed from two neurons with softmax activation to a single neuron with sigmoid activation.
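For reference, the fine-tuned model of Section 2.3.2 reduces to a frozen pre-trained encoder followed by a single sigmoid output unit. The snippet below is a minimal Keras-style sketch of that setup, not the released implementation; the `encoder` object, the input shape, and the variable names are assumptions made for illustration.

```python
import tensorflow as tf

# 'encoder' is assumed to be the pre-trained, frozen NNCLR encoder that maps
# (96, 96, 5) images to 128-dimensional representations.
encoder.trainable = False  # only the linear head is trained

fine_tuned = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 5)),
    encoder,                                         # image -> representation
    tf.keras.layers.Dense(1, activation="sigmoid"),  # representation -> score in [0, 1]
])

fine_tuned.compile(optimizer=tf.keras.optimizers.Adam(),
                   loss="binary_crossentropy",
                   metrics=[tf.keras.metrics.AUC(curve="ROC")])

# x_train: (600, 96, 96, 5) images; y_train: 1 = tidal features, 0 = no tidal features
fine_tuned.fit(x_train, y_train, epochs=50, validation_data=(x_val, y_val))
```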
The supervised network was compiled using the Adam optimiser (Kingma and Ba, 2015) with the default learning rate, and its loss was determined using binary cross entropy. We additionally changed the input image dimension from 64 \(\times\) 64 pixels with three colour channels to 96 \(\times\) 96 pixels with five colour channels to ensure extended tidal features remain visible. We train this fully supervised network from scratch using the labelled training set of 600 HSC-SSP galaxies. ### Model Evaluation To evaluate our model performance, we use the true positive rate (also known as recall or completeness) and false positive rate (also known as fall-out or contamination). The true positive rate (TPR) ranges from 0 to 1 and is defined as the fraction of galaxies correctly classified by the model as having tidal features with respect to the total number of galaxies with tidal features. The false positive rate (FPR) also ranges from 0 to 1 and is defined as the fraction of galaxies incorrectly classified by the model as having tidal features with respect to the total number of galaxies without tidal features. In addition to using the TPR for a given FPR to evaluate our model, we also use the area under the receiver operating characteristic (ROC) curve, or ROC AUC, to evaluate performance. ## 3 Results ### Self-Supervised vs. Supervised Performance Figure 1 illustrates the testing set ROC AUC for a supervised and self-supervised network as a function of the number of labels used in training for our HSC-SSP dataset. Each point represents the ROC AUC averaged over ten runs using the same training, validation, and testing sets for each run. We average the ROC AUC over the 10 runs and remove outliers further than \(3\sigma\) from the mean. Our SSL model maintains high performance across all numbers of labels used for training, having ROC AUC \(=\) 0.911 \(\pm\) 0.002 when training on the maximum number of labels and only dropping to ROC AUC \(=\) 0.89 \(\pm\) 0.01 when using only 50 labels for training. The supervised model also maintains its performance regardless of label number, but only reaches ROC AUC \(=\) 0.867 \(\pm\) 0.004 when training on the maximum number and ROC AUC \(=\) 0.83 \(\pm\) 0.01 when using 50 labels for training. This figure not only shows that an SSL model can be used for the detection of tidal features with good performance, but also that it performs consistently better than the supervised network regardless of the number of training labels. We also calculated the average TPR reached by the self-supervised model on the testing set for a given FPR \(=\) 0.2, averaging over 10 runs and removing outliers. When training using 600 labels, the model reaches TPR \(=\) 0.94 \(\pm\) 0.01, and this only drops to TPR \(=\) 0.90 \(\pm\) 0.01 when using a mere 50 labels for training. ### Detection of Tidal Features One advantage of self-supervised models over supervised models is the ability to use just one labelled example to find examples of similar galaxies from the full dataset. By using just one image from our labelled tidal feature dataset as a query image, and the encoded 128-dimensional representations from the self-supervised encoder, we can perform a similarity search that assigns high similarity scores to images which have similar representations to the query image. This is demonstrated in Figure 2 where we select a random galaxy with tidal features from our training sample and perform a similarity search with the 44,000 unlabelled HSC-SSP galaxies.
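The similarity search itself involves only the encoder outputs. A minimal sketch is shown below, assuming cosine similarity between L2-normalised 128-dimensional representations; the choice of metric and the array names are assumptions for illustration, not necessarily those of the released code.

```python
import numpy as np

# reps      : (N, 128) encoder representations of the ~44,000 unlabelled galaxies
# query_rep : (128,) representation of the single labelled query galaxy
def similarity_search(reps, query_rep, top_k=24):
    reps_n = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    query_n = query_rep / np.linalg.norm(query_rep)
    scores = reps_n @ query_n                  # cosine similarity for every galaxy
    order = np.argsort(scores)[::-1][:top_k]   # indices of the top_k closest matches
    return order, scores[order]
```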
In Figure 2 the query image is shown on the left alongside the 24 galaxies which received the highest similarity scores. This figure shows the power of self-supervised learning, where using only a single labelled example, we can find a multitude of other tidal feature candidates. Figure 1: Average ROC AUC as a function of the number of HSC-SSP labels used for training for a supervised (blue) and self-supervised (red) model. Each point is an average of ten runs. We can also visualise how the model organises the galaxy images in representation space, by using Uniform Manifold Approximation and Projection (UMAP; McInnes et al., 2018) which reduces the encoded representations to an easier-to-visualise two-dimensional projection. Figure 3 illustrates this 2D projection, created by binning the space into \(100~{}\times~{}100\) cells and randomly selecting a sample from that cell to plot in the corresponding cell location. We also examine whether the scores given to galaxies by the linear classifier are related to the galaxies' positions in the UMAP projection, by colouring the UMAP plot according to the scores given to each galaxy by the linear classifier, shown in the right panel of Figure 3. We find that the majority of galaxies which were assigned a high classifier score, indicating a high likelihood of tidal features, are located on the left side of the UMAP projection plot. This reinforces the idea that the encoded representations contain meaningful information about tidal features. Figure 2: Results from a similarity search using a random galaxy with tidal features as a query image, displayed on the left, alongside the top 24 galaxies with the highest similarity scores for each similarity search on the right. The similarity score is displayed in the top left corner for each image. The red outlines indicate images containing galaxies which would be visually classified as hosting tidal features, regardless of whether this galaxy is the central object in the image. Figure 3: Left: 2D UMAP projection of the self-supervised representations. Made by binning the space into \(100~{}\times~{}100\) cells and randomly selecting a sample from that cell to plot in the corresponding cell location. Right: The same 2D UMAP projection without binning, coloured according to the scores assigned to each galaxy by the linear classifier. ## 4 Discussion and Conclusions In this work, we have shown that SSL models composed of a self-supervised encoder and linear classifier can not only be used to detect galaxies with tidal features, but can do so reaching both high completeness (TPR = 0.94 \(\pm\) 0.01) for low contamination (FPR = 0.20) and high area under the ROC curve (ROC AUC = 0.91 \(\pm\) 0.002). This means that such models can be used to isolate the majority of galaxies with tidal features from a large sample of galaxies, thus drastically reducing the amount of visual classification needed to assemble a large sample of tidal features. One major advantage of this model over other automated classification methods is that this level of performance can be reached using only 600 labelled training examples, and only drops mildly when using a mere 50 labels for training, maintaining ROC AUC = 0.89 \(\pm\) 0.01 and TPR = 0.90 \(\pm\) 0.01 for FPR = 0.2. This makes SSL models easy to re-train on data from different surveys with minimal visual classification needed. Following Stein et al. (2021), we emphasise the usefulness of being able to perform a similarity search using just the self-supervised encoder and one example of a galaxy with tidal features to find other galaxies with tidal features from a dataset of tens of thousands of galaxies. The level of comparison that can be carried out with respect to the results obtained here and other works is limited due to the scarcity of similar works. There is only one study focusing on the detection of tidal features using machine learning, namely the work of Walmsley et al. (2019), who used a supervised network to identify galaxies with tidal features from the Wide layer of the Canada-France-Hawaii Telescope Legacy Survey (Gwyn, 2012). Walmsley et al. (2019) found that their method outperformed other automated methods of tidal feature detection, reaching 76% completeness (or TPR) and 22% contamination (or FPR). Our SSL model, trained on 600 galaxies, performs considerably better, reaching a completeness of 96% for the same contamination percentage. Most importantly, our model consistently outperforms a fully supervised model trained on the same data, reaching ROC AUC = 0.911 \(\pm\) 0.002 while the fully supervised model only reaches a maximum ROC AUC of 0.864 \(\pm\) 0.004. The code used to create, train, validate, and test the SSL model, along with instructions on loading and using the pre-trained model as well as training the model using different data, can be downloaded from GitHub1. Footnote 1: [https://github.com/LSSTISSC/Tidalsaurus](https://github.com/LSSTISSC/Tidalsaurus)